[ovirt-devel] Re: Testing image transfer and backup with OST environment

2020-11-07 Thread Yedidyah Bar David
On Fri, Nov 6, 2020 at 11:10 AM Marcin Sobczyk  wrote:
>
>
>
> On 11/5/20 2:03 PM, Yedidyah Bar David wrote:
> > On Thu, Nov 5, 2020 at 2:40 PM Marcin Sobczyk  wrote:
> >>
> >>
> >> On 11/5/20 1:22 PM, Vojtech Juranek wrote:
> > IMO OST should be made easy to interact with from your main development
> > machine.
>  TBH I didn't see much interest in running OST on developers' machines.
> >>> as it's little bit complex to setup?
> >> Definitely, that's why the playbook and 'setup_for_ost.sh' was created.
> >> I hope it will be the long term solution for this problem.
> >>
> >>> Making it more easy maybe would increase
> >>> number of people contributing to OST ...
> >> But that's a chicken and egg problem - who else is going to contribute
> >> to OST if not us?
> >> If we want the setup to be easier, then let's work on it.
> > I agree, but this requires time.
> >
> > Most of the work done on OST so far was for CI. Using it by developers
> > was a side-interest.
> >
>  People are mostly using manual OST runs to verify things and that is what
>  most of the efforts focus on. It's not that I wouldn't like OST to be 
>  more
>  developer-friendly, I definitely would, but we need more manpower
>  and interest for that to happen.
> 
> >> I noticed that many of you run OST in a VM ending up with three layers
> >> of VMs.
> >> I know it works, but I got multiple reports of assertions' timeouts and
> >> TBH I just don't
> >> see this as a viable solution to work with OST - you need a bare metal
> >> for that.
> > Why?
> >
> > After all, we also work on a virtualization product/project. If it's
> > not good enough for ourselves, how do we expect others to use it? :-)
>  I'm really cool with the engine and the hosts being VMs, but deploying
>  engine and the hosts as VMs nested in other VM is what I think is
>  unreasonable.
> >>> I tried this approach in past two days and works fine for me (except the 
> >>> fast
> >>> it's slow)
> > (One of the best freudian slips I noticed recently)
> >
>  Maybe I'm wrong here, but I don't think our customers run whole oVirt
>  clusters
>  inside VMs. There's just too much overhead with all that layers of 
>  nesting
>  and the performance sucks.
> 
> > Also, using bare-metal isn't always that easy/comfortable either, even
> > if you have the hardware.
>  I'm very happy with my servers. What makes working with bm
>  hard/uncomfortable?
> >>> IMHO main issue is lack of HW. I really don't want to run it directly on 
> >>> my
> >>> dev laptop (without running it inside VM). If I ssh/mount FS/tunnel ports 
> >>> to/
> >>> from VM or some bare metal server really doesn't matter (assuming 
> >>> reasonable
> > Lack of HW is definitely an issue, but not the only one.
> >
> > When I did use BM for OST at home, my setup was something like this:
> >
> > (home router) <-wifi or cable-> (laptop) <-cable-> (switch) <-cable-> BM
> >
> > I configured on my laptop PXE for reinstallations. Worked quite well even
> > if not perfect.
> >
> > I worked quite hard on _not_ using my laptop as a router. I ran on it
> > squid + squid-rpm-cache, and I think I never managed to make OST work
> > (through a proxy). Eventually I gave up and configured the laptop as a
> > router, and this worked.
> >
> > Before that, I also tried disabling dhcp on the router and running dhcp+dns
> > PXE on a raspberry pi on the home network, but this was problematic in
> > other ways (I think the rpi was simply not very stable - had to power-
> > cycle it every few months).
> >
> > I can't use the router as PXE.
> Hmmm maybe we have a different understanding on "bare metal usage for OST".
> What I meant is you should use a physical server, install lago et al on
> it and use it to run OST as usual.

Yes, that's what I meant too.

> I didn't mean using a separate bare metal for each of the engine and the
> hosts.
> That way you don't need to have your own PXE and router - it's taken care
> of by lago and prebuilt images of VMs.

I used these BM machines in the past for various uses (engine/host,
el7/8/fc, etc.), so at some point I decided to invest in configuring PXE.

If OST indeed covers my entire spectrum of needs, it's probably enough
to install from a USB stick.

> >
> >> I agree. I have the privilege of having separate servers to run OST.
> >> Even though that would work, I can't imagine working with OST
> >> on a daily basis on my laptop.
> > I wonder why. Not saying I disagree.
> OST eats a lot of resources and makes your machine less responsive.
> Given that, and the lengthy debugging cycles, with a separate server
> I'm able to work on 2-3 things simultaneously.
>
> >
> > I find the biggest load on my laptop to be from browsers running
> > web applications. I think this was ok also with my previous laptop,
> > so assume I can restrict the browsers (with cgroups or whatever) to
> > 

[ovirt-devel] Re: Testing image transfer and backup with OST environment

2020-11-06 Thread Marcin Sobczyk



On 11/5/20 2:03 PM, Yedidyah Bar David wrote:

On Thu, Nov 5, 2020 at 2:40 PM Marcin Sobczyk  wrote:



On 11/5/20 1:22 PM, Vojtech Juranek wrote:

IMO OST should be made easy to interact with from your main development
machine.

TBH I didn't see much interest in running OST on developers' machines.

as it's little bit complex to setup?

Definitely, that's why the playbook and 'setup_for_ost.sh' was created.
I hope it will be the long term solution for this problem.


Making it more easy maybe would increase
number of people contributing to OST ...

But that's a chicken and egg problem - who else is going to contribute
to OST if not us?
If we want the setup to be easier, then let's work on it.

I agree, but this requires time.

Most of the work done on OST so far was for CI. Using it by developers
was a side-interest.


People are mostly using manual OST runs to verify things and that is what
most of the efforts focus on. It's not that I wouldn't like OST to be more
developer-friendly, I definitely would, but we need more manpower
and interest for that to happen.


I noticed that many of you run OST in a VM ending up with three layers
of VMs.
I know it works, but I got multiple reports of assertions' timeouts and
TBH I just don't
see this as a viable solution to work with OST - you need a bare metal
for that.

Why?

After all, we also work on a virtualization product/project. If it's
not good enough for ourselves, how do we expect others to use it? :-)

I'm really cool with the engine and the hosts being VMs, but deploying
engine and the hosts as VMs nested in other VM is what I think is
unreasonable.

I tried this approach in past two days and works fine for me (except the fast
it's slow)

(One of the best freudian slips I noticed recently)


Maybe I'm wrong here, but I don't think our customers run whole oVirt
clusters
inside VMs. There's just too much overhead with all that layers of nesting
and the performance sucks.


Also, using bare-metal isn't always that easy/comfortable either, even
if you have the hardware.

I'm very happy with my servers. What makes working with bm
hard/uncomfortable?

IMHO main issue is lack of HW. I really don't want to run it directly on my
dev laptop (without running it inside VM). If I ssh/mount FS/tunnel ports to/
from VM or some bare metal server really doesn't matter (assuming reasonable

Lack of HW is definitely an issue, but not the only one.

When I did use BM for OST at home, my setup was something like this:

(home router) <-wifi or cable-> (laptop) <-cable-> (switch) <-cable-> BM

I configured on my laptop PXE for reinstallations. Worked quite well even
if not perfect.

I worked quite hard on _not_ using my laptop as a router. I ran on it
squid + squid-rpm-cache, and I think I never managed to make OST work
(through a proxy). Eventually I gave up and configured the laptop as a
router, and this worked.

Before that, I also tried disabling dhcp on the router and running dhcp+dns
PXE on a raspberry pi on the home network, but this was problematic in
other ways (I think the rpi was simply not very stable - had to power-
cycle it every few months).

I can't use the router as PXE.

Hmmm, maybe we have a different understanding of "bare metal usage for OST".
What I meant is that you should use a physical server, install lago et al. on
it, and use it to run OST as usual.
I didn't mean using a separate bare metal for each of the engine and the 
hosts.

That way you don't need to have your own PXE and router - it's taken care
of by lago and prebuilt images of VMs.



I agree. I have the privilege of having separate servers to run OST.
Even though that would work, I can't imagine working with OST
on a daily basis on my laptop.

I wonder why. Not saying I disagree.

OST eats a lot of resources and makes your machine less responsive.
Given that, and the lengthy debugging cycles, with a separate server
I'm able to work on 2-3 things simultaneously.



I find the biggest load on my laptop to be from browsers running
web applications. I think this was ok also with my previous laptop,
so assume I can restrict the browsers (with cgroups or whatever) to
only use a slice (and I usually do not really care about their
performance). Just didn't spend the time on trying this yet.

(Previous laptop was from 2016, 16GB RAM. Current from 2019, 32GB).


That also kinda proves my point that people are not being interested
in running OST on their machines - they don't have machines they could use.
I see three solutions to this:
- people start pushing managers to have their own servers

Not very likely. We are supposed to use VMs for that.


- we will have a machine-renting solution based on beaker
   (with nice, automatic provisioning for OST etc.), so we can
   work on bare metals

+1 from me.


- we focus on the CI and live with the "launch and prey" philosophy :)

If it's fast enough and fully automatic (e.g. using something like Nir's
script, but perhaps with some more features), also ok for 

[ovirt-devel] Re: Testing image transfer and backup with OST environment

2020-11-05 Thread Yedidyah Bar David
On Thu, Nov 5, 2020 at 2:40 PM Marcin Sobczyk  wrote:
>
>
>
> On 11/5/20 1:22 PM, Vojtech Juranek wrote:
> >>> IMO OST should be made easy to interact with from your main development
> >>> machine.
> >> TBH I didn't see much interest in running OST on developers' machines.
> > as it's little bit complex to setup?
> Definitely, that's why the playbook and 'setup_for_ost.sh' was created.
> I hope it will be the long term solution for this problem.
>
> > Making it more easy maybe would increase
> > number of people contributing to OST ...
> But that's a chicken and egg problem - who else is going to contribute
> to OST if not us?
> If we want the setup to be easier, then let's work on it.

I agree, but this requires time.

Most of the work done on OST so far was for CI. Use by developers
was a side interest.

>
> >
> >> People are mostly using manual OST runs to verify things and that is what
> >> most of the efforts focus on. It's not that I wouldn't like OST to be more
> >> developer-friendly, I definitely would, but we need more manpower
> >> and interest for that to happen.
> >>
>  I noticed that many of you run OST in a VM ending up with three layers
>  of VMs.
>  I know it works, but I got multiple reports of assertions' timeouts and
>  TBH I just don't
>  see this as a viable solution to work with OST - you need a bare metal
>  for that.
> >>> Why?
> >>>
> >>> After all, we also work on a virtualization product/project. If it's
> >>> not good enough for ourselves, how do we expect others to use it? :-)
> >> I'm really cool with the engine and the hosts being VMs, but deploying
> >> engine and the hosts as VMs nested in other VM is what I think is
> >> unreasonable.
> > I tried this approach in past two days and works fine for me (except the 
> > fast
> > it's slow)

(One of the best freudian slips I noticed recently)

> >
> >> Maybe I'm wrong here, but I don't think our customers run whole oVirt
> >> clusters
> >> inside VMs. There's just too much overhead with all that layers of nesting
> >> and the performance sucks.
> >>
> >>> Also, using bare-metal isn't always that easy/comfortable either, even
> >>> if you have the hardware.
> >> I'm very happy with my servers. What makes working with bm
> >> hard/uncomfortable?
> > IMHO main issue is lack of HW. I really don't want to run it directly on my
> > dev laptop (without running it inside VM). If I ssh/mount FS/tunnel ports 
> > to/
> > from VM or some bare metal server really doesn't matter (assuming reasonable

Lack of HW is definitely an issue, but not the only one.

When I did use BM for OST at home, my setup was something like this:

(home router) <-wifi or cable-> (laptop) <-cable-> (switch) <-cable-> BM

I configured PXE on my laptop for reinstallations. It worked quite well,
even if not perfectly.

I worked quite hard on _not_ using my laptop as a router. I ran
squid + squid-rpm-cache on it, and I think I never managed to make OST work
through a proxy. Eventually I gave up, configured the laptop as a
router, and this worked.

Before that, I also tried disabling dhcp on the router and running dhcp+dns
PXE on a raspberry pi on the home network, but this was problematic in
other ways (I think the rpi was simply not very stable - had to power-
cycle it every few months).

I can't use the home router as a PXE server.

> I agree. I have the privilege of having separate servers to run OST.
> Even though that would work, I can't imagine working with OST
> on a daily basis on my laptop.

I wonder why. Not saying I disagree.

I find the biggest load on my laptop to be from browsers running
web applications. I think this was OK also with my previous laptop,
so I assume I can restrict the browsers (with cgroups or whatever) to
only use a slice (I usually do not really care about their
performance). I just didn't spend the time on trying this yet.

(Previous laptop was from 2016, 16GB RAM. Current from 2019, 32GB).
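
(For the record, a minimal sketch of that cgroups idea, assuming a
systemd-based desktop with cgroups v2 - the limits below are arbitrary
examples, not anything OST-specific:

$ systemd-run --user --scope -p MemoryMax=4G -p CPUQuota=200% firefox

This starts the browser in its own transient scope, so it cannot starve
an OST run of memory or CPU.)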

>
> That also kinda proves my point that people are not being interested
> in running OST on their machines - they don't have machines they could use.
> I see three solutions to this:
> - people start pushing managers to have their own servers

Not very likely. We are supposed to use VMs for that.

> - we will have a machine-renting solution based on beaker
>   (with nice, automatic provisioning for OST etc.), so we can
>   work on bare metals

+1 from me.

> - we focus on the CI and live with the "launch and prey" philosophy :)

If it's fast enough and fully automatic (e.g. using something like Nir's
script, but perhaps with some more features), that's also OK for me.

Right now, this is what I use mostly. For things that are much faster
to verify manually, I use manual (non-OST) setup on VMs.

>
> > connection speed to bare metal server)
> >
> >> I can think of reprovisioning, but that is not needed for OST usage.
> >>
> >>> CI also uses VMs for this, IIUC. Or did we move there to containers?
> >>> Perhaps we should invest in making 

[ovirt-devel] Re: Testing image transfer and backup with OST environment

2020-11-05 Thread Marcin Sobczyk



On 11/5/20 1:22 PM, Vojtech Juranek wrote:

IMO OST should be made easy to interact with from your main development
machine.

TBH I didn't see much interest in running OST on developers' machines.

as it's little bit complex to setup?

Definitely, that's why the playbook and 'setup_for_ost.sh' was created.
I hope it will be the long term solution for this problem.


Making it more easy maybe would increase
number of people contributing to OST ...
But that's a chicken and egg problem - who else is going to contribute 
to OST if not us?

If we want the setup to be easier, then let's work on it.




People are mostly using manual OST runs to verify things and that is what
most of the efforts focus on. It's not that I wouldn't like OST to be more
developer-friendly, I definitely would, but we need more manpower
and interest for that to happen.


I noticed that many of you run OST in a VM ending up with three layers
of VMs.
I know it works, but I got multiple reports of assertions' timeouts and
TBH I just don't
see this as a viable solution to work with OST - you need a bare metal
for that.

Why?

After all, we also work on a virtualization product/project. If it's
not good enough for ourselves, how do we expect others to use it? :-)

I'm really cool with the engine and the hosts being VMs, but deploying
engine and the hosts as VMs nested in other VM is what I think is
unreasonable.

I tried this approach in past two days and works fine for me (except the fast
it's slow)


Maybe I'm wrong here, but I don't think our customers run whole oVirt
clusters
inside VMs. There's just too much overhead with all that layers of nesting
and the performance sucks.


Also, using bare-metal isn't always that easy/comfortable either, even
if you have the hardware.

I'm very happy with my servers. What makes working with bm
hard/uncomfortable?

IMHO main issue is lack of HW. I really don't want to run it directly on my
dev laptop (without running it inside VM). If I ssh/mount FS/tunnel ports to/
from VM or some bare metal server really doesn't matter (assuming reasonable

I agree. I have the privilege of having separate servers to run OST.
Even though that would work, I can't imagine working with OST
on a daily basis on my laptop.

That also kinda proves my point that people are not interested
in running OST on their machines - they don't have machines they could use.
I see three solutions to this:
- people start pushing managers to have their own servers
- we will have a machine-renting solution based on beaker
 (with nice, automatic provisioning for OST etc.), so we can
 work on bare metals
- we focus on the CI and live with the "launch and pray" philosophy :)


connection speed to bare metal server)


I can think of reprovisioning, but that is not needed for OST usage.


CI also uses VMs for this, IIUC. Or did we move there to containers?
Perhaps we should invest in making this work well inside a container.

CI doesn't use VMs - it uses a mix of containers and bare metals.
The solution for containers can't handle el8 and that's why we're
stuck with running OST on el7 mostly (apart from the aforementioned
bare metals, which use el8).

There is a 'run-ost-container.sh' script in the project. I think some people
had luck using it, but I never even tried. Again, my personal opinion, as
much as I find containers useful and convenient in different situations,
this is not one of them - you should be using bare metal.

The "backend for OST" is a subject for a whole, new discussion.
My opinion here is that we should be using oVirt as backend for OST
(as in running oVirt cluster as VMs in oVirt). I'm a big fan of the
dogfooding
concept. This of course creates a set of new problems like "how can
developers
work with this", "where do we get the hosting oVirt cluster from" etc.
Whooole, new discussion :)

Regards, Marcin


On my bare metal server OST basic run takes 30 mins to complete. This is
something one
can work with, but we can do even better.

Thank you for your input and I hope that we can have more people
involved in OST
on a regular basis and not once-per-year hackathons. This is a complex
project, but it's
really useful.

+1!

Thanks and best regards,


Nice.

Thanks and best regards,

[1] https://github.com/lago-project/lago/blob/7bf288ad53da3f1b86c08b3283ee9c5118e7605e/lago/providers/libvirt/network.py#L162
[2] https://github.com/oVirt/ovirt-system-tests/blob/6d5c2a0f9fb3c05afc85471260065786b5fdc729/ost_utils/ost_utils/pytest/fixtures/engine.py#L105



[ovirt-devel] Re: Testing image transfer and backup with OST environment

2020-11-05 Thread Vojtech Juranek
> > IMO OST should be made easy to interact with from your main development
> > machine.
> TBH I didn't see much interest in running OST on developers' machines.

as it's a little bit complex to set up? Making it easier might increase the
number of people contributing to OST ...

> People are mostly using manual OST runs to verify things and that is what
> most of the efforts focus on. It's not that I wouldn't like OST to be more
> developer-friendly, I definitely would, but we need more manpower
> and interest for that to happen.
> 

> >> I noticed that many of you run OST in a VM ending up with three layers
> >> of VMs.
> >> I know it works, but I got multiple reports of assertions' timeouts and
> >> TBH I just don't
> >> see this as a viable solution to work with OST - you need a bare metal
> >> for that.
> > 
> > Why?
> > 
> > After all, we also work on a virtualization product/project. If it's
> > not good enough for ourselves, how do we expect others to use it? :-)
> 
> I'm really cool with the engine and the hosts being VMs, but deploying
> engine and the hosts as VMs nested in other VM is what I think is
> unreasonable.

I tried this approach in past two days and works fine for me (except the fast 
it's slow)

> Maybe I'm wrong here, but I don't think our customers run whole oVirt
> clusters
> inside VMs. There's just too much overhead with all that layers of nesting
> and the performance sucks.
> 
> > Also, using bare-metal isn't always that easy/comfortable either, even
> > if you have the hardware.
> 
> I'm very happy with my servers. What makes working with bm
> hard/uncomfortable?

IMHO the main issue is lack of HW. I really don't want to run it directly on my
dev laptop (without running it inside a VM). Whether I ssh/mount FS/tunnel ports
to/from a VM or some bare metal server really doesn't matter (assuming a
reasonable connection speed to the bare metal server)

> I can think of reprovisioning, but that is not needed for OST usage.
> 
> > CI also uses VMs for this, IIUC. Or did we move there to containers?
> > Perhaps we should invest in making this work well inside a container.
> 
> CI doesn't use VMs - it uses a mix of containers and bare metals.
> The solution for containers can't handle el8 and that's why we're
> stuck with running OST on el7 mostly (apart from the aforementioned
> bare metals, which use el8).
> 
> There is a 'run-ost-container.sh' script in the project. I think some people
> had luck using it, but I never even tried. Again, my personal opinion, as
> much as I find containers useful and convenient in different situations,
> this is not one of them - you should be using bare metal.
> 
> The "backend for OST" is a subject for a whole, new discussion.
> My opinion here is that we should be using oVirt as backend for OST
> (as in running oVirt cluster as VMs in oVirt). I'm a big fan of the
> dogfooding
> concept. This of course creates a set of new problems like "how can
> developers
> work with this", "where do we get the hosting oVirt cluster from" etc.
> Whooole, new discussion :)
> 
> Regards, Marcin
> 
> >> On my bare metal server OST basic run takes 30 mins to complete. This is
> >> something one
> >> can work with, but we can do even better.
> >> 
> >> Thank you for your input and I hope that we can have more people
> >> involved in OST
> >> on a regular basis and not once-per-year hackathons. This is a complex
> >> project, but it's
> >> really useful.
> > 
> > +1!
> > 
> > Thanks and best regards,
> > 
> >>> Nice.
> >>> 
> >>> Thanks and best regards,
> >> 
> >> [1] https://github.com/lago-project/lago/blob/7bf288ad53da3f1b86c08b3283ee9c5118e7605e/lago/providers/libvirt/network.py#L162
> >> [2] https://github.com/oVirt/ovirt-system-tests/blob/6d5c2a0f9fb3c05afc85471260065786b5fdc729/ost_utils/ost_utils/pytest/fixtures/engine.py#L105





[ovirt-devel] Re: Testing image transfer and backup with OST environment

2020-11-05 Thread Yedidyah Bar David
On Thu, Nov 5, 2020 at 12:39 PM Marcin Sobczyk  wrote:
>
>
>
> On 11/5/20 9:09 AM, Yedidyah Bar David wrote:
> > On Wed, Nov 4, 2020 at 9:49 PM Nir Soffer  wrote:
> >> I want to share useful info from the OST hackathon we had this week.
> >>
> >> Image transfer must work with real hostnames to allow server
> >> certificate verification.
> >> Inside the OST environment, engine and hosts names are resolvable, but
> >> on the host
> >> (or vm) running OST, the names are not available.
> Do we really need this? Can't we execute those image transfers on the
> host VMs instead?
>
> >>
> >> This can be fixed by adding the engine and hosts to /etc/hosts like this:
> >>
> >> $ cat /etc/hosts
> >> 127.0.0.1   localhost localhost.localdomain localhost4 
> >> localhost4.localdomain4
> >> ::1 localhost localhost.localdomain localhost6 
> >> localhost6.localdomain6
> >>
> >> 192.168.200.2 engine
> >> 192.168.200.3 lago-basic-suite-master-host-0
> >> 192.168.200.4 lago-basic-suite-master-host-1
> Modifying '/etc/hosts' requires root privileges - it will work in mock,
> but nowhere else and IMO is a bad idea.
>
> > Are these addresses guaranteed to be static?
> >
> > Where are they defined?
> No, they're not and I think it's a good thing - if we end up assigning
> them statically
> sooner or later we will stumble upon "this always worked because we
> always used the
> same ip addresses" bugs.
>
> Libvirt runs the 'dnsmasq' that the VMs use. The XML definition for DNS is
> done by lago [1].
>
> >
> >> It would be nice if this was automated by OST. You can get the details using:
> >>
> >> $ cd src/ovirt-system-tests/deployment-xxx
> >> $ lago status
> > It would have been even nicer if it was possible/easy to have this working
> > dynamically without user intervention.
> >
> > Thought about and searched for ways to achieve this, failed to find 
> > something
> > simple.
> >
> > Closest options I found, in case someone feels like playing with this:
> >
> > 1. Use HOSTALIASES. 'man 7 hostname' for details, or e.g.:
> >
> > https://blog.tremily.us/posts/HOSTALIASES/
> >
> > With this, if indeed the addresses are static, but you do not want to have
> > them hardcoded in /etc (say, because you want different ones per different
> > runs/needs/whatever), you can add them hardcoded there with some longer
> > name, and have a process-specific HOSTALIASES file mapping e.g. 'engine'
> > to the engine of this specific run.
> 'HOSTALIASES' is really awesome, but it doesn't always work.
> I found that for machines connected to VPNs, the overridden DNS
> servers will take priority in name resolution,
> and 'HOSTALIASES' definitions won't have any effect.

Really? Weird. I had never tried HOSTALIASES before (just tested it
before sending the above and it worked, also with the VPN up), but I agree
it's not the way to go, especially since, as you say, we
should not rely on the static addresses.

>
> >
> > 2. https://github.com/fritzw/ld-preload-open
> >
> > With this, you can have a process-specific /etc/resolv.conf, pointing
> > this specific process to the internal nameserver inside lago/OST.
> > This requires building this small C library. Didn't try it or check
> > its code. Also can't find it pre-built in copr (or anywhere).
> >
> > (
> > Along the way, if you like such tricks, found this:
> >
> > https://github.com/gaul/awesome-ld-preload
> > )
> This sounds really complex.

I do not think it's that complex, but I agree that I'd prefer not to introduce
C sources to OST, if possible. But I can't see an easy way around that.

> As mentioned before I would prefer
> if things could be done on the host VMs instead.

Of course they can, but it's not convenient.

You do not have your normal environment there - your IDE, configuration, etc.

You can't run a browser there.

Etc.

IMO OST should be made easy to interact with from your main development machine.

>
> >> OST keeps the deployment directory in the source directory. Be careful if 
> >> you
> >> like to "git clean -dxf' since it will delete all the deployment and
> >> you will have to
> >> kill the vms manually later.
> This is true, but there are reasons behind that - the way mock works
> and libvirt permissions that are needed to operate on VMs images.
>
> >>
> >> The next thing we need is the engine ca cert. It can be fetched like this:
> >>
> >> $ curl -k 'https://engine/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' > ca.pem
> >>
> >> I would expect OST to do this and put the file in the deployment directory.
> We have that already [2].
>
> >>
> >> To upload or download images, backup vms or use other modern examples from
> >> the sdk, you need to have a configuration file like this:
> >>
> >> $ cat ~/.config/ovirt.conf
> >> [engine]
> >> engine_url = https://engine
> >> username = admin@internal
> >> password = 123
> >> cafile = ca.pem
> >>
> >> With this uploading from the same directory where ca.pem is located
> >> will work. If you want
> >> 

[ovirt-devel] Re: Testing image transfer and backup with OST environment

2020-11-05 Thread Marcin Sobczyk



On 11/5/20 9:09 AM, Yedidyah Bar David wrote:

On Wed, Nov 4, 2020 at 9:49 PM Nir Soffer  wrote:

I want to share useful info from the OST hackathon we had this week.

Image transfer must work with real hostnames to allow server
certificate verification.
Inside the OST environment, engine and hosts names are resolvable, but
on the host
(or vm) running OST, the names are not available.
Do we really need this? Can't we execute those image transfers on the 
host VMs instead?




This can be fixed by adding the engine and hosts to /etc/hosts like this:

$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.200.2 engine
192.168.200.3 lago-basic-suite-master-host-0
192.168.200.4 lago-basic-suite-master-host-1
Modifying '/etc/hosts' requires root privileges - it will work in mock, 
but nowhere else and IMO is a bad idea.



Are these addresses guaranteed to be static?

Where are they defined?
No, they're not and I think it's a good thing - if we end up assigning 
them statically
sooner or later we will stumble upon "this always worked because we 
always used the

same ip addresses" bugs.

Libvirt runs the 'dnsmasq' that the VMs use. The XML definition for DNS is
done by lago [1].





It would be nice if this was automated by OST. You can get the details using:

$ cd src/ovirt-system-tests/deployment-xxx
$ lago status

It would have been even nicer if it was possible/easy to have this working
dynamically without user intervention.

Thought about and searched for ways to achieve this, failed to find something
simple.

Closest options I found, in case someone feels like playing with this:

1. Use HOSTALIASES. 'man 7 hostname' for details, or e.g.:

https://blog.tremily.us/posts/HOSTALIASES/

With this, if indeed the addresses are static, but you do not want to have
them hardcoded in /etc (say, because you want different ones per different
runs/needs/whatever), you can add them hardcoded there with some longer
name, and have a process-specific HOSTALIASES file mapping e.g. 'engine'
to the engine of this specific run.

'HOSTALIASES' is really awesome, but it doesn't always work.
I found that for machines connected to VPNs, the overridden DNS
servers will take priority in name resolution,
and 'HOSTALIASES' definitions won't have any effect.
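
(A quick way to see which resolver actually wins when the VPN is up,
assuming a systemd-resolved based setup - just an aside, not something
OST needs:

$ resolvectl status

This lists the DNS servers and search domains per link, so you can tell
whether the VPN's servers are the ones answering.)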



2. https://github.com/fritzw/ld-preload-open

With this, you can have a process-specific /etc/resolv.conf, pointing
this specific process to the internal nameserver inside lago/OST.
This requires building this small C library. Didn't try it or check
its code. Also can't find it pre-built in copr (or anywhere).

(
Along the way, if you like such tricks, found this:

https://github.com/gaul/awesome-ld-preload
)

This sounds really complex. As mentioned before I would prefer
if things could be done on the host VMs instead.


OST keeps the deployment directory in the source directory. Be careful if you
like to "git clean -dxf' since it will delete all the deployment and
you will have to
kill the vms manually later.

This is true, but there are reasons behind that - the way mock works
and libvirt permissions that are needed to operate on VMs images.



The next thing we need is the engine ca cert. It can be fetched like this:

$ curl -k 'https://engine/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' > ca.pem

I would expect OST to do this and put the file in the deployment directory.

We have that already [2].



To upload or download images, backup vms or use other modern examples from
the sdk, you need to have a configuration file like this:

$ cat ~/.config/ovirt.conf
[engine]
engine_url = https://engine
username = admin@internal
password = 123
cafile = ca.pem

With this uploading from the same directory where ca.pem is located
will work. If you want
it to work from any directory, use absolute path to the file.

I created a test image using qemu-img and qemu-io:

$ qemu-img create -f qcow2 test.qcow2 1g

To write some data to the test image we can use qemu-io. This writes 64k of data
(b"\xf0" * 64 * 1024) to offset 1 MiB.

$ qemu-io -f qcow2 -c "write -P 240 1m 64k" test.qcow2

Never heard about qemu-io. Nice to know. It seems it does not have a manpage
in el8, although I can find one elsewhere on the net.


Since this image contains only 64k of data, uploading it should be instant.

The last part we need is the imageio client package:

$ dnf install ovirt-imageio-client

To upload the image, we need at least one host up and storage domains
created. I did not find a way to prepare OST, so simply run this after
run_tests completed. It took about an hour.

To upload the image to raw sparse disk we can use:

$ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
-c engine --sd-name nfs --disk-sparse --disk-format raw test.qcow2
[   0.0 ] Checking image...
[   0.0 ] Image format: qcow2
[   

[ovirt-devel] Re: Testing image transfer and backup with OST environment

2020-11-05 Thread Yedidyah Bar David
On Wed, Nov 4, 2020 at 9:49 PM Nir Soffer  wrote:
>
> I want to share useful info from the OST hackathon we had this week.
>
> Image transfer must work with real hostnames to allow server
> certificate verification.
> Inside the OST environment, engine and hosts names are resolvable, but
> on the host
> (or vm) running OST, the names are not available.
>
> This can be fixed by adding the engine and hosts to /etc/hosts like this:
>
> $ cat /etc/hosts
> 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
>
> 192.168.200.2 engine
> 192.168.200.3 lago-basic-suite-master-host-0
> 192.168.200.4 lago-basic-suite-master-host-1

Are these addresses guaranteed to be static?

Where are they defined?

>
> It would be nice if this was automated by OST. You can get the details using:
>
> $ cd src/ovirt-system-tests/deployment-xxx
> $ lago status

It would have been even nicer if it was possible/easy to have this working
dynamically without user intervention.

Thought about and searched for ways to achieve this, failed to find something
simple.

Closest options I found, in case someone feels like playing with this:

1. Use HOSTALIASES. 'man 7 hostname' for details, or e.g.:

https://blog.tremily.us/posts/HOSTALIASES/

With this, if indeed the addresses are static, but you do not want to have
them hardcoded in /etc (say, because you want different ones per different
runs/needs/whatever), you can add them hardcoded there with some longer
name, and have a process-specific HOSTALIASES file mapping e.g. 'engine'
to the engine of this specific run.
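
(A minimal sketch of that, assuming the longer names are already in
/etc/hosts and using made-up names - adjust to whatever a given run uses:

$ printf 'engine ost-run1-engine\n' > ~/.ost-hostaliases
$ HOSTALIASES=~/.ost-hostaliases curl -k https://engine/ovirt-engine/

The per-run file only maps the short alias to the longer, statically
resolvable name.)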

2. https://github.com/fritzw/ld-preload-open

With this, you can have a process-specific /etc/resolv.conf, pointing
this specific process to the internal nameserver inside lago/OST.
This requires building this small C library. Didn't try it or check
its code. Also can't find it pre-built in copr (or anywhere).

(
Along the way, if you like such tricks, found this:

https://github.com/gaul/awesome-ld-preload
)

>
> OST keeps the deployment directory in the source directory. Be careful if you
> like to "git clean -dxf' since it will delete all the deployment and
> you will have to
> kill the vms manually later.
>
> The next thing we need is the engine ca cert. It can be fetched like this:
>
> $ curl -k 'https://engine/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' > ca.pem
>
> I would expect OST to do this and put the file in the deployment directory.
>
> To upload or download images, backup vms or use other modern examples from
> the sdk, you need to have a configuration file like this:
>
> $ cat ~/.config/ovirt.conf
> [engine]
> engine_url = https://engine
> username = admin@internal
> password = 123
> cafile = ca.pem
>
> With this uploading from the same directory where ca.pem is located
> will work. If you want
> it to work from any directory, use absolute path to the file.
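
(As a quick sanity check of those settings - just an aside, not part of
the flow above - you can hit the API root with the same credentials and CA:

$ curl --cacert ca.pem -u admin@internal:123 https://engine/ovirt-engine/api

If that returns the API XML, the SDK examples below should be able to
connect as well.)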
>
> I created a test image using qemu-img and qemu-io:
>
> $ qemu-img create -f qcow2 test.qcow2 1g
>
> To write some data to the test image we can use qemu-io. This writes 64k of 
> data
> (b"\xf0" * 64 * 1024) to offset 1 MiB.
>
> $ qemu-io -f qcow2 -c "write -P 240 1m 64k" test.qcow2

Never heard about qemu-io. Nice to know. It seems it does not have a manpage
in el8, although I can find one elsewhere on the net.
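
(One more qemu-io trick, as an aside: the same pattern flag works for
verification, e.g. against a downloaded copy (the file name here is just
an example):

$ qemu-io -f qcow2 -c "read -P 240 1m 64k" downloaded.qcow2

This reports a pattern verification failure if the 64k at offset 1 MiB
does not contain the 0xf0 bytes written above - handy for checking that
an upload/download round trip preserved the data.)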

>
> Since this image contains only 64k of data, uploading it should be instant.
>
> The last part we need is the imageio client package:
>
> $ dnf install ovirt-imageio-client
>
> To upload the image, we need at least one host up and storage domains
> created. I did not find a way to prepare OST, so simply run this after
> run_tests completed. It took about an hour.
>
> To upload the image to raw sparse disk we can use:
>
> $ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
> -c engine --sd-name nfs --disk-sparse --disk-format raw test.qcow2
> [   0.0 ] Checking image...
> [   0.0 ] Image format: qcow2
> [   0.0 ] Disk format: raw
> [   0.0 ] Disk content type: data
> [   0.0 ] Disk provisioned size: 1073741824
> [   0.0 ] Disk initial size: 1073741824
> [   0.0 ] Disk name: test.raw
> [   0.0 ] Disk backup: False
> [   0.0 ] Connecting...
> [   0.0 ] Creating disk...
> [  36.3 ] Disk ID: 26df08cf-3dec-47b9-b776-0e2bc564b6d5
> [  36.3 ] Creating image transfer...
> [  38.2 ] Transfer ID: de8cfac9-ead2-4304-b18b-a1779d647716
> [  38.2 ] Transfer host name: lago-basic-suite-master-host-1
> [  38.2 ] Uploading image...
> [ 100.00% ] 1.00 GiB, 1.79 seconds, 571.50 MiB/s
> [  40.0 ] Finalizing image transfer...
> [  44.1 ] Upload completed successfully
>
> I uploaded this before I added the hosts to /etc/hosts, so the upload
> was done via the proxy.
>
> Yes, it took 36 seconds to create the disk.
>
> To download the disk use:
>
> $ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py
> -c engine