[ovirt-users] Re: Blog Post - Using Ceph Only Storage For oVirt Datacenter by Sandro Bonazzola – Wednesday 14 July 2021

2022-11-10 Thread Murilo Morais
Matthew, good morning!

If you are using iSCSI, there is no need to copy /etc/ceph/.

Configure the credentials through the Dashboard. Do not use the Mutual
User/Password (you can leave those fields blank), and in the Target
configuration, uncheck the "ACL authentication" option.



[ovirt-users] Re: Blog Post - Using Ceph Only Storage For oVirt Datacenter by Sandro Bonazzola – Wednesday 14 July 2021

2022-11-10 Thread Matthew J Black
So, a follow-up (now that I'm in an actual position to go ahead and implement 
this):

In the Blog post it says to:

~~~
1) Copy /etc/ceph directory from your ceph node to ovirt-engine host.
2) Change ownership of the files in /etc/ceph on the ovirt-engine host making 
them readable from the engine process:
 # chown ovirt /etc/ceph/*
~~~

I can discover the three Ceph iSCSI Gateways when I go to set up the storage, 
but I can't log into them (yes, I am using the correct CHAP username and 
password).

The "ovirt" user does not exist on the host (pre- or post- engine install) - so 
my question is: Which user *should* own that folder once it is copied to the 
host?
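
(For reference, this is the quick check I've been running on the engine
machine to see what is actually there - just probing, nothing authoritative:)

~~~
# Does an "ovirt" account exist at all?
id ovirt || getent passwd | grep -i ovirt

# The engine itself is a java process; the first column shows the user it runs as
ps -eo user,comm,args | grep '[o]virt-engine'

# Current ownership of the copied Ceph config
ls -l /etc/ceph/
~~~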

Or am I barking up the wrong tree?

Anyone else using Ceph iSCSI Gateways with oVirt?

Cheers

Dulux-Oz


[ovirt-users] Re: Blog Post - Using Ceph Only Storage For oVirt Datacenter by Sandro Bonazzola – Wednesday 14 July 2021

2022-09-13 Thread Matthew J Black

@Didi

All right, I take your point.

So when I've got everything working I'll do a write-up and submit it as 
a "bug", as you suggested - but you *are* going to rue the day you 
suggested it, because I'm now going to be "darkening your email" 
(darkening your door) like you won't believe, requesting help and 
running ideas past you - so you *have* been warned!


Cheers

Dulux-Oz

(aka Matthew J Black)


[ovirt-users] Re: Blog Post - Using Ceph Only Storage For oVirt Datacenter by Sandro Bonazzola – Wednesday 14 July 2021

2022-09-13 Thread Yedidyah Bar David
Hi Matthew,

On Tue, Sep 13, 2022 at 12:26 PM Matthew J Black wrote:
>
> Well, if I can put my $0.02 worth in...
>
> What I've been trying to do is set up an oVirt cluster (v4.5.X) to use a Ceph 
> (Quincy) cluster as the back-end via iSCSI. One thing I found was that 
> up-to-date, relevant information from both the Ceph-side *and* the oVirt side 
> on how to do this was... hard to find, not explained very well, and often out 
> of date (like this relevant Blog post, if it is now out of date, and based on 
> the posts of this thread that is what it appears to be) - this also applies 
> to pre-installing / not pre-installing OpenVSwitch (see my other thread from 
> today).

I agree.

And, let me take back my previous reply, about updating the blog post.

A blog post is, by definition, out-of-date, very soon after it's
published. It's inside a blog, right? A kind of diary. You don't
update your paper diary after you wrote some entry in it, right?

Project/product Documentation, OTOH, is supposed/expected to be kept
up-to-date over time.

If a doc/guide is out-of-date, you'd naturally consider this a bug.
Not so for a blog post.

In oVirt, it's basically the same.

Blog posts, here, are mainly POCs - demonstrations that something is doable.

The fact that you do not find oVirt-on-Ceph in the main documentation
is not a mistake - it's simply not considered (yet? See below)
stable/supportable enough to enter that space.

>
> So I've been experimenting in a test environment (using Rocky Linux - 
> initially v9 but now v8.6), tearing down and re-building (physical) boxes, 
> and making notes for myself as I go. And, as may be implied from this and my 
> other thread from today, the types of problems and issues I'm encountering 
> are relatively trivial and easily answered **once I can get on to someone who 
> knows** (those issues that aren't "self-inflicted", of course).
>
> And for what it is worth, I am extremely grateful for the help I've received 
> today - thank you all!
>
> So if people are talking about doco, etc, then this might be worth 
> considering as well (ie, how to go about doing what I've been doing).
>
> I'm reluctant to write this up myself for a number of reasons, including (but 
> not limited to) the issue of maintainability, the fact that I'm not 
> experienced enough with oVirt to hold myself out as an "expert", and because 
> of an incident in the past where I ended up taking a lot of flak that wasn't 
> really my fault (the old "once bitten, twice shy").

I understand very well.

The fact is that no-one else did, right? If no-one does, it will never happen.

What you can do:
- Create a ticket/bug/issue for tracking this. Despite what perhaps
some people might think, this isn't useless, even if you are not going
to handle it yourself, nor know about anyone that is.
- Include there what you already know and had to do. This most
definitely does not put you in any position of authority - I think
no-one will expect you to keep a comment in an issue up-to-date. It's
less authoritative than a blog post, right? Just a comment. But it's
extremely helpful, for both people that want to do what you want to
do, those that want to actually handle the issue (by writing docs),
and those wanting to review the eventual doc patches.
- It also makes it much easier to find, link, etc., so it will likely get
more traction than a thread like the current one.

I'd like to use this opportunity to add some more thoughts,
at-most-tangentially related to the current thread.

Speaking only for myself, not for Red Hat.

Red Hat already decided that the future lies in containers, and people
that still need VMs for their legacy stuff (as considered by Red Hat)
should handle that inside OpenShift using CNV. See also e.g. [1] for
what might eventually, when it matures enough, be a more-or-less
replacement for oVirt's functionality, although definitely not for
oVirt's behavior. This means, in particular, that if Red Hat decides
to support so-called Hyper Converged Infrastructure (HCI) setups (or
it might already have done, no idea), it will be based on
OpenShift/CNV + Ceph, not RHV. AFAIU, IMHO, etc. But this does not
mean that oVirt-on-Ceph HCI is impossible - it means that for this to
happen, someone else should do most of the work. We (as in, Red Hat
employees working on oVirt) will definitely be able to help if/where
needed, but can't be expected to do the bulk of the work.

I personally still think that oVirt is most probably the best
small-/medium-scale Open Source clustered virtualization system. But
to keep it thriving, more people should help. Including those that
think that they are not experienced enough :-)

>
> "Anyway, it's just a thought - you all have a good day." - Beau Of The Fifth 
> Column

Thanks for your message. I think it was helpful.

[1] https://okd-virtualization.github.io/

Best regards,
-- 
Didi



[ovirt-users] Re: Blog Post - Using Ceph Only Storage For oVirt Datacenter by Sandro Bonazzola – Wednesday 14 July 2021

2022-09-13 Thread Yedidyah Bar David
No idea about ceph/storage, but the cockpit deployment guide was
removed because it's deprecated:

https://bugzilla.redhat.com/show_bug.cgi?id=2020448

We also cleaned up various links to that guide [1], but apparently not
in the blog - no idea how that one is maintained. Sandro? Perhaps this
(how the blog is maintained) should also be mentioned in one of the
top-level md files (README*, CONTRIBUTING.md, not sure).

[1] https://github.com/oVirt/ovirt-site/issues?q=cockpit+

Best regards,
-- 
Didi


[ovirt-users] Re: Blog Post - Using Ceph Only Storage For oVirt Datacenter by Sandro Bonazzola – Wednesday 14 July 2021

2022-09-13 Thread Matthew J Black
Cool - thank you :-)


[ovirt-users] Re: Blog Post - Using Ceph Only Storage For oVirt Datacenter by Sandro Bonazzola – Wednesday 14 July 2021

2022-09-13 Thread Benny Zlotnik
It's not needed for 4.5, because Managed Block Storage is enabled by default
and the packages are installed automatically.
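
For anyone still on pre-4.5: from memory, the removed guide boiled down to
roughly the following on the engine machine - a sketch only, so verify it
against the archived page linked earlier in the thread:

~~~
# Pre-4.5 sketch only - on 4.5 none of this is needed.
# (Exact package names varied by release; the cinderlib bits came from the
# OpenStack repos, and each host also needed the os-brick packages.)
engine-setup    # answer Yes to the "Configure Cinderlib integration
                # (Currently in tech preview)" question
engine-config -s ManagedBlockDomainSupported=true
systemctl restart ovirt-engine
~~~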



[ovirt-users] Re: Blog Post - Using Ceph Only Storage For oVirt Datacenter by Sandro Bonazzola – Wednesday 14 July 2021

2022-09-13 Thread Matthew J Black
Thanks for that Sandro,

Two follow-up questions:
- Is this doco still relevant for oVirt 4.5.X and Ceph Quincy?
- If the oVirt Engine is *not* using a Ceph (iSCSI) block device, does this 
section of the doco need to be followed?

Thanks in advance

Dulux-Oz


[ovirt-users] Re: Blog Post - Using Ceph Only Storage For oVirt Datacenter by Sandro Bonazzola – Wednesday 14 July 2021

2022-09-13 Thread Sandro Bonazzola
On Tue, Sep 13, 2022 at 08:59 Matthew J Black <matt...@peregrineit.net> wrote:

> Hi All,
>
> In the above mentioned blog post (
> https://blogs.ovirt.org/2021/07/using-ceph-only-storage-for-ovirt-datacenter/)
> it mentions the line: "Follow oVirt documentation for setting up Cinderlib"
> with a link to this URL:
> https://ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_cockpit_web_interface/index.html#Set_up_Cinderlib
>
> This link is broken/obsolete/no longer available, so my question(s)
> is/are: Where can I obtain this information? Is there a new URL? Are these
> instructions no-longer required with the new oVirt v4.5.X? Can someone who
> has these instructions post/email them, please?
>

Looks like the instructions got removed from the documentation. I'll let the
storage team elaborate on its removal, but I can provide a link to the
archived documentation:
http://web.archive.org/web/20210625073909/https://ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_cockpit_web_interface/index.html#Set_up_Cinderlib





>
> Thanks in advance
>
> Dulux-Oz


[ovirt-users] Re: Blog post - Using Ceph only storage for oVirt datacenter

2021-07-14 Thread Anatoliy Radchenko
Hi all,
does this mean that it will be possible to use DRBD storage in the oVirt
environment?
Thanks



[ovirt-users] Re: Blog post - Using Ceph only storage for oVirt datacenter

2021-07-14 Thread Konstantin Shalygin
I mean not this. BLL.Storage currently removes the 'standard' Ceph integration 
for libvirt and uses kernel mounts instead. Again: removed, because in the 
legacy Cinder integration it works in the qemu process without any mounts.
Are there any plans to add the libvirt librbd variant back to oVirt?
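
That is, the disk attached to the domain directly over librbd - roughly like
this (my own sketch, not what oVirt used to generate; the pool/image, monitor,
and secret UUID are placeholders):

~~~
# Define and attach an rbd-backed disk via librbd (no kernel mount involved);
# all names below are placeholders.
cat > rbd-disk.xml <<'EOF'
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='rbd_pool/vm_disk'>
    <host name='mon1.example.com' port='6789'/>
  </source>
  <auth username='libvirt'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <target dev='vdb' bus='virtio'/>
</disk>
EOF
virsh attach-device my-vm rbd-disk.xml --live
~~~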

Thanks,
k


> On 14 Jul 2021, at 11:40, Benny Zlotnik wrote:
> 
> Not currently, we do want to support this using rbd-nbd


[ovirt-users] Re: Blog post - Using Ceph only storage for oVirt datacenter

2021-07-14 Thread Benny Zlotnik
Not currently; we do want to support this using rbd-nbd.
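
Roughly, the difference between today's kernel path and the rbd-nbd path (a
sketch; pool/image names are placeholders):

~~~
# krbd path (what Managed Block Storage uses today): the kernel rbd module
# exposes the image as /dev/rbdN on the host
rbd map rbd_pool/vm_disk
rbd unmap /dev/rbd0

# rbd-nbd path (the planned alternative): userspace librbd behind an nbd
# device, so image features beyond the kernel client's support still work
rbd-nbd map rbd_pool/vm_disk
rbd-nbd unmap /dev/nbd0
~~~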

On Wed, Jul 14, 2021 at 11:26 AM Konstantin Shalygin wrote:

> Is it possible to use librbd instead of a kernel mount, like in OpenStack?
>
> Sent from my iPhone
>
> > On 14 Jul 2021, at 10:41, Sandro Bonazzola wrote:
> >
> > They are mounted as block storage


[ovirt-users] Re: Blog post - Using Ceph only storage for oVirt datacenter

2021-07-14 Thread Benny Zlotnik
In 4.4.6, copying from regular Storage Domains to Managed Block Storage
Domains was added.



[ovirt-users] Re: Blog post - Using Ceph only storage for oVirt datacenter

2021-07-14 Thread Konstantin Shalygin
Is it possible to use librbd instead of a kernel mount, like in OpenStack?


> On 14 Jul 2021, at 10:41, Sandro Bonazzola wrote:
> 
> They are mounted as block storage


[ovirt-users] Re: Blog post - Using Ceph only storage for oVirt datacenter

2021-07-14 Thread Sandro Bonazzola
On Wed, Jul 14, 2021 at 08:53 Konstantin Shalygin wrote:

> Hi Sandro,
>
> - How is this image mounted on an oVirt host?
>

They are mounted as block storage

/rhev/
`-- data-center
|-- b55ef7a8-da51-11eb-b619-5254001ce0e4
|   |-- 1996dc3b-d33f-49cb-b32a-8f7b1d50af5e ->
/rhev/data-center/mnt/blockSD/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e
|   `-- mastersd ->
/rhev/data-center/mnt/blockSD/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e
`-- mnt
`-- blockSD
`-- 1996dc3b-d33f-49cb-b32a-8f7b1d50af5e
|-- dom_md
|   |-- ids -> /dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/ids
|   |-- inbox ->
/dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/inbox
|   |-- leases ->
/dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/leases
|   |-- master ->
/dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/master
|   |-- metadata ->
/dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/metadata
|   |-- outbox ->
/dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/outbox
|   `-- xleases ->
/dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/xleases
|-- ha_agent
|   |-- hosted-engine.lockspace ->
/run/vdsm/storage/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/ac3a245f-e6fe-4159-b0ee-be08d4048bb7/8b4bddc1-1602-45d7-854c-eaeac9549617
|   `-- hosted-engine.metadata ->
/run/vdsm/storage/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/dc77bfc2-cecd-4ab5-81f7-e15b81e45994/1927372e-019b-448a-8645-697b8b8ed42a
`-- images
|-- 10af85ab-434d-4104-800d-099e05a3653e
|   `-- 08ad02fc-6bfc-40ab-9c3d-24e0f1ac6689 ->
/dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/08ad02fc-6bfc-40ab-9c3d-24e0f1ac6689
|-- ac3a245f-e6fe-4159-b0ee-be08d4048bb7
|   `-- 8b4bddc1-1602-45d7-854c-eaeac9549617 ->
/dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/8b4bddc1-1602-45d7-854c-eaeac9549617
|-- bb667f95-bbb0-41a4-ad15-66f1b9bdda59
|   `-- 5abcb5f0-2c28-41b4-bfcc-bd41ef730d35 ->
/dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/5abcb5f0-2c28-41b4-bfcc-bd41ef730d35
|-- cccd50f6-6e47-43ab-9075-1bbd31d5e3b7
|   `-- 169eacc2-584c-47ee-a295-ad3aa9c811c5 ->
/dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/169eacc2-584c-47ee-a295-ad3aa9c811c5
|-- dc77bfc2-cecd-4ab5-81f7-e15b81e45994
|   `-- 1927372e-019b-448a-8645-697b8b8ed42a ->
/dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/1927372e-019b-448a-8645-697b8b8ed42a
`-- fc6b0b84-17fa-42e9-80ae-97cf50e8b74d
`-- 3eaeb1ba-2b36-4c29-b721-da19d3e5784e ->
/dev/1996dc3b-d33f-49cb-b32a-8f7b1d50af5e/3eaeb1ba-2b36-4c29-b721-da19d3e5784e


> - How to change image features?
> - How to add upmap option to libvirt domain?
> - What does the libvirt domain look like?
> - How do snapshots work?
>

Snapshots work fine: go to the VM tab and create a snapshot as usual.


> - How do clones work?
>

Disk copy can be done from the engine Storage -> Disks tab.
VM cloning failed for me; I opened *Bug 1982083* - Cloning a VM with
managed block storage raises an NPE.



> - How to migrate images from one domain to another?
>

I would let the storage team answer these questions in detail. +Benny
Zlotnik?




[ovirt-users] Re: Blog post - Using Ceph only storage for oVirt datacenter

2021-07-13 Thread Konstantin Shalygin
Hi Sandro,

- How is this image mounted on an oVirt host?
- How to change image features?
- How to add upmap option to libvirt domain?
- What does the libvirt domain look like?
- How do snapshots work?
- How do clones work?
- How to migrate images from one domain to another?


Thanks,
k


> On 14 Jul 2021, at 09:31, Sandro Bonazzola wrote:
> 
> 
> Hi, I just published the result of the testing of oVirt 4.4.7 with Ceph 
> Pacific and RDO Victoria for deploying an oVirt datacenter using Ceph as 
> storage with a blog post at 
> https://blogs.ovirt.org/2021/07/using-ceph-only-storage-for-ovirt-datacenter/
> Comments and suggestions are welcome.
> 
> Thanks,