Re: [ovirt-users] hosted-engine VM and services not working

2017-06-20 Thread Andrew Dent

Hi Yaniv

I found a solution.
Our oVirt 3.6 AIO box was still running and still had those VMs 
configured in their pre-export, switched-off state.
I removed any snapshots I found on those pre-export VMs, then 
copied the disk image files and other bits from host01 (oVirt 4.1) 
back onto the oVirt 3.6 AIO box and, where needed, fixed the relevant 
IDs to match what the Engine on the oVirt 3.6 box expected.
The VMs then started up properly again without hassle, and with the 
latest files, on the oVirt 3.6 AIO box.


So I am now in the process of rebuilding host01 with hosted-engine v4.1.

Kind regards



Andrew

-- Original Message --
From: "Yaniv Kaul" 
To: "Andrew Dent" 
Cc: "users" 
Sent: 18/06/2017 6:00:09 PM
Subject: Re: [ovirt-users] hosted-engine VM and services not working




On Sat, Jun 17, 2017 at 12:50 AM,  wrote:
If I reinstall and then rerun the hosted-engine setup, how do I get the 
VMs, in their current running state, back in and recognised by 
the new hosted engine?


Preserving the current running state is again quite challenging. You'll 
need to fix the hosted engine.


Can you import the storage domain? (not for running VMs)
Y.



Kind regards

Andrew

On 17 Jun 2017, at 6:54 AM, Yaniv Kaul  wrote:




On Fri, Jun 16, 2017 at 9:11 AM, Andrew Dent  
wrote:

Hi

Well I've got myself into a fine mess.

host01 was setup with hosted-engine v4.1. This was successful.
Imported 3 VMs from a v3.6 OVirt AIO instance. (This OVirt 3.6 is 
still running with more VMs on it)
Tried to add host02 to the new Ovirt 4.1 setup. This partially 
succeeded but I couldn't add any storage domains to it. Cannot 
remember why.

In Ovirt engine UI I removed host02.
I reinstalled host02 with Centos7, tried to add it and Ovirt UI told 
me it was already there (but it wasn't listed in the UI).
Renamed the reinstalled host02 to host03, changed the ipaddress, 
reconfig the DNS server and added host03 into the Ovirt Engine UI.

All good, and I was able to import more VMs to it.
I was also able to shutdown a VM on host01 assign it to host03 and 
start the VM. Cool, everything working.

The above all happened over the last couple of weeks.

This week I performed some yum updates on the Engine VM. No reboot.
Today I noticed that the oVirt services in the Engine VM were in an 
endless restart loop. They would be up for about 5 minutes and then die.
Looking into /var/log/ovirt-engine/engine.log, I could only see 
errors relating to host02: oVirt was trying to find it, failing, 
and then falling over.
I ran "hosted-engine --clean-metadata" thinking it would clean up and 
remove bad references to hosts, but I now realise that was a really 
bad idea, as it didn't do what I'd hoped.
At this point the sequence below worked; I could log in to the oVirt UI, 
but after 5 minutes the services would be off again:

service ovirt-engine restart
service ovirt-websocket-proxy restart
service httpd restart

I saw some references to having to remove hosts from the database by 
hand, in situations where a decommissioned host was still listed under 
the hood of oVirt but wasn't showing in the GUI.
So I removed the references to host02 (vds_id and host_id) from the 
following tables, in this order (see the sketch after the list).

vds_dynamic
vds_statistics
vds_static
host_device
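
(For illustration, a hedged sketch of the kind of SQL this involves - 
assuming the default "engine" database, that ovirt-engine is stopped and 
the database is backed up first, and with <host02-id> standing in for the 
actual UUID; child tables go first and vds_static last because of the 
foreign keys:)

su - postgres -c 'psql engine'
-- look up the stale host's UUID first
SELECT vds_id, vds_name FROM vds_static;
-- then delete the child rows, and the vds_static row last
DELETE FROM vds_dynamic    WHERE vds_id  = '<host02-id>';
DELETE FROM vds_statistics WHERE vds_id  = '<host02-id>';
DELETE FROM host_device    WHERE host_id = '<host02-id>';
DELETE FROM vds_static     WHERE vds_id  = '<host02-id>';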

Now when I try to start ovirt-websocket, it will not start:
service ovirt-websocket start
Redirecting to /bin/systemctl start  ovirt-websocket.service
Failed to start ovirt-websocket.service: Unit not found.
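
(Judging by the restart sequence earlier in this message, the unit is 
actually named ovirt-websocket-proxy, not ovirt-websocket; a quick check 
along these lines should confirm it:)

systemctl list-units 'ovirt-websocket*'
service ovirt-websocket-proxy restart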

I'm now thinking that I need to do the following in the engine VM:

# engine-cleanup
# yum remove ovirt-engine
# yum install ovirt-engine
# engine-setup
But to run engine-cleanup I need to put the engine VM into 
maintenance mode, and because of the --clean-metadata that I ran 
earlier on host01, I cannot do that.


What is the best course of action from here?


To be honest, with all the steps taken above, I'd install everything 
(including OS) from scratch...

There's a bit too much mess to try to clean up properly here.
Y.



Cheers



Andrew









Re: [ovirt-users] [ovirt-devel] Lowering the bar for wiki contribution?

2017-06-20 Thread Edward Haas
On Tue, Jun 20, 2017 at 10:22 PM, Nir Soffer  wrote:

>
> On Tue, 20 Jun 2017 at 13:10, Martin Sivak wrote:
>
>> Hi,
>>
>> I think what Edy did here makes sense. We do not need anything fancy
>> for technical documentation and design. This would also be easy to
>> maintain or integrate to the main website (git submodules will help).
>>
>> I have two basic requirements for design space:
>>
>> - commenting so devs can discuss the design
>> - ease of update so we can respond to comments
>>
>> A plain markdown repo would work well for this and both points are
>> possible using github or gerrit workflows.
>>
>> I would actually prefer if we had something that is directly part of
>> the source repositories so we could review code updates and docs
>> updates together. Unfortunately that is hard to do when we have
>> multiple different components to update. So this proposal is probably
>> the next best thing.
>>
>
> I think we need a wiki for this, instead of reinventing one :-)
>
> We have a builtin markdown based wiki in the ovirt site github project.
>
> For discussion, we have the mailing list and other channels like bluejeans
> and irc.
>

Discussing reviews through emails?? Good luck with that.
What's next? Sending patches through emails?

I want the power of the review (gerrit/github), not a poor alternative.
The only advantage of github over gerrit in this respect is the
already existing rendering of the md files.


>
>
> Nir
>
>
>> --
>> Martin Sivak
>> SLA
>>
>>
>> On Thu, Jun 15, 2017 at 8:11 PM, Edward Haas  wrote:
>> > Hi all,
>> >
>> > Came back to this thread due to a need to post some design
>> documentation.
>> > After fetching the ovirt-site and looking up where to start the
>> document, I
>> > remembered why I stopped using it.
>> >
>> > After exploring several options, including the GitHub wiki, I think
>> > that for the development documentation we can just go with the minimum:
>> > use a repo to just post markdown and image files, letting GitHub's
>> > rendering/view of such files do the job for us.
>> > We can still review the documents and have discussions on the content,
>> > and provide access to all who want to use it (to perform the merges).
>> > The fact that it uses markdown and images allows its content to be
>> > relocated to any other solutions that come later on, including adding
>> > the content back on ovirt-site.
>> >
>> > Here is a simple example:
>> > https://github.com/EdDev/ovirt-devwiki/blob/initial-structure/index.md
>> >
>> > It uses simple markdown (md) files with relative links to other pages.
>> > Adding images is also simple.
>> >
>> > What do you think?
>> >
>> > Thanks,
>> > Edy.
>> >
>> >
>> >
>> > On Tue, Feb 7, 2017 at 12:42 PM, Michal Skrivanek
>> >  wrote:
>> >>
>> >>
>> >> On 16 Jan 2017, at 11:13, Roy Golan  wrote:
>> >>
>> >>
>> >>
>> >> On 11 January 2017 at 17:06, Marc Dequènes (Duck) 
>> wrote:
>> >>>
>> >>> Quack,
>> >>>
>> >>> On 01/08/2017 06:39 PM, Barak Korren wrote:
>> >>> > On 8 January 2017 at 10:17, Roy Golan  wrote:
>> >>> >> Adding infra which I forgot to add from the beginning
>> >>>
>> >>> Thanks.
>> >>>
>> >>> > I don't think this is an infra issue, more of a community/working
>> >>> > procedures one.
>> >>>
>> >>> I do think it is. We are involved in the tooling, for their
>> >>> maintenance, for documenting where things are, for suggesting better
>> >>> solutions, ensuring security…
>> >>>
>> >>> > On the one hand, the developers need a place where they create and
>> >>> > discuss design documents and road maps. That place needs to be as
>> >>> > friction-free as possible to allow developers to work on the code
>> >>> > instead of on the documentation tools.
>> >>>
>> >>> As for code, I think there is need for review, even more for design
>> >>> documents, so I don't see why people are bothered by PRs, which is a
>> >>> tool they already know fairly well.
>> >>
>> >>
>> >> because it takes ages to get attention and get it in, even in cases
>> when
>> >> the text/update is more of an FYI and doesn’t require feedback.
>> >> That leads to frustration, and that leads to loss of any motivation to
>> >> contribute anything at all.
>> >> You can see people posting on their own platforms, blogs, just to run
>> away
>> >> from this one
>> >>
>> >>>
>> >>> For people with little git knowledge, the GitHub web interface allows
>> >>> them to edit files.
>> >>>
>> >>> > On the other hand, the user community needs a good, up to date
>> source
>> >>> > of information about oVirt and how to use it.
>> >>>
>> >>> Yes, this is the official entry point and it needs to be clean.
>> >>
>> >>
>> >> yep, you’re right about the entry point-like pages
>> >>
>> >>>
>> >>> > Having said the above, I don't think the site project's wiki is the
>> >>> > best place for this. The individual project mirrors on GitHub may
>> >>> > be better for this

Re: [ovirt-users] [ovirt-devel] Lowering the bar for wiki contribution?

2017-06-20 Thread Nir Soffer
On Tue, 20 Jun 2017 at 13:10, Martin Sivak wrote:

> Hi,
>
> I think what Edy did here makes sense. We do not need anything fancy
> for technical documentation and design. This would also be easy to
> maintain or integrate to the main website (git submodules will help).
>
> I have two basic requirements for design space:
>
> - commenting so devs can discuss the design
> - ease of update so we can respond to comments
>
> A plain markdown repo would work well for this and both points are
> possible using github or gerrit workflows.
>
> I would actually prefer if we had something that is directly part of
> the source repositories so we could review code updates and docs
> updates together. Unfortunately that is hard to do when we have
> multiple different components to update. So this proposal is probably
> the next best thing.
>

I think we need a wiki for this, instead of reinventing one :-)

We have a builtin markdown based wiki in the ovirt site github project.

For discussion, we have the mailing list and other channels like bluejeans
and irc.

Nir


> --
> Martin Sivak
> SLA
>
>
> On Thu, Jun 15, 2017 at 8:11 PM, Edward Haas  wrote:
> > Hi all,
> >
> > Came back to this thread due to a need to post some design documentation.
> > After fetching the ovirt-site and looking up where to start the
> document, I
> > remembered why I stopped using it.
> >
> > After exploring several options, including the GitHub wiki, I think
> > that for the development documentation we can just go with the minimum:
> > use a repo to just post markdown and image files, letting GitHub's
> > rendering/view of such files do the job for us.
> > We can still review the documents and have discussions on the content,
> > and provide access to all who want to use it (to perform the merges).
> > The fact that it uses markdown and images allows its content to be
> > relocated to any other solutions that come later on, including adding
> > the content back on ovirt-site.
> >
> > Here is a simple example:
> > https://github.com/EdDev/ovirt-devwiki/blob/initial-structure/index.md
> >
> > It uses simple markdown (md) files with relative links to other pages.
> > Adding images is also simple.
> >
> > What do you think?
> >
> > Thanks,
> > Edy.
> >
> >
> >
> > On Tue, Feb 7, 2017 at 12:42 PM, Michal Skrivanek
> >  wrote:
> >>
> >>
> >> On 16 Jan 2017, at 11:13, Roy Golan  wrote:
> >>
> >>
> >>
> >> On 11 January 2017 at 17:06, Marc Dequènes (Duck) 
> wrote:
> >>>
> >>> Quack,
> >>>
> >>> On 01/08/2017 06:39 PM, Barak Korren wrote:
> >>> > On 8 January 2017 at 10:17, Roy Golan  wrote:
> >>> >> Adding infra which I forgot to add from the beginning
> >>>
> >>> Thanks.
> >>>
> >>> > I don't think this is an infra issue, more of a community/working
> >>> > procedures one.
> >>>
> >>> I do think it is. We are involved in the tooling, for their maintenance,
> >>> for documenting where things are, for suggesting better solutions,
> >>> ensuring security…
> >>>
> >>> > On the one hand, the developers need a place where they create and
> >>> > discuss design documents and road maps. That place needs to be as
> >>> > friction-free as possible to allow developers to work on the code
> >>> > instead of on the documentation tools.
> >>>
> >>> As for code, I think there is need for review, even more for design
> >>> documents, so I don't see why people are bothered by PRs, which is a
> >>> tool they already know fairly well.
> >>
> >>
> >> because it takes ages to get attention and get it in, even in cases when
> >> the text/update is more of an FYI and doesn’t require feedback.
> >> That leads to frustration, and that leads to loss of any motivation to
> >> contribute anything at all.
> >> You can see people posting on their own platforms, blogs, just to run
> away
> >> from this one
> >>
> >>>
> >>> For people with little git knowledge, the GitHub web interface allows
> >>> them to edit files.
> >>>
> >>> > On the other hand, the user community needs a good, up to date source
> >>> > of information about oVirt and how to use it.
> >>>
> >>> Yes, this is the official entry point and it needs to be clean.
> >>
> >>
> >> yep, you’re right about the entry point-like pages
> >>
> >>>
> >>> > Having said the above, I don't think the site project's wiki is the
> >>> > best place for this. The individual project mirrors on GitHub may be
> >>> > better for this
> >>>
> >>> We could indeed split the technical documentation. If people want to
> >>> experiment with the GH wiki pages, I won't interfere.
> >>>
> >>> I read several people in this thread really miss the old wiki, so I
> >>> think it is time to remember why we did not stay in paradise. I was not
> >>> there at the time, but I know the wiki was not well maintained. People
> >>> are comparing our situation to the MediaWiki site, but the workforce is
> >>> nowhere to be compared.

[ovirt-users] Additional user in ovirt-node-ng 4.x

2017-06-20 Thread Luca 'remix_tj' Lorenzetto
Hello,

my colleague is asking if it is possible to define a certain user on our hosts.
I know that the rule is to avoid installations and changes to hosts
running ovirt-node, but in this case the user creation should have a
very minimal impact.
He's running HP uCMDB, which makes an SSH connection to the host to
grab some system information to populate its DB.

Is it possible to create a new user? Is this user preserved across
upgrades? Is it also possible to allow this new user to run 4 (exactly 4)
commands through sudo?
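
(A minimal sketch of what such a restricted account could look like on a
plain EL7 host - the username and the four commands are hypothetical
placeholders, and whether any of this survives an ovirt-node image upgrade
is exactly the open question:)

useradd -m ucmdb                  # hypothetical account for the uCMDB probe
passwd ucmdb                      # or deploy an SSH key instead
cat > /etc/sudoers.d/ucmdb <<'EOF'
# exactly four commands allowed via sudo, nothing else
ucmdb ALL=(root) NOPASSWD: /usr/bin/lsblk, /usr/sbin/dmidecode, /usr/bin/lscpu, /usr/sbin/ip
EOF
chmod 0440 /etc/sudoers.d/ucmdb
visudo -cf /etc/sudoers.d/ucmdb   # syntax-check the drop-in file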

Luca

-- 
"It is absurd to employ men of excellent intelligence to perform
calculations that could be entrusted to anyone, if machines were used."
Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716)

"The Internet is the world's largest library.
But the problem is that the books are all scattered on the floor."
John Allen Paulos, Mathematician (1945-present)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net


Re: [ovirt-users] Win server 2016 guest tools

2017-06-20 Thread Abi Askushi
Thank you Lev.
I confirm that these tools install successfully.

Alex

On Tue, Jun 20, 2017 at 4:04 PM, Lev Veyde  wrote:

> Hi,
>
> You can use the tools from:
> http://plain.resources.ovirt.org/pub/ovirt-4.1/iso/oVirt-toolsSetup/oVirt-toolsSetup-4.1-5.fc24.iso
>
> It should support MS Windows Server 2016.
>
> Thanks in advance,
>
> On Tue, Jun 20, 2017 at 11:26 AM, Abi Askushi 
> wrote:
>
>> Hi all,
>>
>> Are there any Windows Server 2016 guest tools? At the site it doesn't
>> list this OS, and during installation from the ISO that is included in
>> the ovirt-guest-tools-iso package I receive an error that the OS is not
>> supported.
>>
>> If these tools are not installed and I install only the virtio tools, are
>> there any performance implications for the VM?
>>
>> Thanx,
>> Alex
>>
>>
>>
>>
>
>
> --
>
> Lev Veyde
>
> Software Engineer, RHCE | RHCVA | MCITP
>
> Red Hat Israel
>
> 
>
> l...@redhat.com | lve...@redhat.com
> 
> TRIED. TESTED. TRUSTED. 
>


Re: [ovirt-users] cloud init hostname from pools

2017-06-20 Thread Paul
Hi Barak,
Thanks for the explanation. I added my request to your bug.

In the meantime I created this workaround, which makes pool hostnames unique 
based on the last octet of the IP address obtained from DHCP, after setting 
it to static. I'm not sure if it is safe/durable, but it works well enough 
for me and puts at least some logic in the hostname.

initscript:
runcmd:
- ip=$(ip route get 8.8.8.8 | awk '{print $NF;exit}')
- nmcli con mod eth0 ipv4.addresses $ip"/24" ipv4.dns x.x.x.x ipv4.gateway x.x.x.x ipv4.method manual
- hostnamectl set-hostname testpool-"${ip##*.}".example.com
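
(As an aside on how the last line works: the ${ip##*.} expansion strips the
longest prefix matching '*.', leaving only the final octet. A quick sanity
check with a made-up address:)

ip=192.168.1.57
echo "${ip##*.}"                          # prints: 57
echo testpool-"${ip##*.}".example.com     # prints: testpool-57.example.com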

-Original Message-
From: Barak Korren [mailto:bkor...@redhat.com] 
Sent: Tuesday, 20 June 2017 09:47
To: Paul 
Cc: users 
Subject: Re: [ovirt-users] cloud init hostname from pools

On 19 June 2017 at 17:16, Paul  wrote:
>
> I would like to automatically set the hostname of a VM to be the same 
> as the ovirt machine name seen in the portal.
>
> This can be done by creating a template and activating cloud-init in 
> the initial run tab.
>
> A new VM named “test” based on this template is created and the 
> hostname is “test”, works perfect!
>
> But when I create a pool (i.e. “testpool”) based on this template I 
> get machines with names “testpool-1”, “testpool-2”, etc. but the 
> machine name is not present in the metadata and cannot be set as 
> hostname. This is probably due to the fact that the machine names are 
> auto-generated by the oVirt Pool.
>
> Is this expected/desired behavior for cloud-init from pools?
>
> If so, what would be the best way to retrieve the machine name (as 
> seen in the portal) and manually set it to be hostname via cloud-init 
> (i.e. runcmd – hostnamectl set-hostname $(hostname))


I've opened a bug about this a while ago:
https://bugzilla.redhat.com/show_bug.cgi?id=1298232

Maybe go ahead and write your use case there to get some attention to it...

An alternative is to not care about the VM names in engine and use DNS/DHCP to 
set VM host names. But then you'll hit:
https://bugzilla.redhat.com/show_bug.cgi?id=1298235

Our current solution is to just assign names and addresses randomly to POOL VMs 
and make them report their existence to a central system before use.

--
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted



Re: [ovirt-users] Win server 2016 guest tools

2017-06-20 Thread Lev Veyde
Hi,

You can use the tools from:
http://plain.resources.ovirt.org/pub/ovirt-4.1/iso/oVirt-toolsSetup/oVirt-toolsSetup-4.1-5.fc24.iso

It should support MS Windows Server 2016.

Thanks in advance,

On Tue, Jun 20, 2017 at 11:26 AM, Abi Askushi 
wrote:

> Hi all,
>
> Are there any Windows Server 2016 guest tools? At the site it doesn't list
> this OS, and during installation from the ISO that is included in the
> ovirt-guest-tools-iso package I receive an error that the OS is not
> supported.
>
> If these tools are not installed and I install only the virtio tools, are
> there any performance implications for the VM?
>
> Thanx,
> Alex
>
>
>
>


-- 

Lev Veyde

Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel



l...@redhat.com | lve...@redhat.com

TRIED. TESTED. TRUSTED. 


Re: [ovirt-users] [Gluster-users] Very poor GlusterFS performance

2017-06-20 Thread Krutika Dhananjay
Couple of things:

1. Like Darrell suggested, you should enable stat-prefetch and increase
client and server event threads to 4.
# gluster volume set <volname> performance.stat-prefetch on
# gluster volume set <volname> client.event-threads 4
# gluster volume set <volname> server.event-threads 4

2. Also glusterfs-3.10.1 and above has a shard performance bug fix -
https://review.gluster.org/#/c/16966/

With these two changes, we saw great improvement in performance in our
internal testing.

Do you mind trying these two options above?
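
(To verify the settings from point 1 took effect - assuming a gluster
version where "volume get" is available:)

# gluster volume get <volname> performance.stat-prefetch
# gluster volume get <volname> client.event-threads
# gluster volume get <volname> server.event-threads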

-Krutika

On Tue, Jun 20, 2017 at 1:00 PM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:

> Have you tried with:
>
> performance.strict-o-direct : off
> performance.strict-write-ordering : off
>
> They can be changed dynamically.
>
>
> On 20 June 2017 at 17:21, Sahina Bose  wrote:
>
>> [Adding gluster-users]
>>
>> On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot  wrote:
>>
>>> Hi folks,
>>>
>>> I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
>>> configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
>>> 6 bricks, which themselves live on two SSDs in each of the servers (one
>>> brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
>>> SSDs. Connectivity is 10G Ethernet.
>>>
>>> Performance within the VMs is pretty terrible. I experience very low
>>> throughput and random IO is really bad: it feels like a latency issue.
>>> On my oVirt nodes the SSDs are not generally very busy. The 10G network
>>> seems to run without errors (iperf3 gives bandwidth measurements of >=
>>> 9.20 Gbits/sec between the three servers).
>>>
>>> To put this into perspective: I was getting better behaviour from NFS4
>>> on a gigabit connection than I am with GlusterFS on 10G: that doesn't
>>> feel right at all.
>>>
>>> My volume configuration looks like this:
>>>
>>> Volume Name: vmssd
>>> Type: Distributed-Replicate
>>> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 2 x (2 + 1) = 6
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
>>> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
>>> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
>>> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
>>> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
>>> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
>>> Options Reconfigured:
>>> nfs.disable: on
>>> transport.address-family: inet6
>>> performance.quick-read: off
>>> performance.read-ahead: off
>>> performance.io-cache: off
>>> performance.stat-prefetch: off
>>> performance.low-prio-threads: 32
>>> network.remote-dio: off
>>> cluster.eager-lock: enable
>>> cluster.quorum-type: auto
>>> cluster.server-quorum-type: server
>>> cluster.data-self-heal-algorithm: full
>>> cluster.locking-scheme: granular
>>> cluster.shd-max-threads: 8
>>> cluster.shd-wait-qlength: 1
>>> features.shard: on
>>> user.cifs: off
>>> storage.owner-uid: 36
>>> storage.owner-gid: 36
>>> features.shard-block-size: 128MB
>>> performance.strict-o-direct: on
>>> network.ping-timeout: 30
>>> cluster.granular-entry-heal: enable
>>>
>>> I would really appreciate some guidance on this to try to improve things
>>> because at this rate I will need to reconsider using GlusterFS
>>> altogether.
>>>
>>
>>
>> Could you provide the gluster volume profile output while you're running
>> your I/O tests?
>>
>> # gluster volume profile <volname> start
>> to start profiling
>>
>> # gluster volume profile <volname> info
>>
>> for the profile output.
>>
>>
>>>
>>> Cheers,
>>> Chris
>>>
>>> --
>>> Chris Boot
>>> bo...@bootc.net
>>>
>>
>>
>>
>
>
>
> --
> Lindsay
>
>
>


Re: [ovirt-users] Access VM Console on a Smart Phone with User Permission

2017-06-20 Thread Tomas Jelinek
On Fri, Jun 16, 2017 at 6:14 AM, Jerome Roque 
wrote:

> Good day oVirt Users,
>
> I need some little help. I have a KVM and used oVirt for the management of
> VMs. What I want is that my client will log on to their account and access
> their virtual machine using their Smart phone. I tried to install mOvirt
> and yes can connect to the console of my machine, but it is only accessible
> for admin console.
>

moVirt originally worked with both admin and user permissions. We had to
remove the support for user permissions since the oVirt API did not provide
all the features moVirt needed for user permissions (search, for example).
But the API has moved on significantly since then (for one, search now works
for users too), so we can bring it back. I have opened an issue about it:
https://github.com/oVirt/moVirt/issues/282 - we can try to do it in the next
version.


> I tried to use the web console; it downloaded console.vv but can't open it.
> Is there any chance of making this possible?
>

If you want to use a web console for users, I would suggest trying the new
ovirt-web-ui [1] - there is a link to it from the oVirt landing page, and
since 4.1 it is installed by default with oVirt.

The .vv file cannot be opened using aSPICE, AFAIK - adding Iordan, the
author of aSPICE, to comment on this.

[1]: https://github.com/oVirt/ovirt-web-ui


>
> Thank you,
> Jerome
>
>
>


[ovirt-users] bug disk upload

2017-06-20 Thread Nikolai Kuzmich
Description of problem:
I tried to upload an image using the GUI; the disk uploads successfully, but it is empty.

ovirt-engine.noarch   4.1.2.2-1.el7.centos
ovirt-imageio-common.noarch  1.0.0-1.el7
ovirt-imageio-proxy.noarch   1.0.0-0.201701151456.git89ae3b4.el7.centos
vdsm.x86_64   4.19.15-1.el7.centos

Before upload:
md5sum manageiq-ovirt-fine-2.ova
e0585fb92301dd676cb22ba1df9e5477  manageiq-ovirt-fine-2.ova
qemu-img info manageiq-ovirt-fine-2.ova
image: manageiq-ovirt-fine-2.ova
file format: raw
virtual size: 1.3G (1359610880 bytes)
disk size: 1.3G


Steps to Reproduce:

1.   Upload disk manageiq with the above parameters

2.   After upload

qemu-img info c31bac64-09d5-43a6-bf4c-c502b55cf997

image: c31bac64-09d5-43a6-bf4c-c502b55cf997

file format: raw

virtual size: 2.0G (2147483648 bytes)

disk size: 1.3G

3.   Create VM and running

4.   Disk not bootable and empty

If I instead run tar -xvf manageiq-ovirt-fine-2.ova and upload the extracted disk image:
Before upload:
qemu-img info f6054145-e0d3-4e43-a034-92b00b1e31f8
image: f6054145-e0d3-4e43-a034-92b00b1e31f8
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 4.0G
cluster_size: 65536
Format specific information:
compat: 0.10
refcount bits: 16
After upload:
qemu-img info 4a7f6769-5ce7-4428-8f51-604042c0d424
image: 4a7f6769-5ce7-4428-8f51-604042c0d424
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 4.1G
cluster_size: 65536
Format specific information:
compat: 0.10
refcount bits: 16
The upload succeeds, the disk is bootable, and the VM starts.
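
A plausible reading of the two qemu-img outputs above: an .ova file is a
tar archive, so uploading it directly stores the tar bytes as a raw,
non-bootable disk, whereas uploading the extracted qcow2 works. In short:

tar -tvf manageiq-ovirt-fine-2.ova   # an OVA is a tar archive; list the qcow2 inside
tar -xvf manageiq-ovirt-fine-2.ova   # extract, then upload the inner qcow2, not the .ova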

Kind regards,
Nikolai Kuzmich
ARTEZIO
System Administrator

e-mail: nikolai.kuzm...@artezio.com
skype: artezio_nkuzmich_1
address: Dzyarzhynskaga str, 8.
Minsk, 220036, Belarus



Re: [ovirt-users] [ovirt-devel] Lowering the bar for wiki contribution?

2017-06-20 Thread Martin Sivak
Hi,

I think what Edy did here makes sense. We do not need anything fancy
for technical documentation and design. This would also be easy to
maintain or integrate to the main website (git submodules will help).

I have two basic requirements for design space:

- commenting so devs can discuss the design
- ease of update so we can respond to comments

A plain markdown repo would work well for this and both points are
possible using github or gerrit workflows.

I would actually prefer if we had something that is directly part of
the source repositories so we could review code updates and docs
updates together. Unfortunately that is hard to do when we have
multiple different components to update. So this proposal is probably
the next best thing.

--
Martin Sivak
SLA


On Thu, Jun 15, 2017 at 8:11 PM, Edward Haas  wrote:
> Hi all,
>
> Came back to this thread due to a need to post some design documentation.
> After fetching the ovirt-site and looking up where to start the document, I
> remembered why I stopped using it.
>
> After exploring several options, including the GitHub wiki, I think that for
> the development documentation we can just go with the minimum:
> use a repo to just post markdown and image files, letting GitHub's
> rendering/view of such files do the job for us.
> We can still review the documents and have discussions on the content, and
> provide access to all who want to use it (to perform the merges).
> The fact that it uses markdown and images allows its content to be relocated
> to any other solutions that come later on, including adding the content
> back on ovirt-site.
>
> Here is a simple example:
> https://github.com/EdDev/ovirt-devwiki/blob/initial-structure/index.md
>
> It uses simple markdown (md) files with relative links to other pages.
> Adding images is also simple.
>
> What do you think?
>
> Thanks,
> Edy.
>
>
>
> On Tue, Feb 7, 2017 at 12:42 PM, Michal Skrivanek
>  wrote:
>>
>>
>> On 16 Jan 2017, at 11:13, Roy Golan  wrote:
>>
>>
>>
>> On 11 January 2017 at 17:06, Marc Dequènes (Duck)  wrote:
>>>
>>> Quack,
>>>
>>> On 01/08/2017 06:39 PM, Barak Korren wrote:
>>> > On 8 January 2017 at 10:17, Roy Golan  wrote:
>>> >> Adding infra which I forgot to add from the beginning
>>>
>>> Thanks.
>>>
>>> > I don't think this is an infra issue, more of a community/working
>>> > procedures one.
>>>
>>> I do think it is. We are involved in the tooling, for their maintenance,
>>> for documenting where things are, for suggesting better solutions,
>>> ensuring security…
>>>
>>> > On the one hand, the developers need a place where they create and
>>> > discuss design documents and road maps. That place needs to be as
>>> > friction-free as possible to allow developers to work on the code
>>> > instead of on the documentation tools.
>>>
>>> As for code, I think there is need for review, even more for design
>>> documents, so I don't see why people are bothered by PRs, which is a
>>> tool they already know fairly well.
>>
>>
>> because it takes ages to get attention and get it in, even in cases when
>> the text/update is more of an FYI and doesn’t require feedback.
>> That leads to frustration, and that leads to loss of any motivation to
>> contribute anything at all.
>> You can see people posting on their own platforms, blogs, just to run away
>> from this one
>>
>>>
>>> For people with little git knowledge, the GitHub web interface allows
>>> them to edit files.
>>>
>>> > On the other hand, the user community needs a good, up to date source
>>> > of information about oVirt and how to use it.
>>>
>>> Yes, this is the official entry point and it needs to be clean.
>>
>>
>> yep, you’re right about the entry point-like pages
>>
>>>
>>> > Having said the above, I don't think the site project's wiki is the
>>> > best place for this. The individual project mirrors on GitHub may be
>>> > better for this
>>>
>>> We could indeed split the technical documentation. If people want to
>>> experiment with the GH wiki pages, I won't interfere.
>>>
>>> I read several people in this thread really miss the old wiki, so I
>>> think it is time to remember why we did not stay in paradise. I was not
>>> there at the time, but I know the wiki was not well maintained. People
>>> are comparing our situation to the MediaWiki site, but the workforce is
>>> nowhere to be compared. There is already no community manager, and no one
>>> is in charge of any part really, whereas Mediawiki has people in charge
>>> of every corner of the wiki. Also they developed tools over years to
>>> monitor, correct, revert… and we don't have any of this. So without any
>>> process it was a total mess. More than one year later there was
>>> still much cleanup to do, and having contributed to it a little bit, I
>>> fear a sentimental rush to go back to a solution that was abandoned.
>>
>>
>> it was also a bit difficult to edit, plus a barrier 

Re: [ovirt-users] moVirt 2.0 RC 1 released!

2017-06-20 Thread Tomas Jelinek
On Thu, Jun 15, 2017 at 5:27 PM, Tomas Jelinek  wrote:

> you can have only one version installed...
>

...but Filip has opened an issue so you can have the non-stable releases
installed next to the stable ones:
https://github.com/oVirt/moVirt/issues/280

We can do this for the next release.


>
> On 15 Jun 2017 5:25 pm, "Gianluca Cecchi" 
> wrote:
>
>> On Thu, Jun 15, 2017 at 4:06 PM, Filip Krepinsky 
>> wrote:
>>
>>> Hia,
>>>
>>> the first RC of moVirt 2.0 has been released!
>>>
>>> You can get it from our GitHub [1]; the play store will be upgraded
>>> once it is considered stable.
>>>
>>> The main feature of this release is support for managing multiple
>>> oVirt installations from one moVirt.
>>>
>>
>> Nice!
>> Do I have to uninstall the current one to test it, or can I install both
>> versions together?
>>
>
btw have you tried it? any feedback?


>
>> Gianluca
>>
>>
>>


[ovirt-users] Win server 2016 guest tools

2017-06-20 Thread Abi Askushi
Hi all,

Are there any Windows Server 2016 guest tools? At the site it doesn't list
this OS, and during installation from the ISO that is included in the
ovirt-guest-tools-iso package I receive an error that the OS is not
supported.

If these tools are not installed and I install only the virtio tools, are
there any performance implications for the VM?

Thanx,
Alex


Re: [ovirt-users] cloud init hostname from pools

2017-06-20 Thread Barak Korren
On 19 June 2017 at 17:16, Paul  wrote:
>
> I would like to automatically set the hostname of a VM to be the same as the
> ovirt machine name seen in the portal.
>
> This can be done by creating a template and activating cloud-init in the
> initial run tab.
>
> A new VM named “test” based on this template is created and the hostname is
> “test”, works perfect!
>
> But when I create a pool (i.e. “testpool”) based on this template I get
> machines with names “testpool-1”, “testpool-2”, etc. but the machine name is
> not present in the metadata and cannot be set as hostname. This is probably
> due to the fact that the machine names are auto-generated by the oVirt Pool.
>
> Is this expected/desired behavior for cloud-init from pools?
>
> If so, what would be the best way to retrieve the machine name (as seen in
> the portal) and manually set it to be hostname via cloud-init (i.e. runcmd –
> hostnamectl set-hostname $(hostname))


I've opened a bug about this a while ago:
https://bugzilla.redhat.com/show_bug.cgi?id=1298232

Maybe go ahead and write your use case there to get some attention to it...

An alternative is to not care about the VM names in engine and use DNS/DHCP
to set VM host names. But then you'll hit:
https://bugzilla.redhat.com/show_bug.cgi?id=1298235

Our current solution is to just assign names and addresses randomly to POOL
VMs and make them report their existence to a central system before use.

-- 
Barak Korren
RHV DevOps team , RHCE, RHCi
Red Hat EMEA
redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted


Re: [ovirt-users] [Gluster-users] Very poor GlusterFS performance

2017-06-20 Thread Lindsay Mathieson
Have you tried with:

performance.strict-o-direct : off
performance.strict-write-ordering : off

They can be changed dynamically.


On 20 June 2017 at 17:21, Sahina Bose  wrote:

> [Adding gluster-users]
>
> On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot  wrote:
>
>> Hi folks,
>>
>> I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
>> configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
>> 6 bricks, which themselves live on two SSDs in each of the servers (one
>> brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
>> SSDs. Connectivity is 10G Ethernet.
>>
>> Performance within the VMs is pretty terrible. I experience very low
>> throughput and random IO is really bad: it feels like a latency issue.
>> On my oVirt nodes the SSDs are not generally very busy. The 10G network
>> seems to run without errors (iperf3 gives bandwidth measurements of >=
>> 9.20 Gbits/sec between the three servers).
>>
>> To put this into perspective: I was getting better behaviour from NFS4
>> on a gigabit connection than I am with GlusterFS on 10G: that doesn't
>> feel right at all.
>>
>> My volume configuration looks like this:
>>
>> Volume Name: vmssd
>> Type: Distributed-Replicate
>> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 2 x (2 + 1) = 6
>> Transport-type: tcp
>> Bricks:
>> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
>> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
>> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
>> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
>> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
>> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
>> Options Reconfigured:
>> nfs.disable: on
>> transport.address-family: inet6
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.stat-prefetch: off
>> performance.low-prio-threads: 32
>> network.remote-dio: off
>> cluster.eager-lock: enable
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>> cluster.data-self-heal-algorithm: full
>> cluster.locking-scheme: granular
>> cluster.shd-max-threads: 8
>> cluster.shd-wait-qlength: 1
>> features.shard: on
>> user.cifs: off
>> storage.owner-uid: 36
>> storage.owner-gid: 36
>> features.shard-block-size: 128MB
>> performance.strict-o-direct: on
>> network.ping-timeout: 30
>> cluster.granular-entry-heal: enable
>>
>> I would really appreciate some guidance on this to try to improve things
>> because at this rate I will need to reconsider using GlusterFS altogether.
>>
>
>
> Could you provide the gluster volume profile output while you're running
> your I/O tests?
>
> # gluster volume profile <volname> start
> to start profiling
>
> # gluster volume profile <volname> info
>
> for the profile output.
>
>
>>
>> Cheers,
>> Chris
>>
>> --
>> Chris Boot
>> bo...@bootc.net
>>
>
>
>



-- 
Lindsay


Re: [ovirt-users] Very poor GlusterFS performance

2017-06-20 Thread Sahina Bose
[Adding gluster-users]

On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot  wrote:

> Hi folks,
>
> I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
> configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
> 6 bricks, which themselves live on two SSDs in each of the servers (one
> brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
> SSDs. Connectivity is 10G Ethernet.
>
> Performance within the VMs is pretty terrible. I experience very low
> throughput and random IO is really bad: it feels like a latency issue.
> On my oVirt nodes the SSDs are not generally very busy. The 10G network
> seems to run without errors (iperf3 gives bandwidth measurements of >=
> 9.20 Gbits/sec between the three servers).
>
> To put this into perspective: I was getting better behaviour from NFS4
> on a gigabit connection than I am with GlusterFS on 10G: that doesn't
> feel right at all.
>
> My volume configuration looks like this:
>
> Volume Name: vmssd
> Type: Distributed-Replicate
> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x (2 + 1) = 6
> Transport-type: tcp
> Bricks:
> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet6
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> performance.low-prio-threads: 32
> network.remote-dio: off
> cluster.eager-lock: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 1
> features.shard: on
> user.cifs: off
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard-block-size: 128MB
> performance.strict-o-direct: on
> network.ping-timeout: 30
> cluster.granular-entry-heal: enable
>
> I would really appreciate some guidance on this to try to improve things
> because at this rate I will need to reconsider using GlusterFS altogether.
>


Could you provide the gluster volume profile output while you're running
your I/O tests?

# gluster volume profile <volname> start
to start profiling

# gluster volume profile <volname> info

for the profile output.
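
(And once the output has been captured, profiling can be switched off again:)

# gluster volume profile <volname> stop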


>
> Cheers,
> Chris
>
> --
> Chris Boot
> bo...@bootc.net
>


Re: [ovirt-users] Very poor GlusterFS performance

2017-06-20 Thread Yaniv Kaul
On Mon, Jun 19, 2017 at 7:32 PM, Ralf Schenk  wrote:

> Hello,
>
> Gluster performance is bad. That's why I asked for native qemu-libgfapi
> access to gluster volumes for oVirt VMs, which I thought had been possible
> since 3.6.x. The documentation is misleading: even in 4.1.2, oVirt is still
> using FUSE to mount gluster-based VM disks.
>

Can you please open a bug to fix the documentation? We are working on libgfapi,
but it's indeed not in yet.
Y.


> Bye
>
> Am 19.06.2017 um 17:23 schrieb Darrell Budic:
>
> Chris-
>
> You probably need to head over to gluster-us...@gluster.org for help with
> performance issues.
>
> That said, what kind of performance are you getting, via some form of
> testing like bonnie++ or even dd runs? Comparing raw brick vs gluster
> performance is useful to determine what you’re actually getting.
>
> Beyond that, I’d recommend dropping the arbiter bricks and re-adding them
> as full replicas, they can’t serve distributed data in this configuration
> and may be slowing things down on you. If you’ve got a storage network
> setup, make sure it’s using the largest MTU it can, and consider
> adding/testing these settings that I use on my main storage volume:
>
> performance.io-thread-count: 32
> client.event-threads: 8
> server.event-threads: 3
> performance.stat-prefetch: on
>
> Good luck,
>
>   -Darrell
>
>
> On Jun 19, 2017, at 9:46 AM, Chris Boot  wrote:
>
> Hi folks,
>
> I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
> configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
> 6 bricks, which themselves live on two SSDs in each of the servers (one
> brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
> SSDs. Connectivity is 10G Ethernet.
>
> Performance within the VMs is pretty terrible. I experience very low
> throughput and random IO is really bad: it feels like a latency issue.
> On my oVirt nodes the SSDs are not generally very busy. The 10G network
> seems to run without errors (iperf3 gives bandwidth measurements of >=
> 9.20 Gbits/sec between the three servers).
>
> To put this into perspective: I was getting better behaviour from NFS4
> on a gigabit connection than I am with GlusterFS on 10G: that doesn't
> feel right at all.
>
> My volume configuration looks like this:
>
> Volume Name: vmssd
> Type: Distributed-Replicate
> Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 2 x (2 + 1) = 6
> Transport-type: tcp
> Bricks:
> Brick1: ovirt3:/gluster/ssd0_vmssd/brick
> Brick2: ovirt1:/gluster/ssd0_vmssd/brick
> Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
> Brick4: ovirt3:/gluster/ssd1_vmssd/brick
> Brick5: ovirt1:/gluster/ssd1_vmssd/brick
> Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet6
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> performance.low-prio-threads: 32
> network.remote-dio: off
> cluster.eager-lock: enable
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
> cluster.data-self-heal-algorithm: full
> cluster.locking-scheme: granular
> cluster.shd-max-threads: 8
> cluster.shd-wait-qlength: 1
> features.shard: on
> user.cifs: off
> storage.owner-uid: 36
> storage.owner-gid: 36
> features.shard-block-size: 128MB
> performance.strict-o-direct: on
> network.ping-timeout: 30
> cluster.granular-entry-heal: enable
>
> I would really appreciate some guidance on this to try to improve things
> because at this rate I will need to reconsider using GlusterFS altogether.
>
> Cheers,
> Chris
>
> --
> Chris Boot
> bo...@bootc.net
>
>
>
>
>
>
> --
>
>
> *Ralf Schenk*
> fon +49 (0) 24 05 / 40 83 70
> fax +49 (0) 24 05 / 40 83 759
> mail r...@databay.de
>
> *Databay AG*
> Jens-Otto-Krag-Straße 11
> D-52146 Würselen
> www.databay.de
>
> Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
> Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
> Philipp Hermanns
> Aufsichtsratsvorsitzender: Wilhelm Dohmen
> --
>