Re: [ovirt-users] [ovirt-devel] Lowering the bar for wiki contribution?

2017-06-21 Thread Duck
Quack,

On 06/21/2017 05:29 PM, Martin Sivak wrote:
>> I think we need a wiki for this, instead of reinventing one :-)

Well, the only feature which seems to be missing (at least for some
people) is commenting alongside the pages, but you can still do
reviews in PRs.

I think that over these last few months there have been more people around,
things have been more responsive, and there are far fewer lingering PRs.

Also, I have no idea how people get promoted to having merge rights. I
think there is quite an unfair side to this story: if you're from RH then
you can get admin rights on the spot, but external contributors, I guess,
don't get them very easily. I didn't find any procedure to apply for this.

> Other big and successful [1] (single component) projects have a docs/
> directory, and documentation and design reviews are an integral part of
> code review. That way you can atomically reject/accept changes to code
> and docs together. We can't easily do it this way as we have multiple
> cooperating components, but we should try to get close.

That's a very good point. A colleague working on OpenStack was
telling me it works much better to commit the documentation alongside the
code in the same PR.

Also, the project promotion material and the technical docs would probably
benefit from being separated, so that giving permissions to people would not
affect the main portal and messaging when they just need to write about
features/config/install/…

> Yes we do, but we do not have commit rights. And internal technical
> documentation and _design_ pages need to be a bit closer to the source
> otherwise nobody will want to touch them.

Same.

> And don't get me started on the theoretical openness of our
> project... do we want more contributors or not? Can we afford
> artificial barriers? Is somebody from the general public allowed to
> contribute ideas?

With GH and Gerrit using external auth I think anyone is able to contribute.



So, to come back to the Markdown question: the people in charge of messaging
do not want an ugly portal, and even a simple one means a little bit of CSS.
There are also news/blog entries, another feature which needs slightly more
power, and search is a nice feature too. So we're using Middleman, which is
not perfect at all (version 3 was a failure), and we were looking at a new
tool, Jekyll, which is being tested on another project.

Jekyll is the tool developed by GitHub, and this ties in with Edward's proposal.

Nevertheless, I totally disagree about going to the bare minimum. If you
want clean, readable and coherent documentation you will still end up with
a few editorial rules, templates… to follow, and yes, you need to get up to
speed and learn them. With content of this size, if everyone just decides to
have their own flavor it will become a mess very quickly.

Also, we already have piles of content, so migration means some real work.
Markdown is not a standard, so the syntax may differ slightly depending on
the generator you use. There is also some custom Ruby code we need to get
rid of. So this is a real project, and unfortunately the person who was
working in this direction is no longer part of the project. Once I'm more
acquainted with Jekyll, and if I still think it could be a nice replacement,
I'll try to get some time to help on this front.
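
For anyone who wants to experiment in the meantime, previewing some of our
Markdown content with a stock Jekyll setup is cheap. This is only a rough
sketch (it assumes Ruby is already installed, and the copied path is just an
example, not a final layout):

# install Jekyll and Bundler, then scaffold a throw-away site
gem install jekyll bundler
jekyll new site-test && cd site-test
# copy a couple of existing Markdown pages in (example path only)
cp ~/ovirt-site/source/develop/some-page.md .
# build and preview on http://localhost:4000
bundle install && bundle exec jekyll serve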

Anyway to get out of the current system we first need to remove all the
custom Ruby code around Middleman and maybe other ugly things, so…
patches are welcome!

\_o<





[ovirt-users] Expand Hosted-Engine Disk Space

2017-06-21 Thread Adam Chesterton
Hi All,

I am encountering a situation where the hosted engine for my test setup has
run out of space. It only has 10GB configured, which was an oversight on my
part when first deploying it. All storage is Gluster replica 3 (4 volumes;
engine, vm, iso, export) and I have plenty of available space. My oVirt
version is currently at 4.1.1.

I've spent time looking for a solution to expand the disk size, but without
luck so far. Most items I've found from previous list emails are from the
3.5 days, and seem to indicate that by 3.6+ you should be able to modify
the hosted engine from within the oVirt Admin Web Console. However, when I
try to expand the hosted engine disk I get an error that the VM is not
managed by the engine, so obviously that functionality is not yet
implemented.

So, my question is: what is the current process to expand the hosted engine
disk?
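
For what it's worth, the only generic approach I can think of (completely
untested on my side, so please treat this as a sketch and correct me) would
be to grow the image file directly and then the filesystem inside the VM,
roughly like this (paths, UUIDs and sizes below are placeholders):

# put the cluster in global maintenance and stop the engine VM
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown
# grow the hosted-engine disk image on the gluster "engine" volume
qemu-img resize /rhev/data-center/mnt/glusterSD/<server>:_engine/<sd_uuid>/images/<image_uuid>/<volume_uuid> +20G
# start the VM again and leave maintenance
hosted-engine --vm-start
hosted-engine --set-maintenance --mode=none
# then, inside the engine VM, grow the partition/LV and filesystem (layout dependent)
growpart /dev/vda 2 && lvextend -r -l +100%FREE /dev/<vg>/<root_lv>

Is that the right direction, or is there a supported procedure for this now?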

Regards,
Adam Chesterton


[ovirt-users] vm freezes when using yum update

2017-06-21 Thread M Mahboubian
Dear all, I would appreciate it if anybody could possibly help with the
issue I am facing.
In our environment we have 2 hosts, 1 NFS server and 1 oVirt engine server.
The NFS server provides storage to the VMs on the hosts.
I can create new VMs and install the OS, but once I do something like yum
update, the VM freezes. I can reproduce this every single time I run yum
update. What information/log files should I provide to troubleshoot this?
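
For example, I could collect something like the following, if those are the
right places to look (the paths are my guess):

# on the host running the VM
tail -n 200 /var/log/vdsm/vdsm.log
tail -n 200 /var/log/libvirt/qemu/<vm_name>.log
# on the oVirt engine server
tail -n 200 /var/log/ovirt-engine/engine.log
# on the NFS server, anything suspicious around the time of the freeze
grep -iE 'nfs|error' /var/log/messages | tail -n 100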
Regards


Re: [ovirt-users] HostedEngine VM not visible, but running

2017-06-21 Thread cmc
Hi Jenny/Martin,

Any idea what I can do here? The hosted engine VM has no log in
/var/log/libvirt/qemu on any host, and I fear that if I need to put the
host I created it on (which I think is the one hosting it) into
maintenance, e.g. to upgrade it, or if that host fails for any reason, the
VM won't get migrated to another host and I will not be able to manage the
cluster. That seems to be a very dangerous position to be in.

Thanks,

Cam

On Wed, Jun 21, 2017 at 11:48 AM, cmc  wrote:
> Thanks Martin. The hosts are all part of the same cluster.
>
> I get these errors in the engine.log on the engine:
>
> 2017-06-19 03:28:05,030Z WARN
> [org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
> (org.ovirt.thread.pool-6-thread-23) [] Validation of action 'ImportVm'
> failed for user SYSTEM. Reasons:
> VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_VM_DISPLAY_TYPE_IS_NOT_SUPPORTED_BY_OS
> 2017-06-19 03:28:05,030Z INFO
> [org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
> (org.ovirt.thread.pool-6-thread-23) [] Lock freed to object
> 'EngineLock:{exclusiveLocks='[a79e6b0e-fff4-4cba-a02c-4c00be151300= ACTION_TYPE_FAILED_VM_IS_BEING_IMPORTED$VmName HostedEngine>,
> HostedEngine=]',
> sharedLocks='[a79e6b0e-fff4-4cba-a02c-4c00be151300= ACTION_TYPE_FAILED_VM_IS_BEING_IMPORTED$VmName HostedEngine>]'}'
> 2017-06-19 03:28:05,030Z ERROR
> [org.ovirt.engine.core.bll.HostedEngineImporter]
> (org.ovirt.thread.pool-6-thread-23) [] Failed importing the Hosted
> Engine VM
>
> The sanlock.log reports conflicts on that same host, and a different
> error on the other hosts, not sure if they are related.
>
> And this in the /var/log/ovirt-hosted-engine-ha/agent log on the host
> which I deployed the hosted engine VM on:
>
> MainThread::ERROR::2017-06-19
> 13:09:49,743::ovf_store::124::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
> Unable to extract HEVM OVF
> MainThread::ERROR::2017-06-19
> 13:09:49,743::config::445::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
> Failed extracting VM OVF from the OVF_STORE volume, falling back to
> initial vm.conf
>
> I've seen some of these issues reported in bugzilla, but they were for
> older versions of oVirt (and appear to be resolved).
>
> I will install that package on the other two hosts; I will put them into
> maintenance for that, since vdsm gets installed as an upgrade. I guess
> restarting vdsm is a good idea after that?
>
> Thanks,
>
> Campbell
>
> On Wed, Jun 21, 2017 at 10:51 AM, Martin Sivak  wrote:
>> Hi,
>>
>> you do not have to install it on all hosts. But you should have more
>> than one and ideally all hosted engine enabled nodes should belong to
>> the same engine cluster.
>>
>> Best regards
>>
>> Martin Sivak
>>
>> On Wed, Jun 21, 2017 at 11:29 AM, cmc  wrote:
>>> Hi Jenny,
>>>
>>> Does ovirt-hosted-engine-ha need to be installed across all hosts?
>>> Could that be the reason it is failing to see it properly?
>>>
>>> Thanks,
>>>
>>> Cam
>>>
>>> On Mon, Jun 19, 2017 at 1:27 PM, cmc  wrote:
 Hi Jenny,

 Logs are attached. I can see errors in there, but am unsure how they arose.

 Thanks,

 Campbell

 On Mon, Jun 19, 2017 at 12:29 PM, Evgenia Tokar  wrote:
> From the output it looks like the agent is down, try starting it by 
> running:
> systemctl start ovirt-ha-agent.
>
> The engine is supposed to see the hosted engine storage domain and import 
> it
> to the system, then it should import the hosted engine vm.
>
> Can you attach the agent log from the host
> (/var/log/ovirt-hosted-engine-ha/agent.log)
> and the engine log from the engine vm (/var/log/ovirt-engine/engine.log)?
>
> Thanks,
> Jenny
>
>
> On Mon, Jun 19, 2017 at 12:41 PM, cmc  wrote:
>>
>>  Hi Jenny,
>>
>> > What version are you running?
>>
>> 4.1.2.2-1.el7.centos
>>
>> > For the hosted engine vm to be imported and displayed in the engine, 
>> > you
>> > must first create a master storage domain.
>>
>> To provide a bit more detail: this was a migration of a bare-metal
>> engine in an existing cluster to a hosted engine VM for that cluster.
>> As part of this migration, I built an entirely new host and ran
>> 'hosted-engine --deploy' (followed these instructions:
>>
>> http://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment/).
>> I restored the backup from the engine and it completed without any
>> errors. I didn't see any instructions regarding a master storage
>> domain in the page above. The cluster has two existing master storage
>> domains, one is fibre channel, which is up, and one ISO domain, 

Re: [ovirt-users] HostedEngine VM not visible, but running

2017-06-21 Thread cmc
Thanks Martin. The hosts are all part of the same cluster.

I get these errors in the engine.log on the engine:

2017-06-19 03:28:05,030Z WARN
[org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
(org.ovirt.thread.pool-6-thread-23) [] Validation of action 'ImportVm'
failed for user SYSTEM. Reasons:
VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_ILLEGAL_VM_DISPLAY_TYPE_IS_NOT_SUPPORTED_BY_OS
2017-06-19 03:28:05,030Z INFO
[org.ovirt.engine.core.bll.exportimport.ImportVmCommand]
(org.ovirt.thread.pool-6-thread-23) [] Lock freed to object
'EngineLock:{exclusiveLocks='[a79e6b0e-fff4-4cba-a02c-4c00be151300=,
HostedEngine=]',
sharedLocks='[a79e6b0e-fff4-4cba-a02c-4c00be151300=]'}'
2017-06-19 03:28:05,030Z ERROR
[org.ovirt.engine.core.bll.HostedEngineImporter]
(org.ovirt.thread.pool-6-thread-23) [] Failed importing the Hosted
Engine VM

The sanlock.log reports conflicts on that same host, and a different
error on the other hosts, not sure if they are related.

And this in the /var/log/ovirt-hosted-engine-ha/agent log on the host
which I deployed the hosted engine VM on:

MainThread::ERROR::2017-06-19
13:09:49,743::ovf_store::124::ovirt_hosted_engine_ha.lib.ovf.ovf_store.OVFStore::(getEngineVMOVF)
Unable to extract HEVM OVF
MainThread::ERROR::2017-06-19
13:09:49,743::config::445::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine.config::(_get_vm_conf_content_from_ovf_store)
Failed extracting VM OVF from the OVF_STORE volume, falling back to
initial vm.conf

I've seen some of these issues reported in bugzilla, but they were for
older versions of oVirt (and appear to be resolved).

I will install that package on the other two hosts; I will put them into
maintenance for that, since vdsm gets installed as an upgrade. I guess
restarting vdsm is a good idea after that?

Thanks,

Campbell

On Wed, Jun 21, 2017 at 10:51 AM, Martin Sivak  wrote:
> Hi,
>
> you do not have to install it on all hosts. But you should have more
> than one and ideally all hosted engine enabled nodes should belong to
> the same engine cluster.
>
> Best regards
>
> Martin Sivak
>
> On Wed, Jun 21, 2017 at 11:29 AM, cmc  wrote:
>> Hi Jenny,
>>
>> Does ovirt-hosted-engine-ha need to be installed across all hosts?
>> Could that be the reason it is failing to see it properly?
>>
>> Thanks,
>>
>> Cam
>>
>> On Mon, Jun 19, 2017 at 1:27 PM, cmc  wrote:
>>> Hi Jenny,
>>>
>>> Logs are attached. I can see errors in there, but am unsure how they arose.
>>>
>>> Thanks,
>>>
>>> Campbell
>>>
>>> On Mon, Jun 19, 2017 at 12:29 PM, Evgenia Tokar  wrote:
 From the output it looks like the agent is down, try starting it by 
 running:
 systemctl start ovirt-ha-agent.

 The engine is supposed to see the hosted engine storage domain and import 
 it
 to the system, then it should import the hosted engine vm.

 Can you attach the agent log from the host
 (/var/log/ovirt-hosted-engine-ha/agent.log)
 and the engine log from the engine vm (/var/log/ovirt-engine/engine.log)?

 Thanks,
 Jenny


 On Mon, Jun 19, 2017 at 12:41 PM, cmc  wrote:
>
>  Hi Jenny,
>
> > What version are you running?
>
> 4.1.2.2-1.el7.centos
>
> > For the hosted engine vm to be imported and displayed in the engine, you
> > must first create a master storage domain.
>
> To provide a bit more detail: this was a migration of a bare-metal
> engine in an existing cluster to a hosted engine VM for that cluster.
> As part of this migration, I built an entirely new host and ran
> 'hosted-engine --deploy' (followed these instructions:
>
> http://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment/).
> I restored the backup from the engine and it completed without any
> errors. I didn't see any instructions regarding a master storage
> domain in the page above. The cluster has two existing master storage
> domains, one is fibre channel, which is up, and one ISO domain, which
> is currently offline.
>
> > What do you mean the hosted engine commands are failing? What happens
> > when
> > you run hosted-engine --vm-status now?
>
> Interestingly, whereas when I ran it before, it exited with no output
> and a return code of '1', it now reports:
>
> --== Host 1 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date  : False
> Hostname   : kvm-ldn-03.ldn.fscfc.co.uk
> Host ID: 1
> Engine status  : unknown stale-data
> Score  : 

Re: [ovirt-users] [Gluster-users] Very poor GlusterFS performance

2017-06-21 Thread Krutika Dhananjay
No, you don't need to do any of that. Just executing volume-set commands is
sufficient for the changes to take effect.
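
If you want to double-check, something like the following should show the
values now in effect (the volume name is a placeholder):

# gluster volume get <volname> performance.stat-prefetch
# gluster volume get <volname> client.event-threads
# gluster volume get <volname> server.event-threads

gluster volume info <volname> also lists all reconfigured options.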


-Krutika

On Wed, Jun 21, 2017 at 3:48 PM, Chris Boot  wrote:

> [replying to lists this time]
>
> On 20/06/17 11:23, Krutika Dhananjay wrote:
> > Couple of things:
> >
> > 1. Like Darrell suggested, you should enable stat-prefetch and increase
> > client and server event threads to 4.
> > # gluster volume set  performance.stat-prefetch on
> > # gluster volume set  client.event-threads 4
> > # gluster volume set  server.event-threads 4
> >
> > 2. Also glusterfs-3.10.1 and above has a shard performance bug fix -
> > https://review.gluster.org/#/c/16966/
> >
> > With these two changes, we saw great improvement in performance in our
> > internal testing.
>
> Hi Krutika,
>
> Thanks for your input. I have yet to run any benchmarks, but I'll do
> that once I have a bit more time to work on this.
>
> I've tweaked the options as you suggest, but that doesn't seem to have
> made an appreciable difference. I admit that without benchmarks it's a
> bit like sticking your finger in the air, though. Do I need to restart
> my bricks and/or remount the volumes for these to take effect?
>
> I'm actually running GlusterFS 3.10.2-1. This is all coming from the
> CentOS Storage SIG's centos-release-gluster310 repository.
>
> Thanks again.
>
> Chris
>
> --
> Chris Boot
> bo...@bootc.net
>


Re: [ovirt-users] [Gluster-users] Very poor GlusterFS performance

2017-06-21 Thread Krutika Dhananjay
No. It's just that in the internal testing that was done here, increasing
the thread count beyond 4 did not improve the performance any further.

-Krutika

On Tue, Jun 20, 2017 at 11:30 PM, mabi  wrote:

> Dear Krutika,
>
> Sorry for asking so naively but can you tell me on what factor do you base
> that the client and server event-threads parameters for a volume should be
> set to 4?
>
> Is this metric for example based on the number of cores a GlusterFS server
> has?
>
> I am asking because I saw my GlusterFS volumes are set to 2 and would like
> to set these parameters to something meaningful for performance tuning. My
> setup is a two node replica with GlusterFS 3.8.11.
>
> Best regards,
> M.
>
>
>
>  Original Message 
> Subject: Re: [Gluster-users] [ovirt-users] Very poor GlusterFS performance
> Local Time: June 20, 2017 12:23 PM
> UTC Time: June 20, 2017 10:23 AM
> From: kdhan...@redhat.com
> To: Lindsay Mathieson 
> gluster-users , oVirt users 
>
> Couple of things:
> 1. Like Darrell suggested, you should enable stat-prefetch and increase
> client and server event threads to 4.
> # gluster volume set  performance.stat-prefetch on
> # gluster volume set  client.event-threads 4
> # gluster volume set  server.event-threads 4
>
> 2. Also glusterfs-3.10.1 and above has a shard performance bug fix -
> https://review.gluster.org/#/c/16966/
>
> With these two changes, we saw great improvement in performance in our
> internal testing.
>
> Do you mind trying these two options above?
> -Krutika
>
> On Tue, Jun 20, 2017 at 1:00 PM, Lindsay Mathieson <
> lindsay.mathie...@gmail.com> wrote:
>
>> Have you tried with:
>>
>> performance.strict-o-direct : off
>> performance.strict-write-ordering : off
>> They can be changed dynamically.
>>
>>
>> On 20 June 2017 at 17:21, Sahina Bose  wrote:
>>
>>> [Adding gluster-users]
>>>
>>> On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot  wrote:
>>>
 Hi folks,

 I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
 configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
 6 bricks, which themselves live on two SSDs in each of the servers (one
 brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
 SSDs. Connectivity is 10G Ethernet.

 Performance within the VMs is pretty terrible. I experience very low
 throughput and random IO is really bad: it feels like a latency issue.
 On my oVirt nodes the SSDs are not generally very busy. The 10G network
 seems to run without errors (iperf3 gives bandwidth measurements of >=
 9.20 Gbits/sec between the three servers).

 To put this into perspective: I was getting better behaviour from NFS4
 on a gigabit connection than I am with GlusterFS on 10G: that doesn't
 feel right at all.

 My volume configuration looks like this:

 Volume Name: vmssd
 Type: Distributed-Replicate
 Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
 Status: Started
 Snapshot Count: 0
 Number of Bricks: 2 x (2 + 1) = 6
 Transport-type: tcp
 Bricks:
 Brick1: ovirt3:/gluster/ssd0_vmssd/brick
 Brick2: ovirt1:/gluster/ssd0_vmssd/brick
 Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
 Brick4: ovirt3:/gluster/ssd1_vmssd/brick
 Brick5: ovirt1:/gluster/ssd1_vmssd/brick
 Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
 Options Reconfigured:
 nfs.disable: on
 transport.address-family: inet6
 performance.quick-read: off
 performance.read-ahead: off
 performance.io-cache: off
 performance.stat-prefetch: off
 performance.low-prio-threads: 32
 network.remote-dio: off
 cluster.eager-lock: enable
 cluster.quorum-type: auto
 cluster.server-quorum-type: server
 cluster.data-self-heal-algorithm: full
 cluster.locking-scheme: granular
 cluster.shd-max-threads: 8
 cluster.shd-wait-qlength: 1
 features.shard: on
 user.cifs: off
 storage.owner-uid: 36
 storage.owner-gid: 36
 features.shard-block-size: 128MB
 performance.strict-o-direct: on
 network.ping-timeout: 30
 cluster.granular-entry-heal: enable

 I would really appreciate some guidance on this to try to improve things
 because at this rate I will need to reconsider using GlusterFS
 altogether.

>>>
>>> Could you provide the gluster volume profile output while you're running
>>> your I/O tests.
>>> # gluster volume profile  start
>>> to start profiling
>>> # gluster volume profile  info
>>> for the profile output.
>>>
>>>

 Cheers,
 Chris

 --
 Chris Boot
 bo...@bootc.net

>>>
>>>
>>> 

Re: [ovirt-users] [Gluster-users] Very poor GlusterFS performance

2017-06-21 Thread Chris Boot
[replying to lists this time]

On 20/06/17 11:23, Krutika Dhananjay wrote:
> Couple of things:
>
> 1. Like Darrell suggested, you should enable stat-prefetch and increase
> client and server event threads to 4.
> # gluster volume set  performance.stat-prefetch on
> # gluster volume set  client.event-threads 4
> # gluster volume set  server.event-threads 4
>
> 2. Also glusterfs-3.10.1 and above has a shard performance bug fix -
> https://review.gluster.org/#/c/16966/
>
> With these two changes, we saw great improvement in performance in our
> internal testing.

Hi Krutika,

Thanks for your input. I have yet to run any benchmarks, but I'll do
that once I have a bit more time to work on this.

I've tweaked the options as you suggest, but that doesn't seem to have
made an appreciable difference. I admit that without benchmarks it's a
bit like sticking your finger in the air, though. Do I need to restart
my bricks and/or remount the volumes for these to take effect?

I'm actually running GlusterFS 3.10.2-1. This is all coming from the
CentOS Storage SIG's centos-release-gluster310 repository.

Thanks again.

Chris

-- 
Chris Boot
bo...@bootc.net


Re: [ovirt-users] hosted-engine VM and services not working

2017-06-21 Thread Yaniv Kaul
On Wed, Jun 21, 2017 at 5:20 AM, Andrew Dent  wrote:

> Hi Yaniv
>
> I found a solution.
> Our oVirt 3.6 AIO box was still running and still had those VMs configured
> in their pre-export, switched-off state.
> I removed any snapshots I found from those pre-export VMs, then copied
> the disk image files and other bits from host01 (oVirt v4.1) back onto
> the oVirt 3.6 AIO box, and where needed fixed the relevant IDs to be what
> the engine in the oVirt 3.6 box expected.
> The VMs then started up properly again without hassle and with the latest
> files on the oVirt 3.6 AIO box.
>

Well done and kudos for the resourcefulness!
Y.


>
> So now I'm in the process of rebuilding host01 with hosted-engine v4.1
>
> Kind regards
>
>
> Andrew
> -- Original Message --
> From: "Yaniv Kaul" 
> To: "Andrew Dent" 
> Cc: "users" 
> Sent: 18/06/2017 6:00:09 PM
> Subject: Re: [ovirt-users] hosted-engine VM and services not working
>
>
>
> On Sat, Jun 17, 2017 at 12:50 AM,  wrote:
>
>> If I reinstall and the rerun the hosted-engine setup how do I get the VMs
>> in their current running state back into and being recognised by the new
>> hosted engine?
>>
>
> Current running state is again quite challenging. You'll need to fix the
> hosted-engine.
>
> Can you import the storage domain? (not for running VMs)
> Y.
>
>
>> Kind regards
>>
>> Andrew
>>
>> On 17 Jun 2017, at 6:54 AM, Yaniv Kaul  wrote:
>>
>>
>>
>> On Fri, Jun 16, 2017 at 9:11 AM, Andrew Dent 
>> wrote:
>>
>>> Hi
>>>
>>> Well I've got myself into a fine mess.
>>>
>>> host01 was setup with hosted-engine v4.1. This was successful.
>>> Imported 3 VMs from a v3.6 OVirt AIO instance. (This OVirt 3.6 is still
>>> running with more VMs on it)
>>> Tried to add host02 to the new Ovirt 4.1 setup. This partially succeeded
>>> but I couldn't add any storage domains to it. Cannot remember why.
>>> In Ovirt engine UI I removed host02.
>>> I reinstalled host02 with Centos7, tried to add it and Ovirt UI told me
>>> it was already there (but it wasn't listed in the UI).
>>> Renamed the reinstalled host02 to host03, changed the ipaddress,
>>> reconfig the DNS server and added host03 into the Ovirt Engine UI.
>>> All good, and I was able to import more VMs to it.
>>> I was also able to shutdown a VM on host01 assign it to host03 and start
>>> the VM. Cool, everything working.
>>> The above was all last couple of weeks.
>>>
>>> This week I performed some yum updates on the Engine VM. No reboot.
>>> Today noticed that the Ovirt services in the Engine VM were in a endless
>>> restart loop. They would be up for a 5 minutes and then die.
>>> Looking into /var/log/ovirt-engine/engine.log and I could only see
>>> errors relating to host02. Ovirt was trying to find it and failing. Then
>>> falling over.
>>> I ran "hosted-engine --clean-metadata" thinking it would cleanup and
>>> remove bad references to hosts, but now realise that was a really bad idea
>>> as it didn't do what I'd hoped.
>>> At this point the sequence below worked, I could login to Ovirt UI but
>>> after 5 minutes the services would be off
>>> service ovirt-engine restart
>>> service ovirt-websocket-proxy restart
>>> service httpd restart
>>>
>>> I saw some references to having to remove hosts from the database by hand
>>> in situations where, under the hood of oVirt, a decommissioned host was still
>>> listed but wasn't showing in the GUI.
>>> So I removed reference to host02 (vds_id and host_id) in the following
>>> tables in this order.
>>> vds_dynamic
>>> vds_statistics
>>> vds_static
>>> host_device
>>>
>>> Now when I try to start ovirt-websocket it will not start
>>> service ovirt-websocket start
>>> Redirecting to /bin/systemctl start  ovirt-websocket.service
>>> Failed to start ovirt-websocket.service: Unit not found.
>>>
>>> I'm now thinking that I need to do the following in the engine VM
>>>
>>> # engine-cleanup
>>> # yum remove ovirt-engine
>>> # yum install ovirt-engine
>>> # engine-setup
>>>
>>> But to run engine-cleanup I need to put the engine-vm into maintenance
>>> mode and because of the --clean-metadata that I ran earlier on host01 I
>>> cannot do that.
>>>
>>> What is the best course of action from here?
>>>
>>
>> To be honest, with all the steps taken above, I'd install everything
>> (including OS) from scratch...
>> There's a bit too much mess to try to clean up properly here.
>> Y.
>>
>>
>>>
>>> Cheers
>>>
>>>
>>> Andrew
>>>
>>>
>>>
>>
>


Re: [ovirt-users] [Gluster-users] Very poor GlusterFS performance

2017-06-21 Thread mabi
Dear Krutika,

Sorry for asking so naively but can you tell me on what factor do you base that 
the client and server event-threads parameters for a volume should be set to 4?

Is this metric for example based on the number of cores a GlusterFS server has?

I am asking because I saw my GlusterFS volumes are set to 2 and would like to 
set these parameters to something meaningful for performance tuning. My setup 
is a two node replica with GlusterFS 3.8.11.

Best regards,
M.

 Original Message 
Subject: Re: [Gluster-users] [ovirt-users] Very poor GlusterFS performance
Local Time: June 20, 2017 12:23 PM
UTC Time: June 20, 2017 10:23 AM
From: kdhan...@redhat.com
To: Lindsay Mathieson 
gluster-users , oVirt users 

Couple of things:
1. Like Darrell suggested, you should enable stat-prefetch and increase client 
and server event threads to 4.
# gluster volume set  performance.stat-prefetch on
# gluster volume set  client.event-threads 4
# gluster volume set  server.event-threads 4

2. Also glusterfs-3.10.1 and above has a shard performance bug fix - 
https://review.gluster.org/#/c/16966/

With these two changes, we saw great improvement in performance in our internal 
testing.

Do you mind trying these two options above?
-Krutika

On Tue, Jun 20, 2017 at 1:00 PM, Lindsay Mathieson 
 wrote:

Have you tried with:

performance.strict-o-direct : off
performance.strict-write-ordering : off
They can be changed dynamically.

On 20 June 2017 at 17:21, Sahina Bose  wrote:

[Adding gluster-users]

On Mon, Jun 19, 2017 at 8:16 PM, Chris Boot  wrote:
Hi folks,

I have 3x servers in a "hyper-converged" oVirt 4.1.2 + GlusterFS 3.10
configuration. My VMs run off a replica 3 arbiter 1 volume comprised of
6 bricks, which themselves live on two SSDs in each of the servers (one
brick per SSD). The bricks are XFS on LVM thin volumes straight onto the
SSDs. Connectivity is 10G Ethernet.

Performance within the VMs is pretty terrible. I experience very low
throughput and random IO is really bad: it feels like a latency issue.
On my oVirt nodes the SSDs are not generally very busy. The 10G network
seems to run without errors (iperf3 gives bandwidth measurements of >=
9.20 Gbits/sec between the three servers).

To put this into perspective: I was getting better behaviour from NFS4
on a gigabit connection than I am with GlusterFS on 10G: that doesn't
feel right at all.

My volume configuration looks like this:

Volume Name: vmssd
Type: Distributed-Replicate
Volume ID: d5a5ddd1-a140-4e0d-b514-701cfe464853
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: ovirt3:/gluster/ssd0_vmssd/brick
Brick2: ovirt1:/gluster/ssd0_vmssd/brick
Brick3: ovirt2:/gluster/ssd0_vmssd/brick (arbiter)
Brick4: ovirt3:/gluster/ssd1_vmssd/brick
Brick5: ovirt1:/gluster/ssd1_vmssd/brick
Brick6: ovirt2:/gluster/ssd1_vmssd/brick (arbiter)
Options Reconfigured:
nfs.disable: on
transport.address-family: inet6
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 1
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
features.shard-block-size: 128MB
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable

I would really appreciate some guidance on this to try to improve things
because at this rate I will need to reconsider using GlusterFS altogether.

Could you provide the gluster volume profile output while you're running your 
I/O tests.
# gluster volume profile  start

to start profiling

# gluster volume profile  info
for the profile output.

Cheers,
Chris

--
Chris Boot
bo...@bootc.net

--
Lindsay



Re: [ovirt-users] HostedEngine VM not visible, but running

2017-06-21 Thread cmc
Hi Jenny,

Does ovirt-hosted-engine-ha need to be installed across all hosts?
Could that be the reason it is failing to see it properly?

Thanks,

Cam

On Mon, Jun 19, 2017 at 1:27 PM, cmc  wrote:
> Hi Jenny,
>
> Logs are attached. I can see errors in there, but am unsure how they arose.
>
> Thanks,
>
> Campbell
>
> On Mon, Jun 19, 2017 at 12:29 PM, Evgenia Tokar  wrote:
>> From the output it looks like the agent is down, try starting it by running:
>> systemctl start ovirt-ha-agent.
>>
>> The engine is supposed to see the hosted engine storage domain and import it
>> to the system, then it should import the hosted engine vm.
>>
>> Can you attach the agent log from the host
>> (/var/log/ovirt-hosted-engine-ha/agent.log)
>> and the engine log from the engine vm (/var/log/ovirt-engine/engine.log)?
>>
>> Thanks,
>> Jenny
>>
>>
>> On Mon, Jun 19, 2017 at 12:41 PM, cmc  wrote:
>>>
>>>  Hi Jenny,
>>>
>>> > What version are you running?
>>>
>>> 4.1.2.2-1.el7.centos
>>>
>>> > For the hosted engine vm to be imported and displayed in the engine, you
>>> > must first create a master storage domain.
>>>
>>> To provide a bit more detail: this was a migration of a bare-metal
>>> engine in an existing cluster to a hosted engine VM for that cluster.
>>> As part of this migration, I built an entirely new host and ran
>>> 'hosted-engine --deploy' (followed these instructions:
>>>
>>> http://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment/).
>>> I restored the backup from the engine and it completed without any
>>> errors. I didn't see any instructions regarding a master storage
>>> domain in the page above. The cluster has two existing master storage
>>> domains, one is fibre channel, which is up, and one ISO domain, which
>>> is currently offline.
>>>
>>> > What do you mean the hosted engine commands are failing? What happens
>>> > when
>>> > you run hosted-engine --vm-status now?
>>>
>>> Interestingly, whereas when I ran it before, it exited with no output
>>> and a return code of '1', it now reports:
>>>
>>> --== Host 1 status ==--
>>>
>>> conf_on_shared_storage : True
>>> Status up-to-date  : False
>>> Hostname   : kvm-ldn-03.ldn.fscfc.co.uk
>>> Host ID: 1
>>> Engine status  : unknown stale-data
>>> Score  : 0
>>> stopped: True
>>> Local maintenance  : False
>>> crc32  : 0217f07b
>>> local_conf_timestamp   : 2911
>>> Host timestamp : 2897
>>> Extra metadata (valid at timestamp):
>>> metadata_parse_version=1
>>> metadata_feature_version=1
>>> timestamp=2897 (Thu Jun 15 16:22:54 2017)
>>> host-id=1
>>> score=0
>>> vm_conf_refresh_time=2911 (Thu Jun 15 16:23:08 2017)
>>> conf_on_shared_storage=True
>>> maintenance=False
>>> state=AgentStopped
>>> stopped=True
>>>
>>> Yet I can login to the web GUI fine. I guess it is not HA due to being
>>> in an unknown state currently? Does the hosted-engine-ha rpm need to
>>> be installed across all nodes in the cluster, btw?
>>>
>>> Thanks for the help,
>>>
>>> Cam
>>>
>>> >
>>> > Jenny Tokar
>>> >
>>> >
>>> > On Thu, Jun 15, 2017 at 6:32 PM, cmc  wrote:
>>> >>
>>> >> Hi,
>>> >>
>>> >> I've migrated from a bare-metal engine to a hosted engine. There were
>>> >> no errors during the install, however, the hosted engine did not get
>>> >> started. I tried running:
>>> >>
>>> >> hosted-engine --status
>>> >>
>>> >> on the host I deployed it on, and it returns nothing (exit code is 1
>>> >> however). I could not ping it either. So I tried starting it via
>>> >> 'hosted-engine --vm-start' and it returned:
>>> >>
>>> >> Virtual machine does not exist
>>> >>
>>> >> But it then became available. I logged into it successfully. It is not
>>> >> in the list of VMs however.
>>> >>
>>> >> Any ideas why the hosted-engine commands fail, and why it is not in
>>> >> the list of virtual machines?
>>> >>
>>> >> Thanks for any help,
>>> >>
>>> >> Cam
>>> >
>>> >
>>
>>


Re: [ovirt-users] hosted-engine network

2017-06-21 Thread Arsène Gschwind

Hi Yanir,

We had our oVirt Engine running on a HW server so we decided to move it 
to hosted-engine. For this I've followed the Howto at 
http://www.ovirt.org/documentation/self-hosted/chap-Migrating_from_Bare_Metal_to_an_EL-Based_Self-Hosted_Environment/.


The Hosted-Storage is located on a FC SAN LUN.

Please find attached the setup log.

Thanks a lot.

Regards,
Arsène


On 06/21/2017 10:14 AM, Yanir Quinn wrote:

HI Arsene

Just to be clear, can you write down the steps to reproduce ? (the 
migration procedure . and if possible the state before and after)


Thanks

On Mon, Jun 19, 2017 at 8:34 PM, Arsène Gschwind 
> wrote:


Hi Jenny,

Thanks for the explanations..

Please find vm.conf attached, it looks like the ovirtmgmt network
is defined

Regards,
Arsène


On 06/19/2017 01:46 PM, Evgenia Tokar wrote:

Hi,

It should be in one of the directories on your storage domain:
/cd1f6775-61e9-4d04-b41c-c64925d5a905/images//

To see which one you can run the following command:

vdsm-client Volume getInfo volumeID= imageID=
storagedomainID= storagepoolID=

the storage domain id is: cd1f6775-61e9-4d04-b41c-c64925d5a905
the storage pool id can be found using: vdsm-client StorageDomain
getInfo storagedomainID=cd1f6775-61e9-4d04-b41c-c64925d5a905

The volume that has "description":
"HostedEngineConfigurationImage" is the one you are looking for.
Untar it and it should contain the original vm.conf which was
used to start the hosted engine.
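
As a rough example of stringing those commands together (every UUID
below except the storage domain ID is a placeholder, and the mount path
is only an example; on FC the volume will be an LV rather than a plain
file):

# find the storage pool ID of the hosted-engine storage domain
vdsm-client StorageDomain getInfo storagedomainID=cd1f6775-61e9-4d04-b41c-c64925d5a905
# list candidate image/volume UUIDs under the domain's images/ directory
ls /rhev/data-center/mnt/<storage_path>/cd1f6775-61e9-4d04-b41c-c64925d5a905/images/
# inspect volumes until one reports description "HostedEngineConfigurationImage"
vdsm-client Volume getInfo volumeID=<volume_uuid> imageID=<image_uuid> storagedomainID=cd1f6775-61e9-4d04-b41c-c64925d5a905 storagepoolID=<pool_uuid>
# that volume is a tar archive; extract it to recover the original vm.conf
mkdir -p /tmp/heconf && tar -xvf <path_to_that_volume> -C /tmp/heconf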

Jenny Tokar


On Mon, Jun 19, 2017 at 12:59 PM, Arsène Gschwind
> wrote:

Hi Jenny,

1. I couldn't locate any tar file containing vm.conf, do you
know the exact place where it is stored?

2. The ovirtmgmt network appears in the network dropdown but I'm not
able to change it since it complains about locked values.

Thanks a lot for your help.

Regards,
Arsène



On 06/14/2017 01:26 PM, Evgenia Tokar wrote:

Hi Arseny,

Looking at the log the ovf doesn't contain the ovirtmgmt
network.

1. Can you provide the original vm.conf file the engine was
started with? It is located in a tar archive on your storage
domain.
2. It's unclear from the screenshot: in the network dropdown
do you have an option to add an ovirtmgmt network?

Thanks,
Jenny


On Tue, Jun 13, 2017 at 11:19 AM, Arsène Gschwind
> wrote:

Sorry for that, I haven't checked.

I've replaced the log file with a new version which
should work, I hope.

Many Thanks.

Regards,
Arsène


On 06/12/2017 02:33 PM, Martin Sivak wrote:

I am sorry to say so, but it seems the log archive is corrupted. I
can't open it.

Regards

Martin Sivak

On Mon, Jun 12, 2017 at 12:47 PM, Arsène Gschwind

  wrote:

Please find the logs here:

https://www.dropbox.com/sh/k2zk7ig4tbd9tnj/AAB2NKjVk2z6lVPQ15NIeAtCa?dl=0



Thanks.

Regards,
Arsène

Hi,

Sorry for this, it seems that the attachment has been detached.

So let's try again

Regards,
Arsène


On 06/12/2017 11:59 AM, Martin Sivak wrote:

Hi,

I would love to help you, but I didn't get the log file..

Regards

Martin Sivak

On Mon, Jun 12, 2017 at 11:49 AM, Arsène Gschwind

  wrote:

Hi all,

Any chance to get help or a hint to solve my Problem, I have no 
idea how to
change this configuration since it is not possible using the WebUI.

Thanks a lot.

Regards,
Arsène


On 06/07/2017 11:50 AM, Arsène Gschwind wrote:

Hi all,

Please find attached the agent.log DEBUG and a screenshot from webui

Thanks a lot

Best regards,

Arsène


On 06/07/2017 11:27 AM, Martin Sivak wrote:

Hi all,

Yanir is right, the local vm.conf is just a cache of what was
retrieved from the engine.

I might be interesting to check what the configuration of the engine
VM shows when edited using the webadmin. Or enable debug logging [1]
for hosted engine and add the OVF dump we send 

[ovirt-users] Hosted Engine stopped on Good Agent -> 4.1.1

2017-06-21 Thread Matt .
Hi Guys,


I have moved my environment from one datacenter to another, and it seems
that my agents have been behaving strangely since then.

The engine starts but stops after being initialized, and the agent.log
doesn't show any errors. This happens on 4.1.1.

One of the hosts, which is a HE host, has been upgraded to 4.1.2 to see
what happens there, and when I try to start the hosted engine on that host I
get:

# hosted-engine --vm-start
Unable to read vm.conf, please check ovirt-ha-agent logs


The HE storage is mounted fine on the 4.1.1 host, which doesn't mount
the other NFS storage domains, but the 4.1.2 host has all (NFS) storage
domains mounted, which is kind of strange if you ask me.

Where should I start, given that this cluster has been moved before without any issue?
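
In case it helps, the first checks I can think of running on the HE hosts
are roughly the following (happy to run and share anything else):

hosted-engine --vm-status
systemctl status ovirt-ha-agent ovirt-ha-broker
# HA agent/broker logs
tail -n 100 /var/log/ovirt-hosted-engine-ha/agent.log
tail -n 100 /var/log/ovirt-hosted-engine-ha/broker.log
# confirm which storage domains are actually mounted on each host
mount | grep -E 'nfs|ovirt'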


Thanks,

Matt


Re: [ovirt-users] moVirt 2.0 RC 1 released!

2017-06-21 Thread Gianluca Cecchi
On Tue, Jun 20, 2017 at 12:07 PM, Tomas Jelinek  wrote:

>
>
> On Thu, Jun 15, 2017 at 5:27 PM, Tomas Jelinek 
> wrote:
>
>> you can have only one version installed...
>>
>
> ...but Filip have opened an issue so you can have the non-stable releases
> installed next to stable ones:
> https://github.com/oVirt/moVirt/issues/280
>

>
> We can do this for the next release.
>

OK. I will see and try



>
>>
>> On 15 Jun 2017 5:25 pm, "Gianluca Cecchi" 
>> wrote:
>>
>>> On Thu, Jun 15, 2017 at 4:06 PM, Filip Krepinsky 
>>> wrote:
>>>
 Hia,

 the first RC of moVirt 2.0 has been released!

You can get it from our GitHub [1]; the Play Store will be updated
once it is considered stable.

 The main feature of this release is a support for managing multiple
 oVirt installations from one moVirt.

>>>
>>> Nice!
>>> Do I have to deinstall current one to test it or can I install both
>>> versions together?
>>>
>>
> btw have you tried it? any feedback?
>
>
>>
>>>
Right now I have installed it only on my smartphone (Samsung S7 with Android
7.0) and was able to connect to a single-host, hosted-engine environment
based on oVirt 4.1.
The speed of actions inside the application seems quite good! Even connected
through a VPN over a slow link, moving from one piece of information to the
next was almost immediate... very nice!
What I tested:
- with a VM powered off, I powered it on and saw its events section populated
only with the new power-on event
- verified the SPICE console (both with a VM in text mode and one with a
graphical display)
- verified creating and deleting a snapshot for a powered-on VM

So far so good.
I'm also going to install it on a tablet with access to more than one
environment and test connecting to 2 different infrastructures...
Thanks,
Gianluca


Re: [ovirt-users] [ovirt-devel] Lowering the bar for wiki contribution?

2017-06-21 Thread Martin Sivak
> I think we need a wiki for this, instead of reinventing one :-)

Really? And ending in the same mess we had before? No thanks.

Other big and successful [1] (single component) projects have a docs/
directory, and documentation and design reviews are an integral part of
code review. That way you can atomically reject/accept changes to code
and docs together. We can't easily do it this way as we have multiple
cooperating components, but we should try to get close.
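
Just to illustrate what I mean by an atomic review, a hypothetical change
(the file names are made up) would bundle the code and its design document
in a single commit and a single review:

# topic branch carrying both the code change and its design doc
git checkout -b affinity-groups
git add lib/scheduler.py docs/design/affinity-groups.md
git commit -m "scheduler: add affinity groups (code + design doc)"
# one review, one accept/reject decision for both
git review        # Gerrit; on GitHub this would be a regular pull request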

> We have a builtin markdown based wiki in the ovirt site github project.

Yes we do, but we do not have commit rights. And internal technical
documentation and _design_ pages need to be a bit closer to the source
otherwise nobody will want to touch them.

> For discussion, we have the mailing list and other channels like bluejeans
> and irc.

For informal yes. But formal proposals and final design documentation
are something different.

And don't get me started on the theoretical openness of our
project... do we want more contributors or not? Can we afford
artificial barriers? Is somebody from the general public allowed to
contribute ideas?

Gerrit / Github give everybody the power to easily see all currently
considered (open) projects and review them using the same interface we
use for our daily work! This way any team can catch conceptual issues
with other teams' projects. Searching through email threads is nowhere
near the same experience.

[1] kubernetes and Linux kernel just to name two

--
Martin Sivak
SLA / oVirt


On Tue, Jun 20, 2017 at 9:22 PM, Nir Soffer  wrote:
>
> On Tue, Jun 20, 2017 at 13:10, Martin Sivak wrote:
>>
>> Hi,
>>
>> I think what Edy did here makes sense. We do not need anything fancy
>> for technical documentation and design. This would also be easy to
>> maintain or integrate to the main website (git submodules will help).
>>
>> I have two basic requirements for design space:
>>
>> - commenting so devs can discuss the design
>> - ease of update so we can respond to comments
>>
>> A plain markdown repo would work well for this and both points are
>> possible using github or gerrit workflows.
>>
>> I would actually prefer if we had something that is directly part of
>> the source repositories so we could review code updates and docs
>> updates together. Unfortunately that it is hard to do when we have
>> multiple different components to update. So this proposal is probably
>> the next best thing.
>
>
> I think we need a wiki for this, instead of reinventing one :-)
>
> We have a builtin markdown based wiki in the ovirt site github project.
>
> For discussion, we have the mailing list and other channels like bluejeans
> and irc.
>
> Nir
>
>>
>> --
>> Martin Sivak
>> SLA
>>
>>
>> On Thu, Jun 15, 2017 at 8:11 PM, Edward Haas  wrote:
>> > Hi all,
>> >
>> > Came back to this thread due to a need to post some design
>> > documentation.
>> > After fetching the ovirt-site and looking up where to start the
>> > document, I
>> > remembered why I stopped using it.
>> >
>> > After exploring several options, including the GitHub wiki, I think that
>> > for
>> > the development documentation we can just go with the minimum:
>> > Use a repo to just post markdown and image files, letting GitHub
>> > rendering/view of such files to do the job for us.
>> > We can still review the documents and have discussions on the content,
>> > and
>> > provide access to all who wants to use it (to perform the merges).
>> > The fact it uses markdown and images, can allow its content to be
>> > relocated
>> > to any other solutions that will come later on, including adding the
>> > content
>> > back on ovirt-site.
>> >
>> > Here is a simple example:
>> > https://github.com/EdDev/ovirt-devwiki/blob/initial-structure/index.md
>> >
>> > it uses simple markdown md files with relative links to other pages.
>> > Adding
>> > images is also simple.
>> >
>> > What do you think?
>> >
>> > Thanks,
>> > Edy.
>> >
>> >
>> >
>> > On Tue, Feb 7, 2017 at 12:42 PM, Michal Skrivanek
>> >  wrote:
>> >>
>> >>
>> >> On 16 Jan 2017, at 11:13, Roy Golan  wrote:
>> >>
>> >>
>> >>
>> >> On 11 January 2017 at 17:06, Marc Dequènes (Duck) 
>> >> wrote:
>> >>>
>> >>> Quack,
>> >>>
>> >>> On 01/08/2017 06:39 PM, Barak Korren wrote:
>> >>> > On 8 January 2017 at 10:17, Roy Golan  wrote:
>> >>> >> Adding infra which I forgot to add from the beginning
>> >>>
>> >>> Thanks.
>> >>>
>> >>> > I don't think this is an infra issue, more of a community/working
>> >>> > procedures one.
>> >>>
>> >>> I do think it is. We are involved in the tooling: its maintenance,
>> >>> documenting where things are, suggesting better solutions,
>> >>> ensuring security…
>> >>>
>> >>> > On the one hand, the developers need a place where they create and
>> >>> > discuss design documents and road maps. That place needs to be as
>> >>> > friction-free 

Re: [ovirt-users] hosted-engine network

2017-06-21 Thread Yanir Quinn
Hi Arsène,

Just to be clear, can you write down the steps to reproduce? (The
migration procedure, and if possible the state before and after.)

Thanks

On Mon, Jun 19, 2017 at 8:34 PM, Arsène Gschwind 
wrote:

> Hi Jenny,
>
> Thanks for the explanations..
>
> Please find vm.conf attached, it looks like the ovirtmgmt network is
> defined
>
> Regards,
> Arsène
>
> On 06/19/2017 01:46 PM, Evgenia Tokar wrote:
>
> Hi,
>
> It should be in one of the directories on your storage domain:
> /cd1f6775-61e9-4d04-b41c-c64925d5a905/images//
>
> To see which one you can run the following command:
>
> vdsm-client Volume getInfo volumeID= imageID=
> storagedomainID= storagepoolID=
>
> the storage domain id is: cd1f6775-61e9-4d04-b41c-c64925d5a905
> the storage pool id can be found using: vdsm-client StorageDomain getInfo
> storagedomainID=cd1f6775-61e9-4d04-b41c-c64925d5a905
>
> The volume that has "description": "HostedEngineConfigurationImage" is
> the one you are looking for.
> Untar it and it should contain the original vm.conf which was used to
> start the hosted engine.
>
> Jenny Tokar
>
>
> On Mon, Jun 19, 2017 at 12:59 PM, Arsène Gschwind <
> arsene.gschw...@unibas.ch> wrote:
>
>> Hi Jenny,
>>
>> 1. I couldn't locate any tar file containing vm.conf, do you know the
>> exact place where it is stored?
>>
>> 2. The ovirtmgmt network appears in the network dropdown but I'm not able to
>> change it since it complains about locked values.
>>
>> Thanks a lot for your help.
>>
>> Regards,
>> Arsène
>>
>>
>>
>> On 06/14/2017 01:26 PM, Evgenia Tokar wrote:
>>
>> Hi Arseny,
>>
>> Looking at the log the ovf doesn't contain the ovirtmgmt network.
>>
>> 1. Can you provide the original vm.conf file the engine was started with?
>> It is located in a tar archive on your storage domain.
>> 2. It's unclear from the screenshot: in the network dropdown do you have
>> an option to add an ovirtmgmt network?
>>
>> Thanks,
>> Jenny
>>
>>
>> On Tue, Jun 13, 2017 at 11:19 AM, Arsène Gschwind <
>> arsene.gschw...@unibas.ch> wrote:
>>
>>> Sorry for that, I haven't checked.
>>>
>>> I've replaced the log file with a new version which should work, I hope.
>>>
>>> Many Thanks.
>>>
>>> Regards,
>>> Arsène
>>>
>>> On 06/12/2017 02:33 PM, Martin Sivak wrote:
>>>
>>> I am sorry to say so, but it seems the log archive is corrupted. I
>>> can't open it.
>>>
>>> Regards
>>>
>>> Martin Sivak
>>>
>>> On Mon, Jun 12, 2017 at 12:47 PM, Arsène 
>>> Gschwind  wrote:
>>>
>>> Please find the logs 
>>> here:https://www.dropbox.com/sh/k2zk7ig4tbd9tnj/AAB2NKjVk2z6lVPQ15NIeAtCa?dl=0
>>>
>>> Thanks.
>>>
>>> Regards,
>>> Arsène
>>>
>>> Hi,
>>>
>>> Sorry for this, it seems that the attachment has been detached.
>>>
>>> So let's try again
>>>
>>> Regards,
>>> Arsène
>>>
>>>
>>> On 06/12/2017 11:59 AM, Martin Sivak wrote:
>>>
>>> Hi,
>>>
>>> I would love to help you, but I didn't get the log file..
>>>
>>> Regards
>>>
>>> Martin Sivak
>>>
>>> On Mon, Jun 12, 2017 at 11:49 AM, Arsène 
>>> Gschwind  wrote:
>>>
>>> Hi all,
>>>
>>> Any chance of getting help or a hint to solve my problem? I have no idea how to
>>> change this configuration, since it is not possible using the WebUI.
>>>
>>> Thanks a lot.
>>>
>>> Regards,
>>> Arsène
>>>
>>>
>>> On 06/07/2017 11:50 AM, Arsène Gschwind wrote:
>>>
>>> Hi all,
>>>
>>> Please find attached the agent.log DEBUG and a screenshot from webui
>>>
>>> Thanks a lot
>>>
>>> Best regards,
>>>
>>> Arsène
>>>
>>>
>>> On 06/07/2017 11:27 AM, Martin Sivak wrote:
>>>
>>> Hi all,
>>>
>>> Yanir is right, the local vm.conf is just a cache of what was
>>> retrieved from the engine.
>>>
>>> It might be interesting to check what the configuration of the engine
>>> VM shows when edited using the webadmin. Or enable debug logging [1]
>>> for hosted engine and add the OVF dump we send there now and then (the
>>> xml representation of the VM).
>>>
>>> [1] See /etc/ovirt-hosted-engine-ha/agent-log.conf and change the
>>> level for root logger to DEBUG
>>>
>>> Best regards
>>>
>>> Martin Sivak
>>>
>>> On Wed, Jun 7, 2017 at 11:12 AM, Yanir Quinn  
>>>  wrote:
>>>
>>> If I'm not mistaken, the values in vm.conf are repopulated from the database,
>>> but I wouldn't recommend meddling with DB data.
>>> Maybe the network device wasn't set properly during the hosted-engine
>>> setup?
>>>
>>> On Wed, Jun 7, 2017 at 11:47 AM, Arsène Gschwind 
>>>  
>>> wrote:
>>>
>>> Hi,
>>>
>>> Any chance to get a hint how to change the vm.conf file so it will not be
>>> overwritten constantly?
>>>
>>> Thanks a lot.
>>>
>>> Arsène
>>>
>>>
>>> On 06/06/2017 09:50 AM, Arsène Gschwind wrote:
>>>
>>> Hi,
>>>
>>> I've migrated our oVirt engine to hosted-engine located on a FC storage
>>> LUN, so far so good.
>>> For some reason I'm not able to start the hosted-engine VM,