[ovirt-users] hosted-engine setup/migration features for 3.6

2014-12-03 Thread Yedidyah Bar David
Hi all,

We already have quite a lot of open ovirt-hosted-engine-setup bugs for 3.6 [1].

Yesterday I tried helping someone on IRC who planned to migrate to hosted-engine
manually, apparently without knowing that such a feature exists. He had an engine
set up on a physical host, had prepared a VM for it, and asked about migrating
the engine to that VM. In principle this works, but the final result will be a
hosted-engine where the engine manages the VM that runs itself, without knowing
it, and without HA.

The current recommended migration flow is described in [2]. This page is perhaps
a bit outdated and missing some details, but it works in principle. The main
issue with it, AFAICT after discussing it a bit with a few people, is that it
requires a new, clean host.

I'd like to hear what people here think about such and similar flows.

If you already had an engine and migrated to hosted-engine, what was good, what
was bad, what would you like to change?

If you plan such a migration, what do you find missing currently?

[1] http://red.ht/1vle8Vv
[2] http://www.ovirt.org/Migrate_to_Hosted_Engine

Best,
-- 
Didi


[ovirt-users] Replace Crashed Server

2014-12-03 Thread Punit Dambiwal
Hi,

I have the following architecture:

1. 4 oVirt host nodes with Gluster (distributed replicated
storage, replica=2), with 8 bricks on each server

i.e. I am using the same hosts for compute as well as storage...

My question is: if one day one of my nodes is completely dead, and I have
one spare node with the same configuration and disk space:

1. How can I replace the failed host with this new host?
2. How are the gluster bricks replaced?

I found one URL:
http://gluster.org/community/documentation/index.php/Gluster_3.2:_Brick_Restoration_-_Replace_Crashed_Server

But what about the oVirt side? Every node has its own key and UUID...

Thanks,
Punit
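
For reference, the Gluster page above restores a crashed server by reusing the
dead peer's UUID on the replacement machine. A minimal sketch following that
3.2-era doc, assuming the replacement keeps the dead node's hostname/IP and the
UUID is read from a surviving peer (placeholders are illustrative; the oVirt
side, i.e. reinstalling the host from the engine, is a separate step):

    # on a surviving peer: note the dead node's UUID
    gluster peer status

    # on the replacement host (same hostname/IP as the dead one):
    service glusterd stop
    echo "UUID=<uuid-of-dead-node>" > /var/lib/glusterd/glusterd.info
    service glusterd start

    # probe a surviving peer so the volume info is synced, then heal
    gluster peer probe <surviving-peer>
    gluster volume heal <volname> full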


[ovirt-users] [QE][ACTION REQUIRED] oVirt 3.5.1 RC status

2014-12-03 Thread Sandro Bonazzola
Hi,
We're going to start composing the oVirt 3.5.1 RC on *2014-12-09 08:00 UTC* from
the 3.5 branch.
In order to stabilize the release, a new branch, ovirt-engine-3.5.1, will be
created from the same git hash used for composing the RC.

The bug tracker [1] shows 1 open blocker:

Bug ID   Whiteboard  Status  Summary
1160846  sla         NEW     Can't add disk to VM without specifying a disk
                             profile when the storage domain has more than
                             one disk profile

Maintainers:
- Please make sure that the 3.5 snapshot allows creating VMs before *2014-12-08
15:00 UTC*.
- Please make sure that no pending patches are going to block the release,
before *2014-12-08 15:00 UTC*.
- If any patch must block the RC release, please raise the issue as soon as
possible.
- Please provide an ETA for the pending blockers as soon as possible.

Infra:
- Please check Jenkins status for 3.5 jobs and sync with relevant maintainers 
if there are issues.

There are still 68 bugs [2] targeted to 3.5.1.
Excluding node and documentation bugs, we still have 45 bugs [3] targeted to
3.5.1.

Maintainers / Assignees:
- Please add bugs to the tracker if you think that 3.5.1 should not be
released without them fixed.
- Please update the target to 3.5.2 or later for bugs that won't be in 3.5.1:
  it will ease gathering the blocking bugs for the next releases.
- Please fill in the release notes; the page has been created here [4].

Community:
- If you're testing the oVirt 3.5 nightly snapshot, please add yourself to the
test page [5].


[1] http://bugzilla.redhat.com/1155170
[2] http://goo.gl/7G0PDV
[3] http://goo.gl/6gUbVr
[4] http://www.ovirt.org/OVirt_3.5.1_Release_Notes
[5] http://www.ovirt.org/Testing/oVirt_3.5.1_Testing


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


Re: [ovirt-users] supervdsmServer consumes 30% of host memory

2014-12-03 Thread Sandro Bonazzola
On 02/12/2014 22:15, Dan Kenigsberg wrote:
> On Tue, Dec 02, 2014 at 08:28:55PM +0100, Demeter Tibor wrote:
>> Hi,
>>
>> We have a production ovirt 3.5 cluster with 3 nodes.
>>
>> The first node has 40 GB of memory and runs only one VM, which uses 8 GB of
>> RAM.
>> But supervdsmserver consumes 20% of host memory.
>> The second node has 32 GB of memory with 5 VMs. On this node
>> supervdsmserver is using 30%(!) of memory!
>> On node3 there are 6 VMs with 72 GB of memory. But the supervdsm server
>> uses only 1.7% of memory.
>>
>> Does anyone know why?
> 
> most probably
> 
> Bug 1142647 - supervdsm leaks memory when using glusterfs
> 
> which should be fixed in ovirt-3.5.1. Unfortunately, it won't be released
> today - the release date has been postponed to Dec 9th.
> 
> Since the two glusterfs issues (memleak and segfault) repeat so often,
> I've tagged vdsm-4.16.8 as a release candidate for ovirt-3.5.1.
> 
> Sandro, could you help in building it and placing it somewhere for
> people to try it out? After all, it has 87 (!) patches since 3.5.0, so
> testing is due.

Dan, 4.16.8-0 has already been built in jenkins [1][2][3][4] and has already
been published in the 3.5 nightly snapshot [5].
Anyone who wants to test it is more than welcome to.
Instructions for using the nightly snapshot are on the wiki [6].
Please add yourself to the testing report page [7] if you're going to test it.


[1] http://jenkins.ovirt.org/job/vdsm_3.5_create-rpms-el6-x86_64_merged/133/
[2] http://jenkins.ovirt.org/job/vdsm_3.5_create-rpms-el7-x86_64_merged/130/
[3] http://jenkins.ovirt.org/job/vdsm_3.5_create-rpms-fc19-x86_64_merged/130/
[4] http://jenkins.ovirt.org/job/vdsm_3.5_create-rpms-fc20-x86_64_merged/130/
[5] 
http://jenkins.ovirt.org/view/Publishers/job/publish_ovirt_rpms_nightly_3.5/202/
[6] http://www.ovirt.org/Install_nightly_snapshot
[7] http://www.ovirt.org/Testing/oVirt_3.5.1_Testing
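
For the impatient, a minimal sketch of pulling the candidate from the nightly
repository. The repo id ovirt-3.5-snapshot is an assumption here; the wiki
page [6] has the authoritative instructions:

    # install the oVirt 3.5 release package, then update vdsm from the
    # nightly snapshot repo (repo id assumed; see the wiki [6])
    yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
    yum --enablerepo=ovirt-3.5-snapshot update vdsm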

Thanks,



> 
> Regards,
> Dan.
> 


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


[ovirt-users] [QE][ACTION REQUIRED] oVirt 3.6.0 status

2014-12-03 Thread Sandro Bonazzola
Hi,

The release criteria discussion was closed at last week's oVirt sync meeting
[1].

Release management for 3.6.0 [2] will soon be updated with the accepted changes
to the release criteria.
The remaining key milestones for this release must now be scheduled.

For reference, external project schedules we're tracking are:
Fedora 21: 2014-12-09
Fedora 22: 2015-XX-XX
GlusterFS 3.7: 2015-04-29
OpenStack Kilo: 2015-04-30


Two different proposals have been made about the above scheduling [3]:
1) extend the cycle to 10 months, allowing a large feature set to be included
2) reduce the cycle to less than 6 months and split features over 3.6 and 3.7

Features proposed for 3.6.0 must now be collected in the 3.6 Google doc [4]
and reviewed by maintainers.

The tracker bug for 3.6.0 [5] currently shows no blockers.

There are 453 bugs [6] targeted to 3.6.0.
Excluding node and documentation bugs, we have 430 bugs [7] targeted to 3.6.0.


[1] 
http://resources.ovirt.org/meetings/ovirt/2014/ovirt.2014-11-26-15.07.log.html
[2] http://www.ovirt.org/OVirt_3.6_Release_Management
[3] http://lists.ovirt.org/pipermail/users/2014-November/028875.html
[4] http://goo.gl/9X3G49
[5] https://bugzilla.redhat.com/show_bug.cgi?id=1155425
[6] http://goo.gl/zwkF3r
[7] http://goo.gl/ZbUiMc


-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


Re: [ovirt-users] supervdsmServer consumes 30% of host memory

2014-12-03 Thread Daniel Helgenberger


On 02.12.2014 22:16, Dan Kenigsberg wrote:
> On Tue, Dec 02, 2014 at 08:28:55PM +0100, Demeter Tibor wrote:
>> Hi,
>>
>> We have a production ovirt 3.5 cluster with 3 nodes.
>>
>> The first node has 40 GB of memory and runs only one VM, which uses 8 GB of
>> RAM.
>> But supervdsmserver consumes 20% of host memory.
>> The second node has 32 GB of memory with 5 VMs. On this node
>> supervdsmserver is using 30%(!) of memory!
>> On node3 there are 6 VMs with 72 GB of memory. But the supervdsm server
>> uses only 1.7% of memory.
>>
>> Does anyone know why?
>
> most probably
>
>  Bug 1142647 - supervdsm leaks memory when using glusterfs
>
> which should be fixed in ovirt-3.5.1. Unfortunately, it won't be released
> today - the release date has been postponed to Dec 9th.
>
> Since the two glusterfs issues (memleak and segfault) repeat so often,
> I've tagged vdsm-4.16.8 as a release candidate for ovirt-3.5.1.
>
> Sandro, could you help in building it and placing it somewhere for
> people to try it out? After all, it has 87 (!) patches since 3.5.0, so
> testing is due.
Good to know! I'm giving it a shot; the upgrade went smoothly so far. I'll
report back if something comes up.

>
> Regards,
> Dan.
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

-- 
Daniel Helgenberger
m box bewegtbild GmbH

P: +49/30/2408781-22
F: +49/30/2408781-10

ACKERSTR. 19
D-10115 BERLIN


www.m-box.de  www.monkeymen.tv

Managing directors: Martin Retschitzegger / Michaela Göllner
Commercial register: Amtsgericht Charlottenburg / HRB 112767


[ovirt-users] ovirt-guest-agent on windows : what Python env. needed?

2014-12-03 Thread Nicolas Ecarnot

Hello,

I read the following page:
http://www.ovirt.org/OVirt_Guest_Agent_For_Windows
and applied it on a server, and it worked very well.

I obtained the two executables, copied them into "Program Files"
according to the doc, along with the .ini, as stated here:

https://www.mail-archive.com/users@ovirt.org/msg18561.html

- the "-install", the start, and the enabling went fine
- rebooting the server runs OK too, and the agent is seen by oVirt

What I don't understand is the following sentence of
https://github.com/oVirt/ovirt-guest-agent/blob/master/ovirt-guest-agent/README-windows.txt

"Optionally install py2exe if you want to build an executable file which
doesn't require Python installation for running"

As I don't know Python at all, I thought this was building some sort of
self-contained binary that I could copy-paste into another VM, and do
the same install/enable/run.

And WITHOUT installing any Python environment.

I'm sorry for such a basic question, but if this is not the case, does
that mean I have to install a Python environment on each of my Windows VMs?


BTW, I tried to copy-paste the programfiles/guestagent... directory into another
server, and when running the install, it gives this message

[in French, :( ]
L'application n'a pas pu démarrer car sa configuration côte-à-côte est
incorrecte.

which translates to:
The application could not start because its side-by-side configuration is
incorrect.


PS: This is no longer the time for launching a windows_vs_linux war, but I just
installed the guest agent on 17 _linux_ VMs in 3 minutes...


--
Nicolas Ecarnot


Re: [ovirt-users] ovirt-guest-agent on windows : what Python env. needed?

2014-12-03 Thread Lev Veyde
Hi Nicolas,

If the agent is compiled with py2exe (and since you got .exe files, it was
compiled with py2exe), then the executables are self-contained, and you don't
need to install Python separately in each VM.

All you need is to download and install the VC runtime, which you can get
here:
http://www.microsoft.com/en-us/download/details.aspx?id=5582

That should resolve the issue.
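
For reference, a py2exe build is normally driven from the project's setup
script. A minimal sketch, assuming a setup.py wired for py2exe as described in
README-windows.txt (which remains the authoritative reference):

    # on a Windows build machine with Python 2.x and py2exe installed
    python setup.py py2exe
    # the self-contained .exe files land in the dist\ directory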

BTW, we have the oVirt WGT (Windows Guest Tools) RPM, with an ISO containing an
installer that will install the oVirt Guest Agent (including the VC runtime), as
well as drivers etc., automatically for you.

Thanks in advance,
Lev Veyde.

- Original Message -
From: "Sandro Bonazzola" 
To: "Lev Veyde" 
Sent: Wednesday, December 3, 2014 1:49:00 PM
Subject: Fwd: [ovirt-users] ovirt-guest-agent on windows : what Python env. 
needed?




 Forwarded Message 
Subject: [ovirt-users] ovirt-guest-agent on windows : what Python env. needed?
Date: Wed, 03 Dec 2014 11:49:06 +0100
From: Nicolas Ecarnot 
Organization: Si peu...
To: Users@ovirt.org 

Hello,

I read the following page:
http://www.ovirt.org/OVirt_Guest_Agent_For_Windows
and applied it on a server, and it worked very well.

I obtained the two executables, copied them into "Program Files"
according to the doc, along with the .ini, as stated here:
https://www.mail-archive.com/users@ovirt.org/msg18561.html

- the "-install", the start, and the enabling went fine
- rebooting the server runs OK too, and the agent is seen by oVirt

What I don't understand is the following sentence of
https://github.com/oVirt/ovirt-guest-agent/blob/master/ovirt-guest-agent/README-windows.txt

"Optionally install py2exe if you want to build an executable file which
doesn't require Python installation for running"

As I don't know Python at all, I thought this was building some sort of
self-contained binary that I could copy-paste into another VM, and do
the same install/enable/run.

And WITHOUT installing any Python environment.

I'm sorry for such a basic question, but if this is not the case, does
that mean I have to install a Python environment on each of my Windows VMs?

BTW, I tried to copy-paste the programfiles/guestagent... directory into another
server, and when running the install, it gives this message
[in French, :( ]
L'application n'a pas pu démarrer car sa configuration côte-à-côte est
incorrecte.
which translates to:
The application could not start because its side-by-side configuration is
incorrect.

PS: This is no longer the time for launching a windows_vs_linux war, but I just
installed the guest agent on 17 _linux_ VMs in 3 minutes...

-- 
Nicolas Ecarnot




Re: [ovirt-users] [Gluster-users] Gluster command [] failed on server...

2014-12-03 Thread Kaushal M
I don't know much about how the network target is brought up in
CentOS 7, but I'll try to help as much as I can.

It seems to me that, after the network has been brought up and by the
time GlusterD is started,
a. the machine hasn't yet received its hostname, or
b. it hasn't yet registered with the name server.

This is causing name resolution failures.

I don't know if the network target could come up without the machine
getting its hostname, so I'm pretty sure it's not a.

So it seems to be b. But that kind of registration happens only in DDNS
systems, which doesn't seem to be the case for you.

Both of these reasons might be wrong (most likely they are). You would do
well to ask for help from someone with more experience in
systemd + networking.

~kaushal

On Wed, Dec 3, 2014 at 10:54 AM, Punit Dambiwal  wrote:
> Hi Kaushal,
>
> This is the host which I rebooted... would you mind letting me know how I
> can make the glusterd service come up after the network... I am using CentOS 7...
> if the network is the issue...
>
> On Wed, Dec 3, 2014 at 11:54 AM, Kaushal M  wrote:
>>
>> This peer cannot be identified.
>>
>> " [2014-12-03 02:29:25.998153] D
>> [glusterd-peer-utils.c:121:glusterd_peerinfo_find_by_hostname] 0-management:
>> Unable to find friend: cpu05.zne01.hkg1.ovt.36stack.com"
>>
>> I don't know why this address is not being resolved during boot time. If
>> this is a valid peer, the only reason I can think of is that the
>> network is not up.
>>
>> If you had previously detached the peer forcefully, that could have
>> left stale entries in some volumes. In this case as well, GlusterD will fail
>> to identify the peer.
>>
>> Do either of these reasons seem a possibility to you?
>>
>> On Dec 3, 2014 8:07 AM, "Punit Dambiwal"  wrote:
>>>
>>> Hi Kaushal,
>>>
>>> Please find the logs here :- http://ur1.ca/iyoe5 and http://ur1.ca/iyoed
>>>
>>> On Tue, Dec 2, 2014 at 10:43 PM, Kaushal M  wrote:

 Hey Punit,
 In the logs you've provided, GlusterD appears to be running correctly.
 Could you provide the logs for the time period when GlusterD attempts to
 start but fails.

 ~kaushal

 On Dec 2, 2014 8:03 PM, "Punit Dambiwal"  wrote:
>
> Hi Kaushal,
>
> Please find the logs here :- http://ur1.ca/iyhs5 and
> http://ur1.ca/iyhue
>
> Thanks,
> punit
>
>
> On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M  wrote:
>>
>> Hey Punit,
>> Could you start Glusterd in debug mode and provide the logs here?
>> To start it in debug mode, append '-LDEBUG' to the ExecStart line in
>> the service file.
>>
>> ~kaushal
>>
>> On Mon, Dec 1, 2014 at 9:05 AM, Punit Dambiwal 
>> wrote:
>> > Hi,
>> >
>> > Can Any body help me on this ??
>> >
>> > On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal 
>> > wrote:
>> >>
>> >> Hi Kaushal,
>> >>
>> >> Thanks for the detailed replylet me explain my setup first :-
>> >>
>> >> 1. Ovirt Engine
>> >> 2. 4* host as well as storage machine (Host and gluster combined)
>> >> 3. Every host has 24 bricks...
>> >>
>> >> Now whenever the host machine reboots... it can come up but cannot
>> >> join the
>> >> cluster again, and throws the following error "Gluster command
>> >> []
>> >> failed on server.."
>> >>
>> >> Please check my comment in line :-
>> >>
>> >> 1. Use the same string for doing the peer probe and for the brick
>> >> address
>> >> during volume create/add-brick. Ideally, we suggest you use
>> >> properly
>> >> resolvable FQDNs everywhere. If that is not possible, then use only
>> >> IP
>> >> addresses. Try to avoid short names.
>> >> ---
>> >> [root@cpu05 ~]# gluster peer status
>> >> Number of Peers: 3
>> >>
>> >> Hostname: cpu03.stack.com
>> >> Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb
>> >> State: Peer in Cluster (Connected)
>> >>
>> >> Hostname: cpu04.stack.com
>> >> Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0
>> >> State: Peer in Cluster (Connected)
>> >> Other names:
>> >> 10.10.0.8
>> >>
>> >> Hostname: cpu02.stack.com
>> >> Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25
>> >> State: Peer in Cluster (Connected)
>> >> [root@cpu05 ~]#
>> >> 
>> >> 2. During boot up, make sure to launch glusterd only after the
>> >> network is
>> >> up. This will allow the new peer identification mechanism to do its
>> >> job correctly.
>> >> I think the service itself is doing the same job
>> >>
>> >> [root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service
>> >> [Unit]
>> >> Description=GlusterFS, a clustered file-system server
>> >> After=network.target rpcbind.service
>> >> Before=network-online.target
>> >>
>> >> [Service]
>> >> Type=forking
>> >> PIDFile=/var/run/glusterd.pid
>> >> LimitNOFILE=65536
>> 
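
For reference, one way to get the ordering recommended above on a systemd
distro like CentOS 7 is a drop-in that delays glusterd until the network is
actually online. A minimal sketch, assuming NetworkManager manages the
interfaces (not taken from this thread):

    # /etc/systemd/system/glusterd.service.d/wait-online.conf
    [Unit]
    Wants=network-online.target
    After=network-online.target

    # make network-online.target actually wait for addresses/DNS:
    systemctl enable NetworkManager-wait-online.service
    systemctl daemon-reload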

Re: [ovirt-users] [ovirt-devel] hosted-engine setup/migration features for 3.6

2014-12-03 Thread Bob Doolittle
Resending - inadvertently dropped CCs.

On Wed, Dec 3, 2014 at 7:50 AM, Bob Doolittle  wrote:

> Another issue with that page is that it assumes a remote database. I am
> not sure what percentage of cases have remote databases, but clearly many
> (most?) do not, since that's not the default behavior. So that page definitely
> needs attention. See:
> https://bugzilla.redhat.com/show_bug.cgi?id=105
> https://bugzilla.redhat.com/show_bug.cgi?id=108
>
> Some of us have wanted to disable global maintenance upon bootup by adding
> a systemd service on Fedora 20 (since you must enable global maintenance to
> shut the engine down cleanly), and have found it impossible to create the
> necessary systemd dependencies. It seems that (at least with 3.4) hosted-engine
> --set-maintenance --mode=none will return an error for several seconds
> after all other services have started, and it's not clear what can be waited
> upon in order to issue the command with assurance that it will complete
> successfully. This isn't strictly a setup/migration issue, but it is an
> issue with setting up a desired configuration with hosted-engine. The way
> to reproduce this is simply to wait until gdm-greeter displays the login
> prompt, ssh into the system, execute hosted-engine --set-maintenance
> --mode=none, and observe the error. Or create a systemd service that depends
> upon (waits for) the latest-possible service, try executing the command
> there, and observe the error. Ideally there would be some external
> observable event which a systemd service could depend upon, when
> hosted-engine is ready to do its thing.
>
> Regards,
> Bob
>
>
> On Wed, Dec 3, 2014 at 2:59 AM, Yedidyah Bar David 
> wrote:
>
>> Hi all,
>>
>> We already have quite a lot of open ovirt-hosted-engine-setup bugs for
>> 3.6 [1].
>>
>> Yesterday I tried helping someone on irc who planned to migrate to
>> hosted-engine
>> manually, and without knowing (so it seems) that such a feature exists.
>> He had
>> an engine set up on a physical host, prepared a VM for it, and asked
>> about migrating
>> the engine to the VM. In principle this works, but the final result will
>> be a
>> hosted-engine, where the engine manages a VM that runs itself, without
>> knowing it,
>> and without HA.
>>
>> The current recommended migration flow is described in [2]. This page is
>> perhaps
>> a bit outdated, perhaps missing some details etc., but principally works.
>> The main
>> issue with it, AFAICT after discussing this a bit with a few people, is
>> that it
>> requires a new clean host.
>>
>> I'd like to hear what people here think about such and similar flows.
>>
>> If you already had an engine and migrated to hosted-engine, what was
>> good, what
>> was bad, what would you like to change?
>>
>> If you plan such a migration, what do you find missing currently?
>>
>> [1] http://red.ht/1vle8Vv
>> [2] http://www.ovirt.org/Migrate_to_Hosted_Engine
>>
>> Best,
>> --
>> Didi
>> ___
>> Devel mailing list
>> de...@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
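
For what it's worth, a minimal sketch of the kind of unit Bob describes, with a
retry loop standing in for the missing "hosted-engine is ready" event. The unit
and script names are illustrative, not part of hosted-engine:

    # /etc/systemd/system/he-exit-maintenance.service
    [Unit]
    Description=Leave hosted-engine global maintenance after boot
    After=vdsmd.service ovirt-ha-agent.service

    [Service]
    Type=oneshot
    ExecStart=/usr/local/sbin/he-exit-maintenance.sh

    [Install]
    WantedBy=multi-user.target

    # /usr/local/sbin/he-exit-maintenance.sh
    #!/bin/sh
    # retry until the HA agent accepts the command, then exit
    for i in $(seq 1 60); do
        hosted-engine --set-maintenance --mode=none && exit 0
        sleep 5
    done
    exit 1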


Re: [ovirt-users] [Gluster-users] Gluster command [] failed on server...

2014-12-03 Thread Kaushal M
I just remembered this.

There was another user on the mailing list a while back with a similar issue
of GlusterD failing to start. The cause of his problem was the way his network
was brought up.
IIRC, he was using a static network configuration, and the problem
vanished when he began using DHCP. Or it might have been that he was using
dhcp.service and it got solved after switching to NetworkManager.

This could be one more thing for you to look at.

I'll try to find the mail thread to see if it was the same problem as yours.

~kaushal

On Wed, Dec 3, 2014 at 6:22 PM, Kaushal M  wrote:
> I don't know much about how the network target is brought up in
> CentOS 7, but I'll try to help as much as I can.
>
> It seems to me that, after the network has been brought up and by the
> time GlusterD is started,
> a. the machine hasn't yet received its hostname, or
> b. it hasn't yet registered with the name server.
>
> This is causing name resolution failures.
>
> I don't know if the network target could come up without the machine
> getting its hostname, so I'm pretty sure it's not a.
>
> So it seems to be b. But that kind of registration happens only in DDNS
> systems, which doesn't seem to be the case for you.
>
> Both of these reasons might be wrong (most likely they are). You would do
> well to ask for help from someone with more experience in
> systemd + networking.
>
> ~kaushal
>
> On Wed, Dec 3, 2014 at 10:54 AM, Punit Dambiwal  wrote:
>> Hi Kaushal,
>>
>> This is the host which I rebooted... would you mind letting me know how I
>> can make the glusterd service come up after the network... I am using CentOS 7...
>> if the network is the issue...
>>
>> On Wed, Dec 3, 2014 at 11:54 AM, Kaushal M  wrote:
>>>
>>> This peer cannot be identified.
>>>
>>> " [2014-12-03 02:29:25.998153] D
>>> [glusterd-peer-utils.c:121:glusterd_peerinfo_find_by_hostname] 0-management:
>>> Unable to find friend: cpu05.zne01.hkg1.ovt.36stack.com"
>>>
>>> I don't know why this address is not being resolved during boot time. If
>>> this is a valid peer, the only reason I can think of is that the
>>> network is not up.
>>>
>>> If you had previously detached the peer forcefully, that could have
>>> left stale entries in some volumes. In this case as well, GlusterD will fail
>>> to identify the peer.
>>>
>>> Do either of these reasons seem a possibility to you?
>>>
>>> On Dec 3, 2014 8:07 AM, "Punit Dambiwal"  wrote:

 Hi Kaushal,

 Please find the logs here :- http://ur1.ca/iyoe5 and http://ur1.ca/iyoed

 On Tue, Dec 2, 2014 at 10:43 PM, Kaushal M  wrote:
>
> Hey Punit,
> In the logs you've provided, GlusterD appears to be running correctly.
> Could you provide the logs for the time period when GlusterD attempts to
> start but fails.
>
> ~kaushal
>
> On Dec 2, 2014 8:03 PM, "Punit Dambiwal"  wrote:
>>
>> Hi Kaushal,
>>
>> Please find the logs here :- http://ur1.ca/iyhs5 and
>> http://ur1.ca/iyhue
>>
>> Thanks,
>> punit
>>
>>
>> On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M  wrote:
>>>
>>> Hey Punit,
>>> Could you start Glusterd in debug mode and provide the logs here?
>>> To start it in debug mode, append '-LDEBUG' to the ExecStart line in
>>> the service file.
>>>
>>> ~kaushal
>>>
>>> On Mon, Dec 1, 2014 at 9:05 AM, Punit Dambiwal 
>>> wrote:
>>> > Hi,
>>> >
>>> > Can Any body help me on this ??
>>> >
>>> > On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal 
>>> > wrote:
>>> >>
>>> >> Hi Kaushal,
>>> >>
>>> >> Thanks for the detailed replylet me explain my setup first :-
>>> >>
>>> >> 1. Ovirt Engine
>>> >> 2. 4* host as well as storage machine (Host and gluster combined)
>>> >> 3. Every host has 24 bricks...
>>> >>
>>> >> Now whenever the host machine reboots... it can come up but cannot
>>> >> join the
>>> >> cluster again, and throws the following error "Gluster command
>>> >> []
>>> >> failed on server.."
>>> >>
>>> >> Please check my comment in line :-
>>> >>
>>> >> 1. Use the same string for doing the peer probe and for the brick
>>> >> address
>>> >> during volume create/add-brick. Ideally, we suggest you use
>>> >> properly
>>> >> resolvable FQDNs everywhere. If that is not possible, then use only
>>> >> IP
>>> >> addresses. Try to avoid short names.
>>> >> ---
>>> >> [root@cpu05 ~]# gluster peer status
>>> >> Number of Peers: 3
>>> >>
>>> >> Hostname: cpu03.stack.com
>>> >> Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb
>>> >> State: Peer in Cluster (Connected)
>>> >>
>>> >> Hostname: cpu04.stack.com
>>> >> Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0
>>> >> State: Peer in Cluster (Connected)
>>> >> Other names:
>>> >> 10.10.0.8
>>> >>
>>> >> Hostname: cpu02.stack.com
>>> >> Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25

Re: [ovirt-users] [ovirt-devel] hosted-engine setup/migration features for 3.6

2014-12-03 Thread Yedidyah Bar David
- Original Message -
> From: "Bob Doolittle" 
> To: "Yedidyah Bar David" 
> Sent: Wednesday, December 3, 2014 2:50:12 PM
> Subject: Re: [ovirt-devel] hosted-engine setup/migration features for 3.6
> 
> Another issue with that page is that it assumes a remote database. I am not
> sure what percentage of cases have remote databases, but clearly many
> (most?) do not, since that's not the default behavior.

I agree.

> So that page definitely
> needs attention. See:
> https://bugzilla.redhat.com/show_bug.cgi?id=105

Indeed. Note that this isn't specific to hosted-engine; it's the same for
any migration using engine-backup to back up/restore, which is why there is
a link to its page at the top, where this is detailed further. We also have
a bug [3] to automate this.

[3] https://bugzilla.redhat.com/show_bug.cgi?id=1064503

> https://bugzilla.redhat.com/show_bug.cgi?id=108
> 
> Some of us have wanted to disable global maintenance upon bootup by adding
> a systemd service on Fedora 20 (since you must enable global maintenance to
> shut the engine down cleanly), and have found it impossible to create the
> necessary systemd dependencies. It seems that (at least with 3.4) hosted-engine
> --set-maintenance --mode=none will return an error for several seconds
> after all other services have started, and it's not clear what can be waited
> upon in order to issue the command with assurance that it will complete
> successfully. This isn't strictly a setup/migration issue, but it is an
> issue with setting up a desired configuration with hosted-engine. The way
> to reproduce this is simply to wait until gdm-greeter displays the login
> prompt, ssh into the system, execute hosted-engine --set-maintenance
> --mode=none, and observe the error. Or create a systemd service that depends
> upon (waits for) the latest-possible service, try executing the command
> there, and observe the error. Ideally there would be some external
> observable event which a systemd service could depend upon, when
> hosted-engine is ready to do its thing.

Adding Jiri for that. Do you have an open bug?

Thanks,

> 
> Regards,
> Bob
> 
> 
> On Wed, Dec 3, 2014 at 2:59 AM, Yedidyah Bar David  wrote:
> 
> > Hi all,
> >
> > We already have quite a lot of open ovirt-hosted-engine-setup bugs for 3.6
> > [1].
> >
> > Yesterday I tried helping someone on irc who planned to migrate to
> > hosted-engine
> > manually, and without knowing (so it seems) that such a feature exists. He
> > had
> > an engine set up on a physical host, prepared a VM for it, and asked about
> > migrating
> > the engine to the VM. In principle this works, but the final result will
> > be a
> > hosted-engine, where the engine manages a VM that runs itself, without
> > knowing it,
> > and without HA.
> >
> > The current recommended migration flow is described in [2]. This page is
> > perhaps
> > a bit outdated, perhaps missing some details etc., but principally works.
> > The main
> > issue with it, AFAICT after discussing this a bit with a few people, is
> > it
> > requires a new clean host.
> >
> > I'd like to hear what people here think about such and similar flows.
> >
> > If you already had an engine and migrated to hosted-engine, what was good,
> > what
> > was bad, what would you like to change?
> >
> > If you plan such a migration, what do you find missing currently?
> >
> > [1] http://red.ht/1vle8Vv
> > [2] http://www.ovirt.org/Migrate_to_Hosted_Engine
> >
> > Best,
> > --
> > Didi
> > ___
> > Devel mailing list
> > de...@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/devel
> >
> 

-- 
Didi



Re: [ovirt-users] [ovirt-devel] hosted-engine setup/migration features for 3.6

2014-12-03 Thread Yedidyah Bar David
- Original Message -
> From: "Yedidyah Bar David" 
> To: "Bob Doolittle" 
> Cc: "users" , "devel" 
> Sent: Wednesday, December 3, 2014 3:04:17 PM
> Subject: Re: [ovirt-devel] hosted-engine setup/migration features for 3.6
> 
> - Original Message -
> > From: "Bob Doolittle" 
> > To: "Yedidyah Bar David" 
> > Sent: Wednesday, December 3, 2014 2:50:12 PM
> > Subject: Re: [ovirt-devel] hosted-engine setup/migration features for 3.6
> > 
> > Another issue with that page is that it assumes a remote database. I am not
> > sure what percentage of cases have remote databases, but clearly many
> > (most?) do not, since that's not the default behavior.
> 
> I agree.
> 
> > So that page definitely
> > needs attention. See:
> > https://bugzilla.redhat.com/show_bug.cgi?id=105
> 
> Indeed. Note that this isn't specific to hosted-engine; it's the same for
> any migration using engine-backup to back up/restore, which is why there is
> a link to its page at the top, where this is detailed further.

I now added another note next to the restore text.

> We also have
> a bug [3] to automate this.
> 
> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1064503
> 
> > https://bugzilla.redhat.com/show_bug.cgi?id=108
> > 
> > Some of us have wanted to disable global maintenance upon bootup by adding
> > a systemd service on Fedora 20 (since you must enable global maintenance to
> > shut the engine down cleanly), and have found it impossible to create the
> > necessary systemd dependencies. It seems that (at least with 3.4) hosted-engine
> > --set-maintenance --mode=none will return an error for several seconds
> > after all other services have started, and it's not clear what can be waited
> > upon in order to issue the command with assurance that it will complete
> > successfully. This isn't strictly a setup/migration issue, but it is an
> > issue with setting up a desired configuration with hosted-engine. The way
> > to reproduce this is simply to wait until gdm-greeter displays the login
> > prompt, ssh into the system, execute hosted-engine --set-maintenance
> > --mode=none, and observe the error. Or create a systemd service that depends
> > upon (waits for) the latest-possible service, try executing the command
> > there, and observe the error. Ideally there would be some external
> > observable event which a systemd service could depend upon, when
> > hosted-engine is ready to do its thing.
> 
> Adding Jiri for that. Do you have an open bug?
> 
> Thanks,
> 
> > 
> > Regards,
> > Bob
> > 
> > 
> > On Wed, Dec 3, 2014 at 2:59 AM, Yedidyah Bar David  wrote:
> > 
> > > Hi all,
> > >
> > > We already have quite a lot of open ovirt-hosted-engine-setup bugs for
> > > 3.6
> > > [1].
> > >
> > > Yesterday I tried helping someone on irc who planned to migrate to
> > > hosted-engine
> > > manually, and without knowing (so it seems) that such a feature exists.
> > > He
> > > had
> > > an engine set up on a physical host, prepared a VM for it, and asked
> > > about
> > > migrating
> > > the engine to the VM. In principle this works, but the final result will
> > > be a
> > > hosted-engine, where the engine manages a VM that runs itself, without
> > > knowing it,
> > > and without HA.
> > >
> > > The current recommended migration flow is described in [2]. This page is
> > > perhaps
> > > a bit outdated, perhaps missing some details etc., but principally works.
> > > The main
> > > issue with it, AFAICT after discussing this a bit with a few people, is
> > > that
> > > it
> > > requires a new clean host.
> > >
> > > I'd like to hear what people here think about such and similar flows.
> > >
> > > If you already had an engine and migrated to hosted-engine, what was
> > > good,
> > > what
> > > was bad, what would you like to change?
> > >
> > > If you plan such a migration, what do you find missing currently?
> > >
> > > [1] http://red.ht/1vle8Vv
> > > [2] http://www.ovirt.org/Migrate_to_Hosted_Engine
> > >
> > > Best,
> > > --
> > > Didi
> > > ___
> > > Devel mailing list
> > > de...@ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/devel
> > >
> > 
> 
> --
> Didi
> 
> ___
> Devel mailing list
> de...@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
> 

Best,
-- 
Didi


[ovirt-users] is it possible to run ovirt node on Diskless HW?

2014-12-03 Thread Arman Khalatyan
Hello,

Following the steps in:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/diskless-nfs-config.html

I would like to know: has someone succeeded in running the host on a diskless
machine?
I am using a CentOS 6.6 node with oVirt 3.5.
Thanks,
Arman.




***

Dr. Arman Khalatyan
eScience / SuperComputing
Leibniz-Institut für Astrophysik Potsdam (AIP)
An der Sternwarte 16, 14482 Potsdam, Germany

***


[ovirt-users] Hypervisor won't assign default gateway

2014-12-03 Thread Aslam, Usman
I have a couple of bonded NICs with two bridged VLANs, ovirtmgmt and primary.
The primary VLAN also has a gateway set up (all using the oVirt UI). The gateway
shows up in the ifcfg file, and I've added it to /etc/sysconfig/network as well.

Problem: upon a reboot, the default gateway does not show up (route -n).

One thing I did notice is that VDSM is inserting

DEFROUTE=no  in ifcfg-primary
and
DEFROUTE=yes in ifcfg-ovirtmgmt

If I issue the "route add default gw x.x.x.x" command, it starts working
as expected and the default route also shows up (route -n).

Any ideas?

Thanks,
Usman


Re: [ovirt-users] Hypervisor won't assign default gateway

2014-12-03 Thread Aslam, Usman
I've worked around this issue by:

flipping the DEFROUTE parameter on both VLANs,
AND
adding "net_persistence = ifcfg" to /etc/vdsm/vdsm.conf.

I'm sure this isn't ideal, but it works. If I update the configuration using the
web UI, it requires me to manually edit and flip the DEFROUTE argument again.
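
Concretely, the workaround amounts to something like this (a sketch only; the
ifcfg names match my setup, and net_persistence lives in the [vars] section of
vdsm.conf):

    # /etc/vdsm/vdsm.conf
    [vars]
    net_persistence = ifcfg

    # /etc/sysconfig/network-scripts/ifcfg-primary
    DEFROUTE=yes
    # /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
    DEFROUTE=no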

Is there a way to control how this flag is set?

Thanks,
Usman


From: Aslam, Usman
Sent: Wednesday, December 03, 2014 12:44 PM
To: 'users@ovirt.org'
Subject: Hypervisor won't assign default gateway

I have a couple of bonded NICs with two bridged VLANs, ovirtmgmt and primary.
The primary VLAN also has a gateway set up (all using the oVirt UI). The gateway
shows up in the ifcfg file, and I've added it to /etc/sysconfig/network as well.

Problem: upon a reboot, the default gateway does not show up (route -n).

One thing I did notice is that VDSM is inserting

DEFROUTE=no  in ifcfg-primary
and
DEFROUTE=yes in ifcfg-ovirtmgmt

If I issue the "route add default gw x.x.x.x" command, it starts working
as expected and the default route also shows up (route -n).

Any ideas?

Thanks,
Usman


Re: [ovirt-users] [QE][ACTION REQUIRED] oVirt 3.6.0 status

2014-12-03 Thread Robert Story
On Wed, 03 Dec 2014 10:37:19 +0100 Sandro wrote:
SB> Two different proposals have been made about the above scheduling [3]:
SB> 1) extend the cycle to 10 months, allowing a large feature set to be
SB> included
SB> 2) reduce the cycle to less than 6 months and split features over 3.6
SB> and 3.7

I'd prefer a six-month cycle, so that the smaller features and enhancements
come more quickly.

Robert

-- 
Senior Software Engineer @ Parsons




Re: [ovirt-users] Backup solution using the API

2014-12-03 Thread plysan
Hi,

For a live backup, I think you can make a live snapshot of the VM, and
then clone a new VM from that snapshot; after that you can do the export.
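
A minimal sketch of that flow against the 3.5-era REST API. The VM name,
cluster, export domain and IDs are illustrative; check the exact XML against
the REST API documentation:

    # 1. take a live snapshot
    curl -k -u admin@internal:pass -H 'Content-Type: application/xml' \
         -d '<snapshot><description>backup</description></snapshot>' \
         https://engine/api/vms/<vm-id>/snapshots

    # 2. once the snapshot is ready, clone a new VM from it
    curl -k -u admin@internal:pass -H 'Content-Type: application/xml' \
         -d '<vm><name>myvm-backup</name>
             <cluster><name>Default</name></cluster>
             <snapshots><snapshot id="<snapshot-id>"/></snapshots></vm>' \
         https://engine/api/vms

    # 3. export the clone to the export storage domain
    curl -k -u admin@internal:pass -H 'Content-Type: application/xml' \
         -d '<action><storage_domain><name>export1</name></storage_domain></action>' \
         https://engine/api/vms/<clone-id>/export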

2014-11-27 23:12 GMT+08:00 Keppler, Thomas (PEBA) :

>  Hello,
>
> now that our oVirt Cluster runs great, I'd like to know if anybody has
> done a backup solution for oVirt before.
>
> My basic idea goes as follows:
> - Create a JSON file with machine's preferences for later restore, create
> a snapshot, snatch the disk by its disk-id (copy it to the fileserver),
> then deleting the snapshot and be done with it.
> - On restore I'd just create a new VM with the disks and NICs specified in
> the preferences; then I'd go ahead and put back the disk...
>
> I've played a little bit around with building a JSON file and so far it
> works great; I haven't tried to make a VM with that, though...
>
> Using the export domain or a simple export command is not what I want
> since you'd have to turn off the machine to do that - AFAIK. Correct me if
> that should not be true.
>
> Now, before I go into any more hassle, has somebody else of you done a
> live-backup solution for oVirt? Are there any recommendations? Thanks for
> any help provided!
>
> Best regards
> Thomas Keppler
>
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>


Re: [ovirt-users] Backup solution using the API

2014-12-03 Thread Liron Aravot
Hi Thomas/plysan,
oVirt has the Backup/Restore API, designed to provide the capabilities to
back up and restore a VM.
See here:
http://www.ovirt.org/Features/Backup-Restore_API_Integration

- Original Message -
> From: "plysan" 
> To: "Thomas Keppler (PEBA)" 
> Cc: users@ovirt.org
> Sent: Thursday, December 4, 2014 3:51:24 AM
> Subject: Re: [ovirt-users] Backup solution using the API
> 
> Hi,
> 
> For the live-backup, i think you can make a live snapshot of the vm, and then
> clone a new vm from that snapshot, after that you can do export.
> 
> 2014-11-27 23:12 GMT+08:00 Keppler, Thomas (PEBA) < thomas.kepp...@kit.edu >
> :
> 
> 
> 
> Hello,
> 
> now that our oVirt Cluster runs great, I'd like to know if anybody has done a
> backup solution for oVirt before.
> 
> My basic idea goes as follows:
> - Create a JSON file with machine's preferences for later restore, create a
> snapshot, snatch the disk by its disk-id (copy it to the fileserver), then
> deleting the snapshot and be done with it.
> - On restore I'd just create a new VM with the disks and NICs specified in
> the preferences; then I'd go ahead and put back the disk...
> 
> I've played a little bit around with building a JSON file and so far it works
> great; I haven't tried to make a VM with that, though...
> 
> Using the export domain or a simple export command is not what I want since
> you'd have to turn off the machine to do that - AFAIK. Correct me if that
> should not be true.
> 
> Now, before I go into any more hassle, has somebody else of you done a
> live-backup solution for oVirt? Are there any recommendations? Thanks for
> any help provided!
> 
> Best regards
> Thomas Keppler
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 