Your suggestion solved the problem.
The relevant flag is still missing in the UI, but the VMs are now using gfapi.
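For what it's worth, this is roughly how I would verify on the host that a running VM goes through gfapi rather than the FUSE mount (a sketch of my own, not taken from the thread; <vm-name> below is just a placeholder):

    # List running VMs (read-only libvirt connection, no credentials needed)
    virsh -r list

    # A gfapi-backed disk shows up as type='network' with protocol='gluster'
    # instead of a file path under the FUSE mount
    virsh -r dumpxml <vm-name> | grep -B2 -A4 "protocol='gluster'"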

On 11/09/2017 05:23, Sahina Bose wrote:
You could try enabling the config option for the 4.1 cluster level using the engine-config tool from the Hosted Engine VM. Note that this requires a restart of the engine service and enables gfapi access for all clusters at the 4.1 level, so use this option only if that is acceptable.
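A minimal sketch of the commands this would involve, run on the Hosted Engine VM; the key name LibgfApiSupported is an assumption on my part, since the exact option is not spelled out above:

    # Show the current value per cluster level (key name assumed)
    engine-config -g LibgfApiSupported

    # Enable gfapi access for all clusters at the 4.1 level
    engine-config -s LibgfApiSupported=true --cver=4.1

    # The change only takes effect after the engine service is restarted
    systemctl restart ovirt-engine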

On Wed, Aug 30, 2017 at 8:02 PM, Stefano Danzi <s.da...@hawai.it> wrote:

    Below are the logs.
    PS: the cluster compatibility level is 4.1

    engine:

    2017-08-30 16:26:07,928+02 INFO
    [org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-8)
    [56d090c5-1097-4641-b745-74af8397d945] Lock Acquired to object
    'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'
    2017-08-30 16:26:07,951+02 WARN
    [org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-8)
    [56d090c5-1097-4641-b745-74af8397d945] Validation of action
    'UpdateCluster' failed for user admin@internal. Reasons:
    
VAR__TYPE__CLUSTER,VAR__ACTION__UPDATE,CLUSTER_CANNOT_UPDATE_SUPPORTED_FEATURES_WITH_LOWER_HOSTS
    2017-08-30 16:26:07,952+02 INFO
    [org.ovirt.engine.core.bll.UpdateClusterCommand] (default task-8)
    [56d090c5-1097-4641-b745-74af8397d945] Lock freed to object
    'EngineLock:{exclusiveLocks='[]', sharedLocks='[]'}'

    vdsm:

    2017-08-30 16:29:23,310+0200 INFO  (jsonrpc/0)
    [jsonrpc.JsonRpcServer] RPC call GlusterHost.list succeeded in
    0.15 seconds (__init__:539)
    2017-08-30 16:29:23,419+0200 INFO  (jsonrpc/4)
    [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in
    0.01 seconds (__init__:539)
    2017-08-30 16:29:23,424+0200 INFO  (jsonrpc/3)
    [jsonrpc.JsonRpcServer] RPC call Host.getAllVmIoTunePolicies
    succeeded in 0.00 seconds (__init__:539)
    2017-08-30 16:29:23,814+0200 INFO  (jsonrpc/5)
    [jsonrpc.JsonRpcServer] RPC call GlusterHost.list succeeded in
    0.15 seconds (__init__:539)
    2017-08-30 16:29:24,011+0200 INFO  (Reactor thread)
    [ProtocolDetector.AcceptorImpl] Accepted connection from ::1:51862
    (protocoldetector:72)
    2017-08-30 16:29:24,023+0200 INFO  (Reactor thread)
    [ProtocolDetector.Detector] Detected protocol stomp from ::1:51862
    (protocoldetector:127)
    2017-08-30 16:29:24,024+0200 INFO  (Reactor thread)
    [Broker.StompAdapter] Processing CONNECT request (stompreactor:103)
    2017-08-30 16:29:24,031+0200 INFO  (JsonRpc (StompReactor))
    [Broker.StompAdapter] Subscribe command received (stompreactor:130)
    2017-08-30 16:29:24,287+0200 INFO  (jsonrpc/2)
    [jsonrpc.JsonRpcServer] RPC call Host.getHardwareInfo succeeded in
    0.01 seconds (__init__:539)
    2017-08-30 16:29:24,443+0200 INFO  (jsonrpc/7) [vdsm.api] START
    getSpmStatus(spUUID=u'00000002-0002-0002-0002-0000000001ef',
    options=None) from=::ffff:192.168.1.55,46502, flow_id=1f664a9,
    task_id=c856903a-0af1-4c0c-8a44-7971fee7dffa (api:46)
    2017-08-30 16:29:24,446+0200 INFO  (jsonrpc/7) [vdsm.api] FINISH
    getSpmStatus return={'spm_st': {'spmId': 1, 'spmStatus': 'SPM',
    'spmLver': 1430L}} from=::ffff:192.168.1.55,46502,
    flow_id=1f664a9, task_id=c856903a-0af1-4c0c-8a44-7971fee7dffa (api:52)
    2017-08-30 16:29:24,447+0200 INFO  (jsonrpc/7)
    [jsonrpc.JsonRpcServer] RPC call StoragePool.getSpmStatus
    succeeded in 0.00 seconds (__init__:539)
    2017-08-30 16:29:24,460+0200 INFO  (jsonrpc/6)
    [jsonrpc.JsonRpcServer] RPC call GlusterHost.list succeeded in
    0.16 seconds (__init__:539)
    2017-08-30 16:29:24,467+0200 INFO  (jsonrpc/1) [vdsm.api] START
    getStoragePoolInfo(spUUID=u'00000002-0002-0002-0002-0000000001ef',
    options=None) from=::ffff:192.168.1.55,46506, flow_id=1f664a9,
    task_id=029ec55e-9c47-4a20-be44-8c80fd1fd5ac (api:46)


    On 30/08/2017 16:06, Shani Leviim wrote:
    Hi Stefano,
    Can you please attach your engine and vdsm logs?

    Regards,
    Shani Leviim

    On Wed, Aug 30, 2017 at 12:46 PM, Stefano Danzi <s.da...@hawai.it> wrote:

        Hello,
        I have a test environment with a single host and a self-hosted
        engine running oVirt Engine 4.1.5.2-1.el7.centos.

        I want to try the option "Native Access on gluster storage
        domain", but I get an error because I have to put the host in
        maintenance mode. I can't do that because I have a single host,
        so the hosted engine can't be migrated.

        Is there a way to change this option and apply it at the next
        reboot?

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
