Re: [vdsm] Future of Vdsm network configuration

2012-11-13 Thread Alon Bar-Lev

- Original Message - 

> From: "Mark Wu" 
> To: "Alon Bar-Lev" 
> Cc: "Dan Kenigsberg" , vdsm-de...@fedorahosted.org
> Sent: Tuesday, November 13, 2012 5:39:12 AM
> Subject: Re: [vdsm] Future of Vdsm network configuration

> On 11/11/2012 10:46 PM, Alon Bar-Lev wrote:
> > - Original Message -
> > > From: "Dan Kenigsberg" 
> > > To: vdsm-de...@fedorahosted.org
> > > Sent: Sunday, November 11, 2012 4:07:30 PM
> > > Subject: [vdsm] Future of Vdsm network configuration
> > > 
> > > Hi,
> > > 
> > > Nowadays, when vdsm receives the setupNetwork verb, it mangles
> > > /etc/sysconfig/network-scripts/ifcfg-* files and restarts the network
> > > service, so they are read by the responsible SysV service.
> > > 
> > > This is very much Fedora-oriented, and not up with the new themes
> > > in Linux network configuration. Since we want oVirt and Vdsm to be
> > > distribution agnostic, and support new features, we have to change.
> > > 
> > > setupNetwork is responsible for two different things:
> > > (1) configure the host networking interfaces, and
> > > (2) create virtual networks for guests and connect them to the world
> > > over (1).
> > > 
> > > Functionality (2) is provided by building Linux software bridges and
> > > vlan devices. I'd like to explore moving it to Open vSwitch, which
> > > would enable a host of functionalities that we currently lack (e.g.
> > > tunneling). One thing that worries me is the need to reimplement our
> > > config snapshot/recovery on ovs's database.
> > > 
> > > As far as I know, ovs is unable to maintain host-level parameters of
> > > interfaces (e.g. eth0's IPv4 address), so we need another tool for
> > > functionality (1): either speak to NetworkManager directly, or use
> > > NetCF, via its libvirt virInterface* wrapper.
> > > 
> > > I have minor worries about NetCF's breadth of testing and usage; I
> > > know it is intended to be cross-platform, but unlike ovs, I am not
> > > aware of wide Debian usage thereof. On the other hand, its API has
> > > been ready for vdsm's usage for quite a while.
> > > 
> > > NetworkManager has become ubiquitous, and we'd better integrate with
> > > it better than our current setting of NM_CONTROLLED=no. But as DPB
> > > tells us,
> > > https://lists.fedorahosted.org/pipermail/vdsm-devel/2012-November/001677.html
> > > we'd better offload integration with NM to libvirt.
> > > 
> > > We would like to take network configuration in VDSM to the next level
> > > and make it distribution agnostic, in addition to setting up the
> > > infrastructure for more advanced features going forward.
> > > The path we are thinking of taking is to integrate with OVS and, for
> > > feature completeness, use NetCF via its libvirt virInterface* wrapper.
> > > Any comments or feedback on this proposal are welcome.
> > > 
> > > Thanks to the oVirt net team members whose input has helped in
> > > writing this email.
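As context for the ifcfg mangling described above: the files vdsm rewrites
live under /etc/sysconfig/network-scripts/, one per device. A minimal sketch
of the pair it would generate for a bridged management network follows; the
device names and values are illustrative, not taken from this thread.

ifcfg-eth0:
DEVICE=eth0
ONBOOT=yes
BRIDGE=ovirtmgmt
NM_CONTROLLED=no

ifcfg-ovirtmgmt:
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=dhcp
NM_CONTROLLED=no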
> > Hi,
> 

> > As far as I see this, network manager is a monster of a dependency to
> > have just to create bridges or configure network interfaces... It is
> > true that on a host where network manager lives it would be impolite
> > to define network resources other than via its interface; however, I
> > don't like that we force network manager.
> 

> > libvirt has long been used not as a virtualization library but as a
> > system management agent; I am not sure this is the best system agent
> > I would have chosen.
> 

> > I think that all the terms and building blocks got lost over time...
> > and the resulting integration became more and more complex.
> 

> > Stabilizing such a multi-layered component environment is much harder
> > than stabilizing a monolithic one.
> 

> > I would really want to see vdsm as a monolithic component with full
> > control over its resources; I believe this is the only way vdsm can
> > be stable enough to be production grade.
> 

> > The hypervisor should be a total slave of the manager (or cluster),
> > so I have no problem with bypassing/disabling any
> > distribution-specific tool in favour of atoms (brctl, iproute), in
> > non-persistent mode.
> 
> Do you mean just using the utilities (brctl, iproute) on demand and
> not keeping any network configuration on the vdsm host? Then the
> manager needs to reconfigure the network on every host reboot.
> Actually, I like this way. It could be more flexible than libvirt's
> virInterface (netcf or NM) and give fine-grained control to handle
> some tough cases. Moreover, it's cleaner than the current mangling of
> network configuration files.

Yes, exactly. 

> > I know this requires some more work, but I don't think it is that
> > complex to implement and maintain.
> > 
> > Just my 2 cents...
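As a concrete (and deliberately minimal) sketch of the atoms idea: the snippet
below builds a guest bridge with plain brctl/ip calls and writes nothing under
/etc/sysconfig, so the configuration evaporates on reboot. The device and
bridge names are hypothetical, and vdsm's real rollback and error handling are
left out.

import subprocess

def run(*cmd):
    # Each "atom" is a plain external command; raises on failure.
    subprocess.check_call(cmd)

def setup_bridge(bridge, nic, vlan=None):
    dev = nic
    if vlan is not None:
        # Create a vlan device on top of the NIC, e.g. eth1.100.
        dev = '%s.%d' % (nic, vlan)
        run('ip', 'link', 'add', 'link', nic, 'name', dev,
            'type', 'vlan', 'id', str(vlan))
        run('ip', 'link', 'set', dev, 'up')
    # Build the guest bridge and enslave the device; no ifcfg-* files
    # are written, so none of this survives a reboot.
    run('brctl', 'addbr', bridge)
    run('brctl', 'addif', bridge, dev)
    run('ip', 'link', 'set', bridge, 'up')

setup_bridge('vmdata', 'eth1', vlan=100)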

Re: [vdsm] Future of Vdsm network configuration

2012-11-13 Thread Simon Grinberg

> > The hypervisor should be a total slave of the manager (or cluster),
> > so I have no problem with bypassing/disabling any distribution-specific
> > tool in favour of atoms (brctl, iproute), in non-persistent mode.


> Do you mean just using the utilities (brctl, iproute) on demand and
> not keeping any network configuration on the vdsm host? Then the
> manager needs to reconfigure the network on every host reboot.
> Actually, I like this way. It could be more flexible than libvirt's
> virInterface (netcf or NM) and give fine-grained control to handle
> some tough cases. Moreover, it's cleaner than the current mangling of
> network configuration files.

+1,
I've raised this in the past: I don't think the network configuration done by
the engine should be persisted. This way the admin sets up the node in a
persistent way such that it always succeeds to boot and has a route to the
engine. On node activation, the engine updates the network, connects to
storage, etc.
 

Now that setupNetworks can do it in one atomic operation, this is the way to
go; very simple.
It also eases the move of a node from cluster to cluster. With the current
concept, after you move the host you need to modify the node's networks to fit
the new cluster topology. With non-persistent configuration, placing a host
into maintenance should also revert the host to its original networking after
boot, the same as it disconnects the storage. Now you can easily move the node
from cluster to cluster or even to a different DC. As soon as you activate it,
it is configured with the new DC/Cluster pair's requirements.
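To illustrate the atomicity point, here is a rough sketch of the kind of
single request setupNetworks accepts; the dict layout and option names below
are approximations rather than the exact vdsm API, and `api` stands in for
whatever client handle the engine uses.

networks = {
    'ovirtmgmt': {'nic': 'eth0', 'bootproto': 'dhcp', 'bridged': True},
    'vmdata': {'bonding': 'bond0', 'vlan': 100},
}
bondings = {
    'bond0': {'nics': ['eth1', 'eth2']},
}
# One call applies (or rolls back) the whole topology at once, so the
# host is never left halfway between the old and new cluster networks.
api.setupNetworks(networks, bondings, {'connectivityCheck': True})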

And it's a valuable step in the direction of -> Go go dynamic host allocation 
:) 


Re: [vdsm] Future of Vdsm network configuration

2012-11-13 Thread Alon Bar-Lev


- Original Message -
> From: "Simon Grinberg" 
> To: "Mark Wu" 
> Cc: vdsm-de...@fedorahosted.org, "Alon Bar-Lev" 
> Sent: Tuesday, November 13, 2012 10:06:42 AM
> Subject: Re: [vdsm] Future of Vdsm network configuration
> 
> 
> > > The hypervisor should be a total slave of the manager (or
> > > cluster), so I have no problem with bypassing/disabling any
> > > distribution-specific tool in favour of atoms (brctl, iproute), in
> > > non-persistent mode.
> 
> 
> > Do you mean just using the utilities (brctl, iproute) on demand and
> > not keeping any network configuration on the vdsm host? Then the
> > manager needs to reconfigure the network on every host reboot.
> > Actually, I like this way. It could be more flexible than libvirt's
> > virInterface (netcf or NM) and give fine-grained control to handle
> > some tough cases. Moreover, it's cleaner than the current mangling
> > of network configuration files.
> 
> +1,
> I've raised this in the past: I don't think the network configuration
> done by the engine should be persisted. This way the admin sets up
> the node in a persistent way such that it always succeeds to boot
> and has a route to the engine. On node activation, the engine updates
> the network, connects to storage, etc.
> 
> Now that setupNetworks can do it in one atomic operation, this is the
> way to go; very simple.
> It also eases the move of a node from cluster to cluster. With the
> current concept, after you move the host you need to modify the
> node's networks to fit the new cluster topology. With non-persistent
> configuration, placing a host into maintenance should also revert the
> host to its original networking after boot, the same as it
> disconnects the storage. Now you can easily move the node from
> cluster to cluster or even to a different DC. As soon as you activate
> it, it is configured with the new DC/Cluster pair's requirements.
> 
> And it's a valuable step in the direction of -> Go go dynamic host
> allocation :)
> 

So I am not the only insane one here!
Good to know!

Alon


Re: [vdsm] Future of Vdsm network configuration

2012-11-13 Thread Simon Grinberg


- Original Message -
> From: "Alon Bar-Lev" 
> To: "Simon Grinberg" 
> Cc: vdsm-de...@fedorahosted.org, "Mark Wu" 
> Sent: Tuesday, November 13, 2012 10:13:41 AM
> Subject: Re: [vdsm] Future of Vdsm network configuration
> 
> 
> 
> - Original Message -
> > From: "Simon Grinberg" 
> > To: "Mark Wu" 
> > Cc: vdsm-de...@fedorahosted.org, "Alon Bar-Lev" 
> > Sent: Tuesday, November 13, 2012 10:06:42 AM
> > Subject: Re: [vdsm] Future of Vdsm network configuration
> > 
> > 
> > > > The hypervisor should be a total slave of the manager (or
> > > > cluster), so I have no problem with bypassing/disabling any
> > > > distribution-specific tool in favour of atoms (brctl, iproute),
> > > > in non-persistent mode.
> > 
> > 
> > > Do you mean just using the utilities (brctl, iproute) on demand
> > > and not keeping any network configuration on the vdsm host? Then
> > > the manager needs to reconfigure the network on every host reboot.
> > > Actually, I like this way. It could be more flexible than
> > > libvirt's virInterface (netcf or NM) and give fine-grained control
> > > to handle some tough cases. Moreover, it's cleaner than the
> > > current mangling of network configuration files.
> > 
> > +1,
> > I've raised this in the past: I don't think the network
> > configuration done by the engine should be persisted. This way the
> > admin sets up the node in a persistent way such that it always
> > succeeds to boot and has a route to the engine. On node activation,
> > the engine updates the network, connects to storage, etc.
> > 
> > Now that setupNetworks can do it in one atomic operation, this is
> > the way to go; very simple.
> > It also eases the move of a node from cluster to cluster. With the
> > current concept, after you move the host you need to modify the
> > node's networks to fit the new cluster topology. With non-persistent
> > configuration, placing a host into maintenance should also revert
> > the host to its original networking after boot, the same as it
> > disconnects the storage. Now you can easily move the node from
> > cluster to cluster or even to a different DC. As soon as you
> > activate it, it is configured with the new DC/Cluster pair's
> > requirements.
> > 
> > And it's a valuable step in the direction of -> Go go dynamic host
> > allocation :)
> > 
> 
> So I am not the only insane one here!
> Good to know!

Hold your horses; all I've said is that I strongly agree that networks should
be dynamically set from the engine. I did not comment on how.

If there is a cross-distribution utility out there that can do this in a
*reliable* manner and actually *offloads* logic from VDSM, meaning it's
simpler than direct usage of mkdev, ip, brctl, etc. to configure the host's
networking, it should be considered.

What's important is the goal, not the way there.
One of my goals is returning to the stateless node concept that the oVirt node
started with so many, many years (5?) ago. It just makes sense.

BTW,
I've always been insane; they just haven't caught up with me yet.

> 
> Alon
> 


[vdsm] [PEP 8] About configuring editor plugins for checking PEP 8

2012-11-13 Thread Zhou Zheng Sheng
Hi all,

In the latest version of the pep8 checking tool, the rules are very strict.
Currently the VDSM project applies a less strict rule set by suppressing
some errors from pep8. You can find them in Makefile.am. Under the
"check-local:" target, you will see

--ignore=E121,E122,E123,E124,E125,E126,E127,E128,E241


I know some of us use editor plugins to check pep8 errors. Those
plugins invoke flake8/pep8 and report errors. To ignore the rules not
needed by VDSM in the editor, create a file at ~/.config/pep8 with the
following text.

[pep8]
ignore=E121,E122,E123,E124,E125,E126,E127,E128,E241
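To run the same check from a shell with exactly the rule set the build uses
(the target path below is illustrative), the list can also be passed directly
on the command line:

pep8 --ignore=E121,E122,E123,E124,E125,E126,E127,E128,E241 vdsm/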

-- 
Thanks and best regards!

Zhou Zheng Sheng / 周征晟
E-mail: zhshz...@linux.vnet.ibm.com
Telephone: 86-10-82454397



Re: [vdsm] [PEP 8] About configuring editor plugins for checking PEP 8

2012-11-13 Thread Vinzenz Feenstra

On 11/13/2012 12:16 PM, Zhou Zheng Sheng wrote:

Hi all,

In the latest version of the pep8 checking tool, the rules are very strict.
Currently the VDSM project applies a less strict rule set by suppressing
some errors from pep8. You can find them in Makefile.am. Under the
"check-local:" target, you will see

--ignore=E121,E122,E123,E124,E125,E126,E127,E128,E241


I know some of us use editor plugins to check pep8 errors. Those
plugins invoke flake8/pep8 and report errors. To ignore the rules not
needed by VDSM in the editor, create a file at ~/.config/pep8 with the
following text.

[pep8]
ignore=E121,E122,E123,E124,E125,E126,E127,E128,E241


Hi,

I am actually for being more strict rather than more relaxed. I don't see the
point in saying we're following the PEP 8 guidelines and then disabling them
again.


It may be a pain to get us to a pep8-ready state, but I would prefer to
see no exceptions (ignores).


--
Regards,

Vinzenz Feenstra | Senior Software Engineer
RedHat Engineering Virtualization R & D
Phone: +420 532 294 625
IRC: vfeenstr or evilissimo

Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com



Re: [vdsm] [PEP 8] About configuring editor plugins for checking PEP 8

2012-11-13 Thread Dan Kenigsberg
On Tue, Nov 13, 2012 at 12:54:58PM +0100, Vinzenz Feenstra wrote:
> On 11/13/2012 12:16 PM, Zhou Zheng Sheng wrote:
> >Hi all,
> >
> >In the latest version of the pep8 checking tool, the rules are very strict.
> >Currently the VDSM project applies a less strict rule set by suppressing
> >some errors from pep8. You can find them in Makefile.am. Under the
> >"check-local:" target, you will see
> >
> >--ignore=E121,E122,E123,E124,E125,E126,E127,E128,E241
> >
> >
> >I know some of us use editor plugins to check pep8 errors. Those
> >plugins invoke flake8/pep8 and report errors. To ignore the rules not
> >needed by VDSM in the editor, create a file at ~/.config/pep8 with the
> >following text.
> >
> >[pep8]
> >ignore=E121,E122,E123,E124,E125,E126,E127,E128,E241
> >
> Hi,
> 
> I am actually for being more strict rather than more relaxed. I don't
> see the point in saying we're following the PEP 8 guidelines and then
> disabling them again.
> 
> It may be a pain to get us to a pep8-ready state, but I would prefer
> to see no exceptions (ignores).

Indeed. The only reason for my adding these exceptions was that vdsm would
otherwise not build on Fedora 18. I would like to see our code fully
conforming to the standards, even though some of them are really
annoying!


Re: [vdsm] Future of Vdsm network configuration

2012-11-13 Thread Adam Litke
On Sun, Nov 11, 2012 at 09:46:43AM -0500, Alon Bar-Lev wrote:
> 
> 
> - Original Message -
> > From: "Dan Kenigsberg" 
> > To: vdsm-de...@fedorahosted.org
> > Sent: Sunday, November 11, 2012 4:07:30 PM
> > Subject: [vdsm] Future of Vdsm network configuration
> > 
> > Hi,
> > 
> > Nowadays, when vdsm receives the setupNetwork verb, it mangles
> > /etc/sysconfig/network-scripts/ifcfg-* files and restarts the network
> > service, so they are read by the responsible SysV service.
> > 
> > This is very much Fedora-oriented, and not up with the new themes
> > in Linux network configuration. Since we want oVirt and Vdsm to be
> > distribution agnostic, and support new features, we have to change.
> > 
> > setupNetwork is responsible for two different things:
> > (1) configure the host networking interfaces, and
> > (2) create virtual networks for guests and connect them to the world
> > over (1).
> > 
> > Functionality (2) is provided by building Linux software bridges, and
> > vlan devices. I'd like to explore moving it to Open vSwitch, which
> > would
> > enable a host of functionalities that we currently lack (e.g.
> > tunneling). One thing that worries me is the need to reimplement our
> > config snapshot/recovery on ovs's database.
> > 
> > As far as I know, ovs is unable to maintain host-level parameters of
> > interfaces (e.g. eth0's IPv4 address), so we need another
> > tool for functionality (1): either speak to NetworkManager directly,
> > or use NetCF, via its libvirt virInterface* wrapper.
> > 
> > I have minor worries about NetCF's breadth of testing and usage; I
> > know it is intended to be cross-platform, but unlike ovs, I am not
> > aware of wide Debian usage thereof. On the other hand, its API has
> > been ready for vdsm's usage for quite a while.
> > 
> > NetworkManager has become ubiquitous, and we'd better integrate with
> > it
> > better than our current setting of NM_CONTROLLED=no. But as DPB tells
> > us,
> > https://lists.fedorahosted.org/pipermail/vdsm-devel/2012-November/001677.html
> > we'd better offload integration with NM to libvirt.
> > 
> > We would like to take network configuration in VDSM to the next level
> > and make it distribution agnostic, in addition to setting up the
> > infrastructure for more advanced features going forward.
> > The path we are thinking of taking is to integrate with OVS and, for
> > feature completeness, use NetCF via its libvirt virInterface* wrapper.
> > Any comments or feedback on this proposal are welcome.
> > 
> > Thanks to the oVirt net team members whose input has helped in
> > writing this email.
> 
> Hi,
> 
> As far as I see this, network manager is a monster of a dependency to have
> just to create bridges or configure network interfaces... It is true that on
> a host where network manager lives it would be impolite to define network
> resources other than via its interface; however, I don't like that we force
> network manager.
> 
> libvirt has long been used not as a virtualization library but as a system
> management agent; I am not sure this is the best system agent I would have
> chosen.
> 
> I think that all the terms and building blocks got lost over time... and the
> resulting integration became more and more complex.
> 
> Stabilizing such a multi-layered component environment is much harder than
> stabilizing a monolithic one.
> 
> I would really want to see vdsm as a monolithic component with full control
> over its resources; I believe this is the only way vdsm can be stable enough
> to be production grade.
> 
> The hypervisor should be a total slave of the manager (or cluster), so I have
> no problem with bypassing/disabling any distribution-specific tool in favour
> of atoms (brctl, iproute), in non-persistent mode.
> 
> I know this requires some more work, but I don't think it is that complex to
> implement and maintain.
> 
> Just my 2 cents...

I couldn't disagree more.  What you are suggesting requires that we reimplement
every single networking feature in oVirt by ourselves.  If we want to support
the (absolutely critical) goal of being distro agnostic, then we need to
implement the same functionality across multiple distros too.  This is more work
than we will ever be able to keep up with.  If you think it's hard to stabilize
the integration of an external networking library, imagine how hard it will be
to stabilize our own rewritten and buggy version.  This is not how open source
is supposed to work.  We should be assembling distinct, modular, pre-existing
components together when they are available.  If NetworkManager has integration
problems, let's work upstream to fix them.  If its dependencies are too great,
let's modularize it so we don't need to ship the parts that we don't need.

-- 
Adam Litke 
IBM Linux Technology Center



[vdsm] Adding Local Storage Domain failed

2012-11-13 Thread Itzik Brown
Hi,
When trying to configure a local storage domain I get an error.
Below are some lines from vdsm.log:

Thread-46::DEBUG::2012-11-14 
02:16:13,568::task::568::TaskManager.Task::(_updateState) 
Task=`ff34ce79-4e9c-4e90-8598-4ec6bd9a682f`::moving from state init -> state 
preparing
Thread-46::INFO::2012-11-14 02:16:13,568::logUtils::37::dispatcher::(wrapper) 
Run and protect: repoStats(options=None)
Thread-46::INFO::2012-11-14 02:16:13,568::logUtils::39::dispatcher::(wrapper) 
Run and protect: repoStats, Return response: {}
Thread-46::DEBUG::2012-11-14 
02:16:13,568::task::1151::TaskManager.Task::(prepare) 
Task=`ff34ce79-4e9c-4e90-8598-4ec6bd9a682f`::finished: {}
Thread-46::DEBUG::2012-11-14 
02:16:13,568::task::568::TaskManager.Task::(_updateState) 
Task=`ff34ce79-4e9c-4e90-8598-4ec6bd9a682f`::moving from state preparing -> 
state finished
Thread-46::DEBUG::2012-11-14 
02:16:13,569::resourceManager::809::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-46::DEBUG::2012-11-14 
02:16:13,569::resourceManager::844::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-46::DEBUG::2012-11-14 
02:16:13,569::task::957::TaskManager.Task::(_decref) 
Task=`ff34ce79-4e9c-4e90-8598-4ec6bd9a682f`::ref 0 aborting False
Thread-49::DEBUG::2012-11-14 02:16:16,586::BindingXMLRPC::161::vds::(wrapper) 
[172.30.49.69]
Thread-49::DEBUG::2012-11-14 
02:16:16,586::task::568::TaskManager.Task::(_updateState) 
Task=`11fbe03e-edec-402c-a59a-9efa09d802a0`::moving from state init -> state 
preparing
Thread-49::INFO::2012-11-14 02:16:16,587::logUtils::37::dispatcher::(wrapper) 
Run and protect: validateStorageServerConnection(domType=4, 
spUUID='----', conList=[{'connection': 
'/data/images', 'iqn': '', 'portal': '', 'user': '', 'password': '**', 
'id': 'd64e9233-cfa5-407a-9d2d-8d3d012cb084', 'port': ''}], options=None)
Thread-49::INFO::2012-11-14 02:16:16,587::logUtils::39::dispatcher::(wrapper) 
Run and protect: validateStorageServerConnection, Return response: 
{'statuslist': [{'status': 0, 'id': 'd64e9233-cfa5-407a-9d2d-8d3d012cb084'}]}
Thread-49::DEBUG::2012-11-14 
02:16:16,587::task::1151::TaskManager.Task::(prepare) 
Task=`11fbe03e-edec-402c-a59a-9efa09d802a0`::finished: {'statuslist': 
[{'status': 0, 'id': 'd64e9233-cfa5-407a-9d2d-8d3d012cb084'}]}
Thread-49::DEBUG::2012-11-14 
02:16:16,587::task::568::TaskManager.Task::(_updateState) 
Task=`11fbe03e-edec-402c-a59a-9efa09d802a0`::moving from state preparing -> 
state finished
Thread-49::DEBUG::2012-11-14 
02:16:16,587::resourceManager::809::ResourceManager.Owner::(releaseAll) 
Owner.releaseAll requests {} resources {}
Thread-49::DEBUG::2012-11-14 
02:16:16,587::resourceManager::844::ResourceManager.Owner::(cancelAll) 
Owner.cancelAll requests {}
Thread-49::DEBUG::2012-11-14 
02:16:16,587::task::957::TaskManager.Task::(_decref) 
Task=`11fbe03e-edec-402c-a59a-9efa09d802a0`::ref 0 aborting False
Thread-50::DEBUG::2012-11-14 02:16:16,646::BindingXMLRPC::161::vds::(wrapper) 
[172.30.49.69]
Thread-50::DEBUG::2012-11-14 
02:16:16,646::task::568::TaskManager.Task::(_updateState) 
Task=`492eab31-c95c-42dd-8d15-73d30d37387c`::moving from state init -> state 
preparing
Thread-50::INFO::2012-11-14 02:16:16,646::logUtils::37::dispatcher::(wrapper) 
Run and protect: connectStorageServer(domType=4, 
spUUID='----', conList=[{'connection': 
'/data/images', 'iqn': '', 'portal': '', 'user': '', 'password': '**', 
'id': 'd64e9233-cfa5-407a-9d2d-8d3d012cb084', 'port': ''}], options=None)
Thread-50::ERROR::2012-11-14 
02:16:16,764::hsm::2046::Storage.HSM::(connectStorageServer) Could not connect 
to storageServer
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/hsm.py", line 2043, in connectStorageServer
conObj.connect()
  File "/usr/share/vdsm/storage/storageServer.py", line 462, in connect
if not self.checkTarget():
  File "/usr/share/vdsm/storage/storageServer.py", line 449, in checkTarget
fileSD.validateDirAccess(self._path))
  File "/usr/share/vdsm/storage/fileSD.py", line 52, in validateDirAccess
getProcPool().fileUtils.validateAccess(dirPath)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 277, in 
callCrabRPCFunction
*args, **kwargs)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 180, in 
callCrabRPCFunction
rawLength = self._recvAll(LENGTH_STRUCT_LENGTH, timeout)
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 149, in _recvAll
timeLeft):
  File "/usr/lib64/python2.7/contextlib.py", line 84, in helper
return GeneratorContextManager(func(*args, **kwds))
  File "/usr/share/vdsm/storage/remoteFileHandler.py", line 136, in _poll
raise Timeout()
Timeout
Thread-50::INFO::2012-11-14 02:16:16,766::logUtils::39::dispatcher::(wrapper) 
Run and protect: connectStorageServer, Return response: {'statuslist': 
[{'status': 100, 'id': 'd64e9233-cfa5-407a-9d2d-8d3d012cb084'}]}
Thread-50::DEBUG::2012-11-

Re: [vdsm] Adding Local Storage Domain failed

2012-11-13 Thread Shu Ming

It looks like you were hit by this bug:

https://bugzilla.redhat.com/show_bug.cgi?id=875678

And the patch is here:
http://gerrit.ovirt.org/#/c/9193/


Itzik Brown:

[Itzik's vdsm.log excerpt, quoted in full in the original message above,
snipped]