[Libvir] [PATCH] Add /usr/sbin to path when searching for iptables

2008-05-05 Thread Jim Fehlig
iptables resides in /usr/sbin on SuSE distros.  Add it to path when
searching for iptables.

Regards,
Jim

diff -ur a/configure.in b/configure.in
--- a/configure.in	2008-05-05 13:46:20.0 -0600
+++ b/configure.in	2008-05-05 13:43:14.0 -0600
@@ -217,7 +217,7 @@
AC_DEFINE_UNQUOTED(LOKKIT_PATH, $LOKKIT_PATH, [path to lokkit binary])
 fi
 
-AC_PATH_PROG(IPTABLES_PATH, iptables, /sbin/iptables)
+AC_PATH_PROG(IPTABLES_PATH, iptables, /sbin/iptables, [/usr/sbin:$PATH])
 AC_DEFINE_UNQUOTED(IPTABLES_PATH, $IPTABLES_PATH, [path to iptables binary])
 
 dnl
--
Libvir-list mailing list
Libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [Libvir] [PATCH] Add /usr/sbin to path when searching for iptables

2008-05-05 Thread Daniel P. Berrange
On Mon, May 05, 2008 at 01:51:38PM -0600, Jim Fehlig wrote:
 iptables resides in /usr/sbin on SuSE distros.  Add it to path when
 searching for iptables.

Thanks, I've committed this patch.

Dan.
-- 
|: Red Hat, Engineering, Boston   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|



[Libvir] RFC 'xen like scripts'

2008-05-05 Thread Stefan de Konink
Currently in Xen it is very easy to prototype a certain function related
to binding host hardware to virtual hardware. The script gets an argument,
can do anything it wants, and returns another argument or, better, writes
it to the xen-store.

I was reviewing the current iSCSI code, because I cannot switch to libvirt
with the current implementation. (From OpenSolaris, I migrated to NetApp.)


So I pose a simple request: would anyone be able to create a Xen-like
storage backend that in principle passes the URI to a script (run as a
fork)? The script sets, for example, an environment variable or produces
text output, and the code uses that value as the path to the storage
area. A great prototypable system would thus be created.


Some might ask: 'why doesn't he write a simple implementation in C?'
Basically: I will do this, no worries about that one, but I would like
to be able to prototype my programs first.


Is there anyone who wants to spend a few hours on this simple request,
or could someone tell me what he/she doesn't like about setting the
environment? Other solutions are of course possible.


Waiting for your comments,

Stefan de Konink



Re: [Libvir] RFC 'xen like scripts'

2008-05-05 Thread Daniel P. Berrange
On Mon, May 05, 2008 at 10:15:19PM +0200, Stefan de Konink wrote:
 Currently in Xen it is very easy to prototype a certain function related
 to binding host hardware to virtual hardware. The script gets an argument,
 can do anything it wants and returns another argument, or better: writes
 it to the xen-store.
 
 I was reviewing the current iSCSI code, because I cannot switch to libvirt
 with the current implementation. (From OpenSolaris, I migrated to NetApp.)
 
 
 So I pose a simple request: would anyone be able to create a Xen-like
 storage backend that in principle passes the URI to a script (run as a
 fork)? The script sets, for example, an environment variable or produces
 text output, and the code uses that value as the path to the storage
 area. A great prototypable system would thus be created.

This kind of plugin functionality is deliberately not exposed in libvirt.
The XML configuration for a guest is intended to provide a description of
a guest with guaranteed semantics which is portable to any machine using
libvirt. If we were to enable arbitrary admin-provided scripts on the
backend, the semantics could no longer be guaranteed.

 Some might ask: 'why doesn't he write a simple implementation in C?'
 Basically: I will do this, no worries about that one, but I would like
 to be able to prototype my programs first.

While I understand your desire to be able to prototype things quickly,
I don't want to expose a generic scriptable plugin in the libvirt backend.

BTW, if you want any hints / advice / help with making the iSCSI stuff work
on OpenSolaris, let me know. I'm assuming the iSCSI admin tools on Linux are
rather different in calling conventions, but the general principles of
the Linux impl should still apply.

Dan.
-- 
|: Red Hat, Engineering, Boston   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|



Re: [Libvir] [PATCH] lxc: handle SIGCHLD from exiting container

2008-05-05 Thread Daniel P. Berrange
On Wed, Apr 30, 2008 at 11:38:01PM -0700, Dave Leskovec wrote:
 This patch allows the lxc driver to handle SIGCHLD signals from exiting
 containers.  The handling will perform some cleanup such as waiting for
 the container process and killing/waiting the tty process.  This is also
 required as a first step towards providing some kind of client container exit
 notification.  Additional support is needed for that but this SIGCHLD handling
 is what would trigger the notification.
 
 libvirtd was already catching SIGCHLD although it was just ignoring it.  I
 implemented a mechanism to distribute the signal to any other drivers in the
 daemon that registered a function to handle them.  This required some changes
 to the way libvirtd was catching signals (to get the pid of the sending
 process) as well as an addition to the state driver structure.  The intent
 was to provide future drivers access to signals as well.

The reason it was ignoring it was because the QEMU driver detects the
shutdown of the VM without using the SIGCHLD directly. It instead detects
EOF on the STDOUT/ERR of the VM child process and then calls waitpid() to
clean up.  I notice that the LXC driver does not appear to set up any
STDERR/OUT for its VMs, so they're still inheriting the daemon's. If it
isn't a huge problem it'd be desirable to try and have QEMU and LXC operate
in the same general way with respect to their primary child procs for VMs.

Regards,
Daniel.
-- 
|: Red Hat, Engineering, Boston   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|



Re: [Libvir] RFC 'xen like scripts'

2008-05-05 Thread Stefan de Konink
On Mon, 5 May 2008, Daniel P. Berrange wrote:

  So I pose a simple request: would anyone be able to create a Xen-like
  storage backend that in principle passes the URI to a script (run as a
  fork)? The script sets, for example, an environment variable or produces
  text output, and the code uses that value as the path to the storage
  area. A great prototypable system would thus be created.

 This kind of plugin functionality is deliberately not exposed in libvirt.
 The XML configuration for a guest is intended to provide a description of
 a guest with guaranteed semantics which is portable to any machine using
 libvirt. If we were to enable arbitrary admin-provided scripts on the
 backend, the semantics could no longer be guaranteed.

I agree with your argument, but this limitation in flexibility is a real
showstopper. If only it were a 'dev backend' it would be a great help.
As we discussed before, both our intentions are to kill xend, but
prototyping in libvirt is currently SM.

  Some might ask: 'why doesn't he write a simple implementation in C?'
  Basically: I will do this, no worries about that one, but I would like
  to be able to prototype my programs first.

 While I understand your desire to be able to prototype things quickly,
 I don't want to expose a generic scriptable plugin in the libvirt backend.

 BTW, if you want any hints / advice / help with making the iSCSI stuff work
 on OpenSolaris, let me know. I'm assuming the iSCSI admin tools on Linux are
 rather different in calling conventions, but the general principles of
 the Linux impl should still apply.

I'm not running it *on* OpenSolaris (although I would like to do this).
I'm currently creating the tools to be able to talk to an arbitrary
'storage backend' to 'create/clone/snapshot/destroy' and, in our case,
'create/remove users'. Since this now works for NetApp and ZFS purely
remotely, it is rather dynamic.

To implement the iSCSI connection to NetApp I would prefer to pass:

netapp://username/partition


I would prefer to let libvirt figure out where the lun can be found on the
system. (This involves connecting to the fileserver, fetching the LUN,
looking up the connection on the Linux side, reading the symlink).

The other way around would be to 'postprocess' my configurations before I
push them inside libvirt. But then I get another problem, migration.

In my humble opinion a scriptable way doesn't need to be a bad thing.
As with Xen, the URI can be parsed or not. Would a patch be accepted?



For the iSCSI backend, as we have discussed before, just discovery needs
to be implemented. The problem with the NetApp implementation is that it
exports all 'LUNs' at the same time. Technically this can be done 'host
based', but it is still *far* from implementable in libvirt using the
current configuration.


Stefan



Re: [Libvir] RFC 'xen like scripts'

2008-05-05 Thread Daniel P. Berrange
On Mon, May 05, 2008 at 10:50:25PM +0200, Stefan de Konink wrote:
 On Mon, 5 May 2008, Daniel P. Berrange wrote:
   Some might ask: 'why doesn't he write a simple implementation in C?'
   Basically: I will do this, no worries about that one, but I would like
   to be able to prototype my programs first.
 
  While I understand your desire to be able to prototype things quickly,
  I don't want to expose a generic scriptable plugin in the libvirt backend.
 
  BTW, if you want any hints / advice / help with making the iSCSI stuff work
  on OpenSolaris, let me know. I'm assuming the iSCSI admin tools on Linux are
  rather different in calling conventions, but the general principles of
  the Linux impl should still apply.
 
 I'm not running it *on* OpenSolaris (although I would like to do this).
 I'm currently creating the tools to be able to talk to an arbitrary
 'storage backend' to 'create/clone/snapshot/destroy' and, in our case,
 'create/remove users'. Since this now works for NetApp and ZFS purely
 remotely, it is rather dynamic.
 
 To implement the iSCSI connection to NetApp I would prefer to pass:
 
 netapp://username/partition
 
 I would prefer to let libvirt figure out where the lun can be found on the
 system. (This involves connecting to the fileserver, fetching the LUN,
 looking up the connection on the Linux side, reading the symlink).

So you're wanting to pass this URL directly to the domain config, rather
than the storage pool?  If so, then I'd suggest a different approach,
which is to extend the domain XML so it can refer to a libvirt-managed
storage volume explicitly.

Instead of doing

  <disk type='block'>
    <source dev='/dev/sdf'/>
  </disk>

refer to the storage pool and volume name (which are independent of
the disk path):

  <disk type='vol'>
    <source pool='somepool' vol='somelun'/>
  </disk>

When starting the VM, libvirt can turn the pool + volume name into a
path.

 The other way around would be to 'postprocess' my configurations before I
 push them inside libvirt. But then I get another problem, migration.

Migration is one of the reasons for having a clear description of the
guest configuration.
 In my humble opinion a scriptable way doesn't need to be something bad.
 Like with Xen the URI can be parsed or not. Would a patch be accepted?

The problem with the Xen approach to storage is that there is no definition
of the semantics of the URIs, other than the fact that the URI scheme maps
to an arbitrary shell script.  The semantics of the configuration are very
important for apps to be able to interpret. 

The Xen scripts only set up the storage at the time the VM starts, so
there is no way to validate a priori that the configuration is actually
meaningful. With an iSCSI hotplug script there's not even any connection
existing until the VM starts.

 For the iscsi backend, like we have discussed before, just discovery needs
 to be implemented. The problem with the NetApp implementation is that it
 exports all 'luns' at the same time. Technically this can be done 'host
 based', but still *far* from implementable in libvirt using the current
 configuration.

I'm struggling to understand why there needs to be a NetApp-specific
impl of the iSCSI backend. Either NetApp complies with the iSCSI spec or it
doesn't. The iSCSI backend is intended to work with any compliant server.
Or are you trying to use NetApp-specific functionality that isn't
actually part of its iSCSI support?

Dan.
-- 
|: Red Hat, Engineering, Boston   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|



Re: [Libvir] [PATCH] lxc: handle SIGCHLD from exiting container

2008-05-05 Thread Dave Leskovec
Daniel P. Berrange wrote:
 On Wed, Apr 30, 2008 at 11:38:01PM -0700, Dave Leskovec wrote:
 This patch allows the lxc driver to handle SIGCHLD signals from exiting
 containers.  The handling will perform some cleanup such as waiting for
 the container process and killing/waiting the tty process.  This is also
 required as a first step towards providing some kind of client container
 exit notification.  Additional support is needed for that but this SIGCHLD
 handling is what would trigger the notification.

 libvirtd was already catching SIGCHLD although it was just ignoring it.  I
 implemented a mechanism to distribute the signal to any other drivers in the
 daemon that registered a function to handle them.  This required some changes
 to the way libvirtd was catching signals (to get the pid of the sending
 process) as well as an addition to the state driver structure.  The intent
 was to provide future drivers access to signals as well.
 
 The reason it was ignoring it was because the QEMU driver detects the
 shutdown of the VM without using the SIGCHLD directly. It instead detects
 EOF on the STDOUT/ERR of the VM child process and then calls waitpid() to
 clean up.  I notice that the LXC driver does not appear to set up any
 STDERR/OUT for its VMs, so they're still inheriting the daemon's. If it
 isn't a huge problem it'd be desirable to try and have QEMU and LXC operate
 in the same general way with respect to their primary child procs for VMs.
 
 Regards,
 Daniel.

stdout/err for the container is set to the tty.  Containers can be used in a
non-VM fashion as well.  Think of a container running a daemon process or a
container running a job as part of a job scheduler/distribution system.
Wouldn't it be valid in these cases for the container to close stdout/err
while continuing to run?

-- 
Best Regards,
Dave Leskovec
IBM Linux Technology Center
Open Virtualization



Re: [Libvir] [PATCH] lxc: handle SIGCHLD from exiting container

2008-05-05 Thread Daniel P. Berrange
On Mon, May 05, 2008 at 02:33:09PM -0700, Dave Leskovec wrote:
 Daniel P. Berrange wrote:
  On Wed, Apr 30, 2008 at 11:38:01PM -0700, Dave Leskovec wrote:
  This patch allows the lxc driver to handle SIGCHLD signals from exiting
  containers.  The handling will perform some cleanup such as waiting for
  the container process and killing/waiting the tty process.  This is also
  required as a first step towards providing some kind of client container
  exit notification.  Additional support is needed for that but this SIGCHLD
  handling is what would trigger the notification.
 
  libvirtd was already catching SIGCHLD although it was just ignoring it.  I
  implemented a mechanism to distribute the signal to any other drivers in
  the daemon that registered a function to handle them.  This required some
  changes to the way libvirtd was catching signals (to get the pid of the
  sending process) as well as an addition to the state driver structure.
  The intent was to provide future drivers access to signals as well.
  
  The reason it was ignoring it was because the QEMU driver detects the
  shutdown of the VM without using the SIGCHLD directly. It instead detects
  EOF on the STDOUT/ERR of the VM child process and then calls waitpid() to
  clean up.  I notice that the LXC driver does not appear to set up any
  STDERR/OUT for its VMs, so they're still inheriting the daemon's. If it
  isn't a huge problem it'd be desirable to try and have QEMU and LXC operate
  in the same general way with respect to their primary child procs for VMs.
  
  Regards,
  Daniel.
 
 stdout/err for the container is set to the tty.  Containers can be used in a
 non-VM fashion as well.  Think of a container running a daemon process or a
 container running a job as part of a job scheduler/distribution system.
 Wouldn't it be valid in these cases for the container to close stdout/err
 while continuing to run?

Hmm, yes, that could be a reasonable use case.  I see the key difference
here is that the immediate child of libvirt *is* the startup application
in the container, which can be anything. So yes, we can't rely on its use
of stderr/out, as we do with QEMU where the immediate child has defined
behaviour.

Dan.
-- 
|: Red Hat, Engineering, Boston   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|



Re: [Libvir] RFC 'xen like scripts'

2008-05-05 Thread Stefan de Konink
On Mon, 5 May 2008, Daniel P. Berrange wrote:

  netapp://username/partition
 
  I would prefer to let libvirt figure out where the lun can be found on the
  system. (This involves connecting to the fileserver, fetching the LUN,
  looking up the connection on the Linux side, reading the symlink).

 So you're wanting to pass this URL directly to the domain config, rather
 than the storage pool ?  If so, then I'd suggest a different approach
 which is to extend the domain XML so it can refer to a libvirt managed
 storage volume explicitly

 Instead of doing

  <disk type='block'>
    <source dev='/dev/sdf'/>
  </disk>

 refer to the storage pool and volume name (which are independent of
 the disk path):

  <disk type='vol'>
    <source pool='somepool' vol='somelun'/>
  </disk>

 When starting the VM, libvirt can turn the pool + volume name into a
 path.

This would indeed work for NetApp, where 'somelun' would be the 'path' on
the server that should be resolved to a LUN.

  For the iscsi backend, like we have discussed before, just discovery needs
  to be implemented. The problem with the NetApp implementation is that it
  exports all 'luns' at the same time. Technically this can be done 'host
  based', but still *far* from implementable in libvirt using the current
  configuration.

 I'm struggling to understand where there's needs to be a netapp specific
 impl of the iSCSI backend. Either netapp complies with iSCSI spec or it
 doesn't. The iSCSI backend is inteded to work with any compliant server.
 Or are you trying to use to netapp specific functionality that isn't
 actually part of its iSCSI support ?

Short answer:

NetApp puts *all* iSCSI LUNs on one connection.

Add 'automatic' LUN numbering and no explicit exported comments
in vendor names etc. to the scenario and you see my ballpark.


So to make it more simple:

  OpenSolaris                  NetApp
  All LUNs exported            Per-hostgroup export of all assigned LUNs
  Maintains 'use'              Doesn't know if a specific LUN is used
  Uses an identifier           Uses one iSCSI identifier, needs rescanning;
                               the LUN can be fetched from the
                               'configuration interface'


Because all LUNs are exported over one connection, a rescan before
usage is always required. LUN numbering is not stable, nor can the
LUNs be found at the client side.


So I guess the best way to see this device is as an 'already connected'
SCSI device that has many disks that can be swapped around, and that is
connected to the network with an information service. For some obscure
reason the 'information' service doesn't use the same IP address as the
iSCSI connection.



...a lot of fun to put it all in C.


Stefan



Re: [Libvir] [PATCH] lxc: handle SIGCHLD from exiting container

2008-05-05 Thread Dave Leskovec
Hi Jim,

Thanks for the review.  Answers below -

Jim Meyering wrote:
 Dave Leskovec [EMAIL PROTECTED] wrote:
 This patch allows the lxc driver to handle SIGCHLD signals from exiting
 ...
 
 Hi Dave,
 At least superficially, this looks fine.
 Two questions:
 
 Index: b/src/driver.h
 ===
 --- a/src/driver.h   2008-04-10 09:54:54.0 -0700
 +++ b/src/driver.h   2008-04-30 15:36:47.0 -0700
 @@ -11,6 +11,10 @@

  #include <libxml/uri.h>

 +#ifndef _SIGNAL_H
 +#include <signal.h>
 +#endif
 
 In practice it's fine to include signal.h unconditionally,
 and even multiple times.  Have you encountered a version of signal.h
 that may not be included twice?  If so, it probably deserves a comment
 with the details.
 

No, I don't have any special condition here.  This is probably some past
conditioning resurfacing briefly.  If I remember correctly, it had more to do
with compile efficiency rather than avoiding compile failures from multiple
inclusions.

 ...
 Index: b/src/lxc_driver.c
 ===
 ...
 -static int lxcDomainDestroy(virDomainPtr dom)
 +static int lxcVMCleanup(lxc_driver_t *driver, lxc_vm_t * vm)
  {
  int rc = -1;
 ...
 -rc = WEXITSTATUS(childStatus);
 -DEBUG("container exited with rc: %d", rc);
 +if (WIFEXITED(childStatus)) {
 +rc = WEXITSTATUS(childStatus);
 +DEBUG("container exited with rc: %d", rc);
 +}
 +
 +rc = 0;
 
 Didn't you mean to initialize rc=0 before that if block?
 If not, please add a comment saying why the child failure
 doesn't affect the function's return value.

Nice.  Yes that rc = 0 definitely shouldn't be there.

-- 
Best Regards,
Dave Leskovec
IBM Linux Technology Center
Open Virtualization



Re: [Libvir] [PATCH] lxc: handle SIGCHLD from exiting container

2008-05-05 Thread Jim Meyering
Dave Leskovec [EMAIL PROTECTED] wrote:
...
 +#ifndef _SIGNAL_H
 +#include <signal.h>
 +#endif

 In practice it's fine to include signal.h unconditionally,
 and even multiple times.  Have you encountered a version of signal.h
 that may not be included twice?  If so, it probably deserves a comment
 with the details.

 No, I don't have any special condition here.  This is probably some past
 conditioning resurfacing briefly.  If I remember correctly, it had more to do
 with compile efficiency rather than avoiding compile failures from multiple
 inclusions.

Then don't bother.
gcc performs a handy optimization whereby it doesn't even open
the header file the second (and subsequent) time it's included, as
long as its entire contents are wrapped in the usual sort of guard:

  #ifndef SYM
  #define SYM
  ...
  #endif



Re: [Libvir] RFC 'xen like scripts'

2008-05-05 Thread Stefan de Konink
On Tue, 6 May 2008, Daniel P. Berrange wrote:

  NetApp puts *all* iSCSI luns on one connection.
 
  Add 'automatic' LUN numbering and no explicit exported comments
  in vendor names etc. to the scenario and you see my ballpark.
 
 
  So to make it more simple:
 
   OpenSolaris                  NetApp
   All LUNs exported            Per-hostgroup export of all assigned LUNs
   Maintains 'use'              Doesn't know if a specific LUN is used
   Uses an identifier           Uses one iSCSI identifier, needs rescanning;
                                the LUN can be fetched from the
                                'configuration interface'
 
 
  Because all LUNs are exported over one connection, a rescan before
  usage is always required. LUN numbering is not stable, nor can the
  LUNs be found at the client side.

 So where does the information mapping netapp pathnames to the LUNs
 come from?

The information service: in my case, an ssh script.

 If LUNs can change when re-scanning what happens to LUNs
 already in use for other guests ? It doesn't sound usable if LUNs that
 are in use get renumbered at rescan.

LUNs that don't change do not get remapped. But if a user decides to destroy
a disk and later creates one with the same name, it is most likely to get
another LUN number, but with the same name.

 AFAICT this is basically just suggesting an alternate naming scheme for
 storage volumes: instead of 'lun-XXX' where XXX is the number, you want
 a name that's independent of LUN numbering. So the key question is where
 does the information for the persistent names come from?

Exactly this is what I want. If I have an iSCSI URI, I want to have it
discovered; if I have a netapp URI, I want to have it 'discovered' too, but
in this case I provide the address of the administrative interface.

And unlike storage pools with iSCSI, where the provided target name for
volumes should 'match up' with the 'discovered name', I want this to be
transparent to the user. *Because* Linux might have a target
/dev/bla-by-path/X, but who says *BSD or *Solaris has it? (Yes I know, there
are other problems, but the base problem is that the provided target device
is pretty much limited to one OS running udev.)


Stefan
