Re: Add services to active & in-use Network Offering

2015-01-12 Thread Lee Webb
I was trying to avoid having to manipulate the VMs themselves (other than 
perhaps a stop & start), and was hoping for something a little less 
destructive than deleting the network & recreating it.

I've had a go at updating the Network using a new NetworkOfferingId, but 
Cloudmonkey reports that it's not something that can be done for Shared 
networks:

(local) > update network id=b2623beb-a685-4b72-81b7-53cb72351896 
networkofferingid=546e9b9a-ec00-438e-ba2f-2509bea949f2 
Async job 19acea4c-443f-4e1b-8e25-a66f13ba789c failed
Error 530, NetworkOffering and domain suffix upgrade can be perfomed for 
Isolated networks only
accountid = 0c0622ba-7b4b-11e4-84fb-2ebf37d37efd
cmd = org.apache.cloudstack.api.command.admin.network.UpdateNetworkCmdByAdmin
created = 2015-01-13T04:41:13+
jobid = 19acea4c-443f-4e1b-8e25-a66f13ba789c
jobprocstatus = 0
jobresult:
errorcode = 530
errortext = NetworkOffering and domain suffix upgrade can be perfomed for 
Isolated networks only
jobresultcode = 530
jobresulttype = object
jobstatus = 2
userid = 0c062ac6-7b4b-11e4-84fb-2ebf37d37efd
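Since the offering upgrade is rejected for Shared networks, scripting any workaround means driving the CloudStack API directly. Below is a minimal sketch of how a signed query string for the same updateNetwork call could be built. The HMAC-SHA1 signing scheme is the standard CloudStack one, but the API/secret keys here are placeholders and the whole snippet is illustrative only, not something from the original post:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params: dict, api_key: str, secret_key: str) -> str:
    """Build a signed CloudStack API query string (HMAC-SHA1 scheme)."""
    params = dict(params, apikey=api_key)
    # CloudStack signs the sorted, URL-encoded parameter string, lower-cased.
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='*')}"
        for k, v in sorted(params.items())
    )
    digest = hmac.new(
        secret_key.encode(), query.lower().encode(), hashlib.sha1
    ).digest()
    signature = urllib.parse.quote(base64.b64encode(digest).decode(), safe="")
    return f"{query}&signature={signature}"

# IDs taken from the failing call above; keys are placeholders.
qs = sign_request(
    {
        "command": "updateNetwork",
        "id": "b2623beb-a685-4b72-81b7-53cb72351896",
        "networkofferingid": "546e9b9a-ec00-438e-ba2f-2509bea949f2",
        "response": "json",
    },
    api_key="APIKEY",
    secret_key="SECRETKEY",
)
print(qs.split("&signature=")[0])
```

The signed string would then be appended to the management server's /client/api endpoint; for a Shared network the same 530 error would of course come back - the sketch only illustrates the signing mechanics.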

> On 13 Jan 2015, at 3:41 pm, Ahmad Emneina  wrote:
> 
> One solution that comes to mind is: import the VM volume as a template in
> cloudstack. Then deploy the VM to the desired network. At deploy time you
> could specify the desired IP address for the VM.
> 
> On Mon, Jan 12, 2015 at 8:21 PM, Lee Webb  wrote:
> 
>> Hi List!
>> 
>> I'm wondering whether it's possible to add services to a Network Offering
>> after it has been created & is in use?
>> 
>> I've a very basic offering with no services on a shared network sitting on
>> a physical VLAN which I'd now like to add services to (such as DNS & DHCP).
>> 
>> Is it possible to add them in flight so that they can keep all of the
>> assigned addresses etc.?
>> 
>> What options do I have here?
>> 
>> Regards, lee



Add services to active & in-use Network Offering

2015-01-12 Thread Lee Webb
Hi List!

I'm wondering whether it's possible to add services to a Network Offering after 
it has been created & is in use?

I've a very basic offering with no services on a shared network sitting on a 
physical VLAN which I'd now like to add services to (such as DNS & DHCP).

Is it possible to add them in flight so that they can keep all of the assigned 
addresses etc.?

What options do I have here?

Regards, lee

Re: AW: Shared Storage for VMs

2014-12-15 Thread Lee Webb
On 15/12/2014 6:45 PM, "Jochim, Ingo"  wrote:
>
> Hello Lee,
>
> thanks for sharing your ideas.
> In this scenario someone needs to administer the NAS and handle all
requests, right?

Yes, that's correct

>
> In the case you have a little VM which shares the storage then all
traffic will go through this VM and not directly to the storage.

Also correct.

If you're using a physical VLAN and a shared network, though, this can be a
physical NAS or SAN device - this is what I'm doing at the moment

>
> Is there a standard to communicate to different storage systems for
managing volumes/shares?

I'm not aware of one used to manage storage devices.

The underlying protocols used to access the storage are standard, though,
depending on what you choose (NFS, iSCSI, etc.)

If you want customers to manage their own ACLs etc. in a VM, then something
like an OpenFiler template might be perfect; it allows them to use
allocated CS resources normally

>
> Regards,
> Ingo
>
> -Ursprüngliche Nachricht-
> Von: Lee Webb [mailto:nullify...@gmail.com]
> Gesendet: Samstag, 13. Dezember 2014 02:46
> An: users@cloudstack.apache.org
> Betreff: Re: Shared Storage for VMs
>
> I have this requirement for some of the applications deployed in CS too
>
> Workaround solution (of sorts) was to create a Shared Network attached to
a Physical VLAN & then hook up a Physical NAS to the same VLAN.
> The shared network is bound to a particular account / project so that it
can't be used by everyone
>
> For me part of the attraction of using CS over OpenStack was that you
could craft the network in this way so that you can support applications /
deployments which have physical device requirements or haven't been
developed to be 100% cloudy.
>
> I notice that OpenStack is looking into a shared volume system, & I'd
also like the option of doing it 100% inside of CS if it was capable of
doing so.
>
> I do recall though that in XenServer (& ESX 4 I think) it wasn't possible
to attach a single volume to multiple machines without significant hacking
of the underlying Python - after which building things like an Oracle RAC /
GRID system or OCFS2 was possible but this is probably out of reach for
most users.
>
> Perhaps something like a Virtual NAS VM like the Virtual Routers etc.
would be sufficient - think OpenFiler but inside of CS?
>
> On Fri, Dec 12, 2014 at 10:10 PM, Jochim, Ingo 
> wrote:
> >
> > But this is completely outside of CS. I prefer to have something
> > controlled by CS to have centralized management and quota/usage
> > functionality.
> >
> > -Ursprüngliche Nachricht-
> > Von: Alessandro Caviglione [mailto:c.alessan...@gmail.com]
> > Gesendet: Freitag, 12. Dezember 2014 11:44
> > An: users@cloudstack.apache.org
> > Betreff: Re: Shared Storage for VMs
> >
> > Just use a Unified Storage and export volumes to VMs...
> >
> > On Fri, Dec 12, 2014 at 11:39 AM, Andrija Panic
> > 
> > wrote:
> >
> > > Drbd and gfs2 or something?
> > >
> > > Sent from Google Nexus 4
> > > On Dec 12, 2014 10:00 AM, "Jochim, Ingo" 
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > I'd like to discuss my feature request for having shared storage
> > > > for several virtual machines controlled by ACS.
> > > > https://issues.apache.org/jira/browse/CLOUDSTACK-7970
> > > > Any ideas about this? Are there workarounds which I can use today?
> > > >
> > > > Many thanks in advance.
> > > > Regards,
> > > > Ingo
> > > >
> > >
> >
> > --
> > This email was Virus checked by Astaro Security Gateway.
> > http://www.sophos.com
> >
>


Re: Shared Storage for VMs

2014-12-12 Thread Lee Webb
I have this requirement for some of the applications deployed in CS too

Workaround solution (of sorts) was to create a Shared Network attached to a
Physical VLAN & then hook up a Physical NAS to the same VLAN.
The shared network is bound to a particular account / project so that it
can't be used by everyone

For me part of the attraction of using CS over OpenStack was that you could
craft the network in this way so that you can support applications /
deployments which have physical device requirements or haven't been
developed to be 100% cloudy.

I notice that OpenStack is looking into a shared volume system, & I'd also
like the option of doing it 100% inside of CS if it was capable of doing so.

I do recall though that in XenServer (& ESX 4 I think) it wasn't possible
to attach a single volume to multiple machines without significant hacking
of the underlying Python - after which building things like an Oracle RAC /
GRID system or OCFS2 was possible but this is probably out of reach for
most users.

Perhaps something like a Virtual NAS VM like the Virtual Routers etc. would
be sufficient - think OpenFiler but inside of CS?
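At its simplest, a NAS VM of the kind described would just export a data volume over NFS to the other guests on the shared network; an illustrative export entry (the path and subnet are hypothetical):

```
# /etc/exports on the NAS VM - path and subnet are illustrative
/export/shared  10.0.100.0/24(rw,sync,no_subtree_check)
```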

On Fri, Dec 12, 2014 at 10:10 PM, Jochim, Ingo 
wrote:
>
> But this is completely outside of CS. I prefer to have something
> controlled by CS to have centralized management and quota/usage
> functionality.
>
> -Ursprüngliche Nachricht-
> Von: Alessandro Caviglione [mailto:c.alessan...@gmail.com]
> Gesendet: Freitag, 12. Dezember 2014 11:44
> An: users@cloudstack.apache.org
> Betreff: Re: Shared Storage for VMs
>
> Just use a Unified Storage and export volumes to VMs...
>
> On Fri, Dec 12, 2014 at 11:39 AM, Andrija Panic 
> wrote:
>
> > Drbd and gfs2 or something?
> >
> > Sent from Google Nexus 4
> > On Dec 12, 2014 10:00 AM, "Jochim, Ingo" 
> > wrote:
> >
> > > Hi all,
> > >
> > > I'd like to discuss my feature request for having shared storage for
> > > several virtual machines controlled by ACS.
> > > https://issues.apache.org/jira/browse/CLOUDSTACK-7970
> > > Any ideas about this? Are there workarounds which I can use today?
> > >
> > > Many thanks in advance.
> > > Regards,
> > > Ingo
> > >
> >
>
>


libvirtd segfault when migrating with CentOS 6.6 and CS 4.4.1

2014-12-09 Thread Lee Webb
Hi List,

(apologies if there's a double post; the original didn't look to have been sent)

I've encountered an unusual problem of libvirtd segfaulting when a live 
migration is initiated from the CS management server.

I have 5 identical (built using SaltStack) Dell PE M420 blades running CentOS 
6.6 with Intel Xeon E5-2470 v2 CPUs which all do the same thing.

The back trace from the core indicates that something is dying within libc.so.6

I've played with the CPU passthrough settings on the agent, but this doesn't 
seem to influence whether it crashes or not, & normal operation of the VMs 
(start, stop, usage etc.) all appears OK.

I'm considering trying out CentOS 7 to see whether it happens there, but 
haven't done that yet.
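For reference, the agent-side CPU settings mentioned above live in the KVM agent's properties file. A typical configuration is sketched below; the model value is illustrative, and pinning every host to one common baseline model is a frequently suggested workaround when libvirt's CPU-model comparison misbehaves during migration:

```
# /etc/cloudstack/agent/agent.properties (KVM agent)
# guest.cpu.mode can be "custom", "host-model" or "host-passthrough"
guest.cpu.mode=custom
# With "custom", give every host the same baseline model so the source
# and destination CPU definitions compare as identical.
guest.cpu.model=SandyBridge
```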

Here is the GDB backtrace

Program terminated with signal 11, Segmentation fault.
#0  0x7f7d8f7fe43a in __strcmp_sse42 () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install 
libvirt-0.10.2-46.el6_6.2.x86_64
(gdb) backtrace
#0  0x7f7d8f7fe43a in __strcmp_sse42 () from /lib64/libc.so.6
#1  0x7f7d92dd6411 in ?? () from /usr/lib64/libvirt.so.0
#2  0x7f7d92dd87e8 in ?? () from /usr/lib64/libvirt.so.0
#3  0x004aac4e in ?? ()
#4  0x0048a2cc in ?? ()
#5  0x00491110 in ?? ()
#6  0x00491ab7 in ?? ()
#7  0x004550b4 in ?? ()
#8  0x7f7d92def13f in virDomainMigratePrepare3 () from 
/usr/lib64/libvirt.so.0
#9  0x0042eddf in ?? ()
#10 0x7f7d92e50132 in virNetServerProgramDispatch () from 
/usr/lib64/libvirt.so.0
#11 0x7f7d92e4d70e in ?? () from /usr/lib64/libvirt.so.0
#12 0x7f7d92e4ddac in ?? () from /usr/lib64/libvirt.so.0
#13 0x7f7d92d6bb3c in ?? () from /usr/lib64/libvirt.so.0
#14 0x7f7d92d6b429 in ?? () from /usr/lib64/libvirt.so.0
#15 0x7f7d8fe789d1 in start_thread () from /lib64/libpthread.so.0
#16 0x7f7d8f7be9dd in clone () from /lib64/libc.so.6
 
and more specifically

#0  __strcmp_sse42 () at ../sysdeps/x86_64/multiarch/strcmp.S:260
#1  0x7f7d92dd6411 in x86ModelFind (cpu=0x7f7d68003440, map=0x7f7d680021e0, 
policy=1) at cpu/cpu_x86.c:831
#2  x86ModelFromCPU (cpu=0x7f7d68003440, map=0x7f7d680021e0, policy=1) at 
cpu/cpu_x86.c:850
#3  0x7f7d92dd87e8 in x86Compute (host=<value optimized out>, 
cpu=0x7f7d68003440, guest=0x7f7d82f04df0, message=0x7f7d82f04de0) at 
cpu/cpu_x86.c:1243
#4  0x004aac4e in qemuBuildCpuArgStr (conn=0x7f7d5920, 
driver=0x7f7d78013b20, def=0x7f7d68002830, monitor_chr=0x7f7d680026f0, 
monitor_json=true, caps=0x7f7d68002c50, 
migrateFrom=0x7f7d680136d0 "tcp:[::]:49152", migrateFd=-1, snapshot=0x0, 
vmop=VIR_NETDEV_VPORT_PROFILE_OP_MIGRATE_IN_START) at qemu/qemu_command.c:4516
#5  qemuBuildCommandLine (conn=0x7f7d5920, driver=0x7f7d78013b20, 
def=0x7f7d68002830, monitor_chr=0x7f7d680026f0, monitor_json=true, 
caps=0x7f7d68002c50, migrateFrom=0x7f7d680136d0 "tcp:[::]:49152", 
migrateFd=-1, snapshot=0x0, 
vmop=VIR_NETDEV_VPORT_PROFILE_OP_MIGRATE_IN_START) at qemu/qemu_command.c:5320
#6  0x0048a2cc in qemuProcessStart (conn=0x7f7d5920, 
driver=0x7f7d78013b20, vm=0x7f7d68006e10, migrateFrom=0x7f7d680136d0 
"tcp:[::]:49152", stdin_fd=-1, stdin_path=0x0, snapshot=0x0, 
vmop=VIR_NETDEV_VPORT_PROFILE_OP_MIGRATE_IN_START, flags=6) at 
qemu/qemu_process.c:4008
#7  0x00491110 in qemuMigrationPrepareAny (driver=0x7f7d78013b20, 
dconn=0x7f7d5920, cookiein=<value optimized out>, cookieinlen=255, 
cookieout=0x7f7d82f05ae0, cookieoutlen=0x7f7d82f05aec, 
dname=0x7f7d68002570 "i-2-10-VM", 
dom_xml=0x7f7d680013d0 "...")
[frames #8-#11 (the migrate-prepare path: uri_out=0x7f7d68002680, flags=1, 
resource=1) repeat the same dname/dom_xml arguments; the domain XML for 
i-2-10-VM (uuid 95c5aa11-f7ad-4322-b377-d153774e330f, "CentOS 6.5 (64-bit)", 
1048576 KiB, host eqx-cs-cmp-05.ipscape.com.au, product uuid 
44454c4c-3000-104b-8043-b4c04f573232) was mangled by the mail archiver and 
is elided here; frame #11 ends at remote.c:3590]
#12 remoteDispatchDomainMigratePrepare3Helper (server=<value optimized out>, 
client=<value optimized out>, msg=<value optimized out>, rerr=0x7f7d82f05b80, 
args=0x7f7d680027b0, ret=0x7f7d68002790)
at remote_dispatch.h:3695
#13 0x7f7d92e50132 in virNetServerProgramDispatchCall (prog=0x16b7700, 
server=0x16aea20, client=0x16b7010, msg=0x16b07f0) at 
rpc/virnetserverprogram.c:431
#14 virNetServerProgramDispatch (prog=0x16b7700, server=0x16aea20, 
client=0x16b7010, msg=0x16b07f0) at rpc/virnetserverprogram.c:304
(trace truncated)
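When triaging a crash like this, the useful frames are the ones gdb resolved to a source location (here cpu/cpu_x86.c, pointing at CPU-model comparison). A small, purely illustrative helper for pulling those frames out of a saved trace:

```python
import re

# Matches frames gdb resolved to "... at file:line".
FRAME_RE = re.compile(
    r"^#(\d+)\s+(?:0x[0-9a-f]+\s+in\s+)?(\w+)\s*\(.*\)\s+at\s+([\w./-]+:\d+)"
)

def resolved_frames(backtrace: str):
    """Yield (frame_no, function, file:line) for source-resolved frames."""
    for line in backtrace.splitlines():
        m = FRAME_RE.match(line)
        if m:
            yield int(m.group(1)), m.group(2), m.group(3)

trace = """#0  __strcmp_sse42 () at ../sysdeps/x86_64/multiarch/strcmp.S:260
#1  0x7f7d92dd6411 in x86ModelFind (cpu=0x1, map=0x2, policy=1) at cpu/cpu_x86.c:831
#2  x86ModelFromCPU (cpu=0x1, map=0x2, policy=1) at cpu/cpu_x86.c:850"""

for frame in resolved_frames(trace):
    print(frame)
```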


Account scoped Guest Network not available in a Project?

2014-12-05 Thread Lee Webb
Hi List,

I've scoped some Guest Networks to a particular account & was expecting them 
to be available within a project, assuming that they were hierarchical; 
however, this doesn't appear to be the case.

I'd like to share the Guest Networks with a few Projects under an Account, but 
want to hide them from other Accounts in the same Domain.

Is this possible?

Regards, lee



Re: Cannot create a Scoped Guest Network

2014-12-05 Thread Lee Webb
Hi Jayapal,

There was no evidence of the create in the logging, so it looks like a UI issue.

After getting cloudmonkey up & running, I created the network without any fuss.

Thanks for that!

Regards, Lee

> On 5 Dec 2014, at 11:39 pm, Jayapal Reddy Uradi 
>  wrote:
> 
> Hi Lee,
> 
> When you create the network with a scope, did you see the API call in the 
> log? If not, then it is a UI issue.
> Can you try the same thing with the API or CloudMonkey?
> 
> Thanks,
> Jayapal
> 
> 
> On 05-Dec-2014, at 5:59 PM, Lee Webb 
> wrote:
> 
>> Hi List!
>> 
>> I've got a 4.4.1 CS system up & running; however, I'm having issues creating 
>> a Guest Network which is scoped to a particular domain or account.
>> 
>> The overall intent is to create a Network for an Account which will allow it 
>> to connect to physical devices within a certain VLAN - in my case a hosted 
>> NAS.
>> 
>> Within the Networks tab I can create my Guest Network specifying the VLAN, 
>> IP, Range etc. without a problem provided that I leave the scoping to ALL.
>> 
>> If I try to do the same with the scoping set to anything lower, the UI just 
>> sends me back to the network tab without saying that anything was incorrect.
>> 
>> There's also no evidence of stack traces in the management logging either, 
>> it just silently fails.
>> 
>> The documentation seems to suggest that scoping the Network is possible - or 
>> should I be trying to do the same with an Isolated Network?
>> 
>> Regards, Lee
> 



Cannot create a Scoped Guest Network

2014-12-05 Thread Lee Webb
Hi List!

I've got a 4.4.1 CS system up & running; however, I'm having issues creating a 
Guest Network which is scoped to a particular domain or account.

The overall intent is to create a Network for an Account which will allow it to 
connect to physical devices within a certain VLAN - in my case a hosted NAS.

Within the Networks tab I can create my Guest Network specifying the VLAN, IP, 
Range etc. without a problem provided that I leave the scoping to ALL.

If I try to do the same with the scoping set to anything lower, the UI just 
sends me back to the network tab without saying that anything was incorrect.

There's also no evidence of stack traces in the management logging either, it 
just silently fails.

The documentation seems to suggest that scoping the Network is possible - or 
should I be trying to do the same with an Isolated Network?

Regards, Lee
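The account scoping that silently fails in the UI above maps to a handful of createNetwork API parameters. A hypothetical parameter set for an account-scoped shared guest network on a physical VLAN (parameter names follow the CloudStack API; every value below is a placeholder, not taken from the original post):

```python
# Hypothetical createNetwork parameters for an account-scoped shared
# guest network on a physical VLAN. All values are placeholders.
params = {
    "command": "createNetwork",
    "name": "nas-vlan",
    "displaytext": "Shared NAS VLAN",
    "networkofferingid": "OFFERING-UUID",
    "zoneid": "ZONE-UUID",
    "vlan": "100",
    "gateway": "10.0.100.1",
    "netmask": "255.255.255.0",
    "startip": "10.0.100.10",
    "endip": "10.0.100.200",
    "acltype": "Account",      # scope below the default domain-wide ACL
    "domainid": "DOMAIN-UUID",
    "account": "tenant-account",
}
print(params["acltype"])
```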