All,

I was wondering if anyone else is experiencing this problem when using 
secondary storage on a devcloud-style VM with a host-only and a NAT adapter.  
One aspect of this issue that seems interesting is the following route table 
from the SSVM:

root@s-5-TEST:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.3.2        192.168.56.1    255.255.255.255 UGH   0      0        0 eth1
10.0.3.0        *               255.255.255.0   U     0      0        0 eth2
192.168.56.0    *               255.255.255.0   U     0      0        0 eth1
192.168.56.0    *               255.255.255.0   U     0      0        0 eth3
link-local      *               255.255.0.0     U     0      0        0 eth0
default         10.0.3.2        0.0.0.0         UG    0      0        0 eth2

In particular, the gateways for the management and guest networks do not match 
the configuration provided to the management server (i.e. 10.0.3.2 is the 
gateway for the 10.0.3.0/24 network and 192.168.56.1 is the gateway for the 
192.168.56.0/24 network).  With this configuration, the SSVM has a socket 
connection to the management server, but is in alert state.  Finally, when I 
remove the host-only NIC and use only a NAT adapter, the SSVM's networking 
works as expected, leading me to believe that the segregated network 
configuration is at the root of the problem.

Until I can get the networking on the SSVM configured, I am unable to complete 
the testing of the S3-backed Secondary Storage enhancement.

Thank you for your help,
-John

On Dec 3, 2012, at 4:46 PM, John Burwell <jburw...@basho.com> wrote:

> All,
> 
> I am setting up a multi-zone devcloud configuration on VirtualBox 4.2.4 using 
> Ubuntu 12.04.1 and Xen 4.1.  I have configured the base management server 
> VM (zone1) to serve as both the zone1 host and the management server 
> (running MySQL), with eth0 as a host-only adapter with a static IP of 
> 192.168.56.15 and eth1 as a NAT adapter (see the attached zone1-interfaces 
> file for the exact network configuration on the VM).  The management and 
> guest networks are configured as follows:
> 
> Zone 1
> Management: 192.168.56.100-149 gw 192.168.56.1 dns 10.0.3.2 (?)
> Guest: 10.0.3.200-10.0.3.220 gw 10.0.3.2 dns 8.8.8.8
> Zone 2
> Management: 192.168.56.150-200 gw 192.168.56.1 dns 10.0.3.2 (?)
> Guest: 10.0.3.221-240 gw 10.0.3.2 dns 8.8.8.8
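> 
> For anyone trying to reproduce this, the relevant portion of the attached 
> Marvin config looks roughly like the fragment below for zone 1.  This is an 
> illustrative sketch only -- I am quoting field names from memory of the stock 
> devcloud sample (pods with startip/endip for the management range, ipranges 
> for the guest range), so check them against your Marvin version:
> 
> {
>   "zones": [{
>     "name": "zone1",
>     "pods": [{
>       "name": "pod1",
>       "startip": "192.168.56.100",
>       "endip": "192.168.56.149",
>       "gateway": "192.168.56.1",
>       "netmask": "255.255.255.0"
>     }],
>     "ipranges": [{
>       "startip": "10.0.3.200",
>       "endip": "10.0.3.220",
>       "gateway": "10.0.3.2",
>       "netmask": "255.255.255.0"
>     }]
>   }]
> }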
> 
> The management server deploys and starts without error.  I then populate the 
> configuration using the attached Marvin configuration file 
> (zone1.devcloud.cfg) and restart the management server to allow the 
> global configuration option changes to take effect.  Following the restart, 
> the CPVM and SSVM start without error.  Unfortunately, they drop into alert 
> status, and the SSVM is unable to connect outbound through the guest network 
> (very important for my tests because I am testing S3-backed secondary 
> storage).  
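> 
> For the outbound check specifically, I am running roughly the following from 
> the SSVM console (a quick sketch; s3.amazonaws.com is just a stand-in for 
> whatever endpoint you are testing against, and I am assuming wget is present 
> on the system VM image):
> 
> ping -c 3 10.0.3.2                              # guest gateway reachable?
> ping -c 3 8.8.8.8                               # raw outbound IP connectivity?
> wget -q -O /dev/null http://s3.amazonaws.com/   # DNS resolution + HTTP out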
> 
> From the diagnostic checks I have performed on the management server and the 
> SSVM, it appears that the daemon on the SSVM is connecting back to the 
> management server.  I have attached a set of diagnostic information from the 
> management server (mgmtsvr-zone1-diagnostics.log) and SSVM server 
> (ssvm-zone1-diagnostics.log) that includes the results of ifconfig, route, 
> netstat and ping checks, as well as other information (e.g. the contents of 
> /var/cache/cloud/cmdline on the SSVM).  Finally, I have attached the vmops 
> log from the management server (vmops-zone1.log).
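> 
> (For the connection check itself, I am looking for an established TCP session 
> back to the management server on port 8250, the default agent port -- adjust 
> if you have changed the port global setting.  The health script path below 
> may vary by build:)
> 
> # quick health script that ships on the SSVM
> /usr/local/cloud/systemvm/ssvm-check.sh
> # agent link back to the management server (default agent port is 8250)
> netstat -tnp | grep 8250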
> 
> What changes need to be made to the management server configuration in order 
> to start an SSVM that can communicate with the secondary storage NFS volumes 
> and the management server, and connect to hosts on the Internet?
> 
> Thanks for your help,
> -John
> 
> <ssvm-zone1-diagnostics.log>
> <vmops-zone1.tar.gz>
> <mgmtsvr-zone1-diagnostics.log>
> <zone1-interfaces>
> <zone1.devcloud.cfg>
