Re: Secondary Storage

2020-05-06 Thread Mr Jazze
@Andrija

CloudStack 4.13.1.0 running on Ubuntu 16.04 Server
3 x hosts (1 x management/MySQL/NFS shares and 2 x KVM hosts)
Network Type: Advanced Zone
Guest Isolation: VXLAN (MTU 9000 everywhere)
Management Server Network = 2 interfaces (1 x manage, 1 x storage)

# MANAGEMENT & NFS-SECONDARY
auto eth0
iface eth0 inet static
address 172.16.15.230 (manage)
netmask 255.255.255.0
gateway 172.16.15.254
dns-nameservers 172.16.15.254
dns-search cloudstack.local

# NFS-PRIMARY
auto eth1
iface eth1 inet static
address 192.168.201.230
netmask 255.255.255.0
mtu 9000


KVM Servers Network = 4 interfaces (1 x manage, 1 x public, 1 x guest, 1 x storage)

# MANAGEMENT & NFS-SECONDARY
auto eth0
iface eth0 inet static
address 172.16.15.231 (kvm1) and 172.16.15.232 (kvm2)
netmask 255.255.255.0
gateway 172.16.15.254
dns-nameservers 172.16.15.254
dns-search cloudstack.local

# PUBLIC BRIDGE
auto cloudbr0
iface cloudbr0 inet manual
bridge_ports eth1
bridge_fd 5
bridge_stp off
bridge_maxwait 1

# GUEST BRIDGE
auto cloudbr1
iface cloudbr1 inet static
bridge_ports eth2
bridge_fd 5
bridge_stp off
bridge_maxwait 1
address 192.0.2.231 (kvm1) and 192.0.2.232 (kvm2)
netmask 255.255.255.0

# NFS-PRIMARY
auto eth3
iface eth3 inet static
address 192.168.201.231 (kvm1) and 192.168.201.232 (kvm2)
netmask 255.255.255.0
mtu 9000
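For what it's worth, the way I sanity-check the jumbo-frame path on the
storage network is a do-not-fragment ping sized just under the 9000-byte MTU
(8972 = 9000 minus 20 bytes of IP header and 8 bytes of ICMP header); the
addresses are simply the ones from the configs above:

# from kvm1 towards the NFS server's storage interface
ping -M do -s 8972 -c 3 192.168.201.230
# and between the two KVM hosts' storage interfaces
ping -M do -s 8972 -c 3 192.168.201.232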


After completing the Advanced Zone wizard and allowing time for backend
processing, the dashboard shows no metrics for secondary storage (0/0 KB),
and the console proxy and secondary storage VMs repeatedly cycle through
build-up and tear-down without ever stabilizing in an online state.


I've also tried CentOS deployments, but tell me why that would matter, given
that your documentation clearly states support for any of them (CentOS,
RedHat, Ubuntu, Xen, VMware) and provides instructions for each?


Yes, I executed and validated all host settings and communications: pings
to/from everything, nslookup, rpcinfo, exportfs, showmount, libvirtd
settings, qemu settings, brctl, etc.
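To be concrete, the storage checks from each KVM host were roughly along
these lines (the mount test is the one Andrija asks about in point 3 below;
/export/secondary is an assumption about my NFS layout, so substitute your
own export path):

# confirm the NFS server is answering and exporting
rpcinfo -p 172.16.15.230
showmount -e 172.16.15.230

# manual mount test of secondary storage
mkdir -p /mnt/sec-test
mount -t nfs 172.16.15.230:/export/secondary /mnt/sec-test
df -h /mnt/sec-test
umount /mnt/sec-test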


Finally, as for my attitude, as you put it: I've been primarily a Windows
Systems Administrator for over 20 years and have worked with Linux and
open-source products for about 15 years. I say that not to be boastful, but
simply to convey a degree of technical know-how. Based on my calculations,
CloudStack has been around for 7 to 8 years and I'm sure it has achieved
tremendous milestones. But experience has taught me that where we often fall
short is documentation; not just in producing it, but also in the accuracy
and detail of its content. I've had the distinct pleasure of working for one
organization that required us to have a non-technical person successfully
follow our documentation before it was considered standard operating
procedure.


And as for your offer to have me produce videos, I really wouldn't have an
issue with the idea, except for the considerable matter of not having proper
technical knowledge of the product. Given the time and effort it would take
someone to get me up to speed, that same or less effort could be used to
produce the videos and share the knowledge with the multitudes.




On Wed, May 6, 2020 at 2:16 PM Andrija Panic 
wrote:

> What **exactly** is your problem that you are trying to solve?
>
> i.e. this makes no sense to me or at least I can't understand it:
>
> "I cannot get the console to show/recognize the secondary NFS storage which
> seems to be preventing the build/starting of systemvms"
>
> 1. Describe your setup, your networking, basic zone or advanced zone, do
> you have a dedicated Storage network (Secondary Storage network, that is) or
> not, etc. What are your IP ranges/reserved IP ranges, VLANs, etc.
> 2. I recommend CentOS 7.x for the beginning, not Ubuntu 16.04
> 3. Did you preseed the systemVM templates via the script as in the manual,
> and did you confirm your KVM hosts can mount Primary AND Secondary Storage
> (manual test)?
> 4. Later you might want to upload your logs if we are troubleshooting a
> specific problem, but not for now
>
> You need to provide some more information besides what you have provided in
> order to be helped.
> Regards,
>
> P.S. Related to your attitude... When you learn how to deploy CloudStack
> on that setup, do you swear you will update all the missing CloudStack
> documentation and produce all the missing videos (I'm sure you have all the
> time in the world besides your work hours, your family obligations
> (wife/kids), etc.)? I don't think so either... This is a community project
> and one needs to use their brain to connect the dots - it's cloud, a.k.a.
> complex stuff, not an average web server setup. Cheers
>
> On Wed, 6 May 2020 at 20:48, Mr Jazze  wrote:
>
> > Can I get some support with 

Re: Secondary Storage

2020-05-06 Thread Mr Jazze
Can I get some support with this storage issue? It is really a matter of
understanding exactly how the networking SHOULD be configured in a
multi-node deployment. If this is the only resource available whereby the
CloudStack development team offers assistance, it is sorely lacking.

Just to be clear, I'd like to see the potential of a successful CloudStack
deployment (a single-server installation appears to operate as expected).
However, it's quite a different picture when attempting to perform a full-on,
production-like deployment. As I've somewhat stated before, the documentation
is seriously lacking adequate rationale as to why a setting/configuration is
made.

Something else that has bewildered me is why there has not been an effort to
produce YouTube videos demonstrating/teaching deployments.

On Tue, May 5, 2020 at 4:37 AM Karol Jędrzejczyk 
wrote:

> On Tue, 5 May 2020 at 00:24, Mr Jazze  wrote:
>
> > Luis, thanks for your feedback. Though waiting seemed to resolve your
> > issue, I'm sure that is not the intended deployment outcome.
> >
>
> Waiting didn't resolve the issue for me, unfortunately.
>
> I've made some observations around the problem.
> > - The system vms console/storage repeatedly rebuild and never come
> online.
> > - The system vm template is downloaded to secondary storage (per
> > instructions) yet does not show size in console.
> > - The dashboard shows secondary 0/0 KB, primary 192.50 kb/787 GB. I don't
> > understand why there is yet a separate storage meter.
> > - This VXLAN deployment for some unknown reason now shows the VNI range
> on
> > all three networks (Management, Public & Guest) instead of just the Guest
> > network.
> >
>
> I have similar observations. At the moment I have one of the VMs in a
> *running* state but there's no connectivity, the other one is in
> *starting*. I see two qemu processes on the hypervisor but virsh list
> returns an empty list. I'm planning to connect to the VNC servers next to
> see what's going on.
> --
> Karol Jędrzejczyk
>

-- 

==

My Search to Build a Private Cloud!


Re: Secondary Storage

2020-05-04 Thread Mr Jazze
Luis, thanks for your feedback. Though waiting seemed to resolve your issue,
I'm sure that is not the intended deployment outcome.

I've made some observations around the problem (a couple of quick checks
worth running are sketched after this list).
- The system VMs (console proxy & secondary storage) repeatedly rebuild and
never come online.
- The system VM template was downloaded to secondary storage (per the
instructions) yet does not show a size in the console.
- The dashboard shows secondary 0/0 KB, primary 192.50 KB/787 GB. I don't
understand why there is even a separate storage meter.
- This VXLAN deployment, for some unknown reason, now shows the VNI range on
all three networks (Management, Public & Guest) instead of just the Guest
network.
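The quick checks I mean are re-seeding the system VM template by hand and
watching the management log while the SSVM tries to come up. The script path
is the one the install guide uses; the mount point and template URL are only
illustrative, so verify them against the 4.13.1 release notes first:

# re-seed the system VM template directly onto the secondary storage export
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
    -m /mnt/secondary \
    -u http://download.cloudstack.org/systemvm/4.13/systemvmtemplate-4.13.1-kvm.qcow2.bz2 \
    -h kvm -F

# then watch for secondary storage / SSVM errors on the management server
tail -f /var/log/cloudstack/management/management-server.log | grep -iE 'secstorage|secondary|ssvm'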

On Mon, May 4, 2020 at 2:41 PM Luis Martinez 
wrote:

> I had a similar problem: secondary storage not mounted by CS. I left CS
> running for a couple of hours and now I see the secondary storage working.
> I am not sure if it has to download something before mounting it.
>
> On 5/4/2020 3:27 PM, Mr Jazze wrote:
> > I've now tried 3 attempts to deploy CS with the following resources:
> >
> > Management Server also hosting NFS shares (primary & secondary)
> > 2 x KVM Hosts running Ubuntu 16.04
> > CloudStack 4.13.1
> > Native Linux Bridge
> > VXLAN Isolation
> >
> > I cannot get the console to show/recognize the secondary NFS storage
> which
> > seems to be preventing the build/starting of systemvms (console proxy &
> > storage).
> >
> > Can I get someone to detail exactly the steps to properly setup storage?
> >
> > Please don't reference the CS online documentation as I've walked through
> > those instructions and find much to be lacking.
> >
>




Secondary Storage

2020-05-04 Thread Mr Jazze
I've now made 3 attempts to deploy CS with the following resources:

Management Server also hosting NFS shares (primary & secondary)
2 x KVM Hosts running Ubuntu 16.04
CloudStack 4.13.1
Native Linux Bridge
VXLAN Isolation

I cannot get the console to show/recognize the secondary NFS storage, which
seems to be preventing the build/starting of the system VMs (console proxy &
storage).

Can I get someone to detail exactly the steps to properly set up storage?

Please don't reference the CS online documentation, as I've walked through
those instructions and find much of it to be lacking.
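To show what I mean by "exactly the steps": the NFS side I'm working from is
roughly the minimal sketch below (the /export paths and the wide-open
wildcard are my lab choices, not something the docs prescribe):

# /etc/exports on the management/NFS server
/export/primary   *(rw,async,no_root_squash,no_subtree_check)
/export/secondary *(rw,async,no_root_squash,no_subtree_check)

# apply and verify
exportfs -ra
showmount -e localhost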



OVS network provider

2020-04-29 Thread Mr Jazze
The instructions for 4.13 state the following:

*The OVS provider is disabled by default. Navigate to the “Network Service
Providers” configuration of the physical network with the GRE isolation
type. Navigate to the OVS provider and press the “Enable Provider” button.*


However, when I look at the list of providers, "OVS" is not listed as
depicted here:
http://docs.cloudstack.apache.org/en/latest/plugins/ovs-plugin.html.

I was running 4.13 on Ubuntu 18.04 with OVS 2.9. Now I'll try 16.04, which
installs OVS 2.5. Does CS support OVS on Ubuntu?

Is it an absolute requirement to configure the networking as follows? (My
rough take on the agent-side pieces is sketched after the excerpt below.)

With KVM, the traffic type should be configured with the traffic label that
matches the name of the Integration Bridge on the hypervisor. For example,
you should set the traffic labels as follows:

   - Management & Storage traffic: cloudbr0
   - Guest & Public traffic: cloudbr1

See the KVM networking configuration guide for more detail.
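For reference, my rough take on the agent-side OVS pieces on each KVM host is
below; the two agent.properties keys are the ones named in the KVM/OVS plugin
docs, but treat the whole thing as a sketch rather than a verified recipe:

# create the OVS bridges and attach the uplinks
ovs-vsctl add-br cloudbr0
ovs-vsctl add-port cloudbr0 eth0
ovs-vsctl add-br cloudbr1
ovs-vsctl add-port cloudbr1 eth1

# /etc/cloudstack/agent/agent.properties
network.bridge.type=openvswitch
libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.OvsVifDriver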




Re: VXLAN Connectivity

2020-04-11 Thread Mr Jazze
ubtkvm2:~$ netstat -gn

IPv6/IPv4 Group Memberships
Interface       RefCnt Group
--------------- ------ ---------------------
lo              1      224.0.0.1
eth0            1      224.0.0.1
eth1            1      239.0.7.227
eth1            1      239.0.7.220
eth1            1      224.0.0.1
eth0.1001       1      224.0.0.1
cloudbr0        1      224.0.0.1
eth1.1003       1      224.0.0.1
cloudbr2        1      224.0.0.1
eth0.1002       1      224.0.0.1
cloudbr1        1      224.0.0.1
cloud0          1      224.0.0.1
vxlan2012       1      224.0.0.1
brvx-2012       1      224.0.0.1
vxlan2019       1      224.0.0.1
brvx-2019       1      224.0.0.1
vnet0           1      224.0.0.1
vnet1           1      224.0.0.1
vnet2           1      224.0.0.1
vnet3           1      224.0.0.1
vnet4           1      224.0.0.1
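Since the 239.0.7.x groups only show up on eth1 above, the other thing worth
checking is what the kernel actually created for each VNI; vxlan2012 is just
one of the interfaces from the output above:

# show the VXLAN details (VNI, multicast group, underlay device)
ip -d link show vxlan2012

# and the MACs the bridge has learned across the tunnel
bridge fdb show dev vxlan2012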

On Thu, Mar 19, 2020 at 10:50 PM Mr Jazze  wrote:

> Hi @li jerry,
>
> There is no physical switch involved as the whole setup is configured in a
> nested Hyper-V environment where yes the virtual switch is configured with
> MTU 9000 and Trunk VLANs
>
> Here is overview:
>
> External vSwitch = CloudStack (MTU 9000)
> All ethX interfaces are vlan ports off of the vswitch
>
> auto eth0.1001
> iface eth0.1001 inet manual
> mtu 9000
>
> auto eth0.1002
> iface eth0.1002 inet manual
> mtu 9000
>
> auto eth1.1003
> iface eth1.1003 inet manual
> mtu 9000
>
> # MANAGEMENT BRIDGE
> auto cloudbr0
> iface cloudbr0 inet static
> address 192.168.101.11
> netmask 255.255.255.0
> gateway 192.168.101.1
> dns-nameservers 192.168.101.1
> bridge_ports eth0.1001
> bridge_fd 5
> bridge_stp off
> bridge_maxwait 1
>
> # PUBLIC BRIDGE
> auto cloudbr1
> iface cloudbr1 inet manual
> bridge_ports eth0.1002
> bridge_fd 5
> bridge_stp off
> bridge_maxwait 1
>
> # GUEST (PRIVATE) BRIDGE
> auto cloudbr2
> iface cloudbr2 inet static
> address 192.168.254.11
> netmask 255.255.255.0
> bridge_ports eth1.1003
> bridge_fd 5
> bridge_stp off
> bridge_maxwait 1
>
> cloudbr0, cloudbr1 and cloudbr2 = were assigned to their appropriate
> traffic labels
>
> 1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode
> DEFAULT group default qlen 1
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 2: eth0:  mtu 9000 qdisc mq state UP mode
> DEFAULT group default qlen 1000
> link/ether 00:15:5d:0a:0d:7e brd ff:ff:ff:ff:ff:ff
> 3: eth1:  mtu 9000 qdisc mq state UP mode
> DEFAULT group default qlen 1000
> link/ether 00:15:5d:0a:0d:80 brd ff:ff:ff:ff:ff:ff
> 4: eth1.1003@eth1:  mtu 9000 qdisc
> noqueue master cloudbr2 state UP mode DEFAULT group default qlen 1000
> link/ether 00:15:5d:0a:0d:80 brd ff:ff:ff:ff:ff:ff
> 5: eth0.1001@eth0:  mtu 9000 qdisc
> noqueue master cloudbr0 state UP mode DEFAULT group default qlen 1000
> link/ether 00:15:5d:0a:0d:7e brd ff:ff:ff:ff:ff:ff
> 6: cloudbr0:  mtu 9000 qdisc noqueue
> state UP mode DEFAULT group default qlen 1000
> link/ether 00:15:5d:0a:0d:7e brd ff:ff:ff:ff:ff:ff
> 7: cloudbr2:  mtu 9000 qdisc noqueue
> state UP mode DEFAULT group default qlen 1000
> link/ether 00:15:5d:0a:0d:80 brd ff:ff:ff:ff:ff:ff
> 8: eth0.1002@eth0:  mtu 9000 qdisc
> noqueue master cloudbr1 state UP mode DEFAULT group default qlen 1000
> link/ether 00:15:5d:0a:0d:7e brd ff:ff:ff:ff:ff:ff
> 9: cloudbr1:  mtu 9000 qdisc noqueue
> state UP mode DEFAULT group default qlen 1000
> link/ether 00:15:5d:0a:0d:7e brd ff:ff:ff:ff:ff:ff
> 10: cloud0:  mtu 1500 qdisc noqueue state
> UP mode DEFAULT group default qlen 1000
> link/ether fe:00:a9:fe:44:96 brd ff:ff:ff:ff:ff:ff
> 11: vnet0:  mtu 1500 qdisc pfifo_fast
> master cloud0 state UNKNOWN mode DEFAULT group default qlen 1000
> link/ether fe:00:a9:fe:44:96 brd ff:ff:ff:ff:ff:ff
> 13: vnet2:  mtu 9000 qdisc pfifo_fast
> master cloudbr0 state UNKNOWN mode DEFAULT group default qlen 1000
> link/ether fe:00:0e:00:00:1c brd ff:ff:ff:ff:ff:ff
> 15: vnet4:  mtu 9000 qdisc pfifo_fast
> master cloudbr1 state UNKNOWN mode DEFAULT group default qlen 1000
> link/ether fe:00:87:00:00:84 brd ff:ff:ff:ff:ff:ff
> 17: vnet6:  mtu 1500 qdisc pfifo_fast
> master cloud0 state UNKNOWN mode DEFAULT group default qlen 1000
> link/ether fe:00:a9:fe:62:dc brd ff:ff:ff:ff:ff:ff
> 18: vnet7:  mtu 9000 qdisc htb master
> cloudbr1 state UNKNOWN mode DEFAULT group default qlen 1000
> link/ether fe:00:ee:00:00:86 brd ff:ff:ff:ff:ff:ff
> 19: vxlan2005:  mtu 8950 qdisc noq

Re: VXLAN Connectivity

2020-03-19 Thread Mr Jazze
Hi @li jerry,

There is no physical switch involved, as the whole setup is configured in a
nested Hyper-V environment where, yes, the virtual switch is configured with
MTU 9000 and trunked VLANs.

Here is an overview:

External vSwitch = CloudStack (MTU 9000)
All ethX interfaces are VLAN ports off of the vSwitch

auto eth0.1001
iface eth0.1001 inet manual
mtu 9000

auto eth0.1002
iface eth0.1002 inet manual
mtu 9000

auto eth1.1003
iface eth1.1003 inet manual
mtu 9000

# MANAGEMENT BRIDGE
auto cloudbr0
iface cloudbr0 inet static
address 192.168.101.11
netmask 255.255.255.0
gateway 192.168.101.1
dns-nameservers 192.168.101.1
bridge_ports eth0.1001
bridge_fd 5
bridge_stp off
bridge_maxwait 1

# PUBLIC BRIDGE
auto cloudbr1
iface cloudbr1 inet manual
bridge_ports eth0.1002
bridge_fd 5
bridge_stp off
bridge_maxwait 1

# GUEST (PRIVATE) BRIDGE
auto cloudbr2
iface cloudbr2 inet static
address 192.168.254.11
netmask 255.255.255.0
bridge_ports eth1.1003
bridge_fd 5
bridge_stp off
bridge_maxwait 1

cloudbr0, cloudbr1 and cloudbr2 were assigned to their appropriate traffic
labels.

1: lo:  mtu 65536 qdisc noqueue state UNKNOWN mode
DEFAULT group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0:  mtu 9000 qdisc mq state UP mode
DEFAULT group default qlen 1000
link/ether 00:15:5d:0a:0d:7e brd ff:ff:ff:ff:ff:ff
3: eth1:  mtu 9000 qdisc mq state UP mode
DEFAULT group default qlen 1000
link/ether 00:15:5d:0a:0d:80 brd ff:ff:ff:ff:ff:ff
4: eth1.1003@eth1:  mtu 9000 qdisc noqueue
master cloudbr2 state UP mode DEFAULT group default qlen 1000
link/ether 00:15:5d:0a:0d:80 brd ff:ff:ff:ff:ff:ff
5: eth0.1001@eth0:  mtu 9000 qdisc noqueue
master cloudbr0 state UP mode DEFAULT group default qlen 1000
link/ether 00:15:5d:0a:0d:7e brd ff:ff:ff:ff:ff:ff
6: cloudbr0:  mtu 9000 qdisc noqueue state
UP mode DEFAULT group default qlen 1000
link/ether 00:15:5d:0a:0d:7e brd ff:ff:ff:ff:ff:ff
7: cloudbr2:  mtu 9000 qdisc noqueue state
UP mode DEFAULT group default qlen 1000
link/ether 00:15:5d:0a:0d:80 brd ff:ff:ff:ff:ff:ff
8: eth0.1002@eth0:  mtu 9000 qdisc noqueue
master cloudbr1 state UP mode DEFAULT group default qlen 1000
link/ether 00:15:5d:0a:0d:7e brd ff:ff:ff:ff:ff:ff
9: cloudbr1:  mtu 9000 qdisc noqueue state
UP mode DEFAULT group default qlen 1000
link/ether 00:15:5d:0a:0d:7e brd ff:ff:ff:ff:ff:ff
10: cloud0:  mtu 1500 qdisc noqueue state
UP mode DEFAULT group default qlen 1000
link/ether fe:00:a9:fe:44:96 brd ff:ff:ff:ff:ff:ff
11: vnet0:  mtu 1500 qdisc pfifo_fast
master cloud0 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether fe:00:a9:fe:44:96 brd ff:ff:ff:ff:ff:ff
13: vnet2:  mtu 9000 qdisc pfifo_fast
master cloudbr0 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether fe:00:0e:00:00:1c brd ff:ff:ff:ff:ff:ff
15: vnet4:  mtu 9000 qdisc pfifo_fast
master cloudbr1 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether fe:00:87:00:00:84 brd ff:ff:ff:ff:ff:ff
17: vnet6:  mtu 1500 qdisc pfifo_fast
master cloud0 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether fe:00:a9:fe:62:dc brd ff:ff:ff:ff:ff:ff
18: vnet7:  mtu 9000 qdisc htb master
cloudbr1 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether fe:00:ee:00:00:86 brd ff:ff:ff:ff:ff:ff
19: vxlan2005:  mtu 8950 qdisc noqueue
master brvx-2005 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 2e:08:0e:8f:da:2b brd ff:ff:ff:ff:ff:ff
20: brvx-2005:  mtu 8950 qdisc noqueue
state UP mode DEFAULT group default qlen 1000
link/ether 2e:08:0e:8f:da:2b brd ff:ff:ff:ff:ff:ff
21: vnet8:  mtu 8950 qdisc htb master
brvx-2005 state UNKNOWN mode DEFAULT group default qlen 1000
link/ether fe:00:1c:e9:00:06 brd ff:ff:ff:ff:ff:ff

QUESTION: I understand the MTU requirement and, as you can see from the
output above, it is being set. But does this same requirement also apply to
the router?
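As a partial answer to my own question: the vxlan/brvx interfaces above sit
at 8950 because the kernel reserves 50 bytes for the VXLAN encapsulation, so
anything attached on the guest side (the router's guest NIC included, as far
as I can tell) has to fit within 8950. A quick underlay check between the two
hosts (the .12 address for the second host is my assumption):

# verify jumbo frames pass on the VXLAN underlay (guest bridge subnet)
ping -M do -s 8972 -c 3 192.168.254.12

# confirm the derived MTU on the VXLAN side
ip link show vxlan2005 | grep -o 'mtu [0-9]*'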


On Tue, Mar 17, 2020 at 8:15 PM li jerry  wrote:

> Please check your switch port MTU value (9000); the default is 1500.
>
> VXLAN adds its own encapsulation header, so with the default MTU value it
> will not be able to transfer data.
>
> -----Original Message-----
> From: Mr Jazze 
> Sent: 18 March 2020 06:31
> To: CloudStack Mailing-List 
> Subject: VXLAN Connectivity
>
> Hello Again,
>
> I've reconfigured my test environment to use VXLAN instead of OVS which
> went no where. I've of course deployed Advance Mode and put all the pieces
> in place which yielded a somewhat functional cloud. I was able to deploy
> Windows Server 2016 virtual machine. Initially, this VM didn't acquire it's
> DHCP address from VPC router. I noticed VM was running on 2nd host and
> router was running on 1st host, so I migrated VM to the same host as
> router; then it was able to acquire DHCP address and 

Re: VXLAN Connectivity

2020-03-19 Thread Mr Jazze
Hi @Simon

Yes, I'm using the native Ubuntu 16.04 VXLAN feature. And yes, I have
routable IP addresses assigned to the private interfaces (192.168.254.x/24)
on both hosts.

Likewise, I'd like to reiterate that the VM was initially able to obtain a
DHCP address and ping out to the internet once the VPC router was on the same
host.
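On Simon's iptables point (quoted below), a lab-only way to rule the firewall
out is a blanket multicast accept plus a quick look at IGMP on the underlay;
eth1.1003 is the guest-underlay VLAN interface from my earlier config:

# allow inbound multicast while testing (not a hardened rule)
iptables -I INPUT -d 224.0.0.0/4 -j ACCEPT

# watch IGMP membership reports on the VXLAN underlay
tcpdump -ni eth1.1003 igmp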

On Tue, Mar 17, 2020 at 8:35 PM Simon Weller 
wrote:

>
> I assume you're using the native linux VXLAN feature. If so, it uses
> multicast, so make sure you have a routable ip address on the interface
> being used for VXLAN, or it won't pass any traffic. Also make sure iptables
> is allowing multicast traffic to pass.
>
> -Soi
> 
> From: Mr Jazze 
> Sent: Tuesday, March 17, 2020 5:31 PM
> To: CloudStack Mailing-List 
> Subject: VXLAN Connectivity
>
> Hello Again,
>
> I've reconfigured my test environment to use VXLAN instead of OVS which
> went no where. I've of course deployed Advance Mode and put all the pieces
> in place which yielded a somewhat functional cloud. I was able to deploy
> Windows Server 2016 virtual machine. Initially, this VM didn't acquire it's
> DHCP address from VPC router. I noticed VM was running on 2nd host and
> router was running on 1st host, so I migrated VM to the same host as
> router; then it was able to acquire DHCP address and ping 1.1.1.1. Then,
> while trying to troubleshoot why there was no connectivity across hosts the
> router took a dump and I had to destroy it to get another router deployed,
> now VM is unable to get IP address regardless of which host.
>
> Does anyone have any experience with similar issue with VXLAN connectivity
> and/or advice on how to resolve?
>
> --
>
> ==
>
> My Search to Build a Private Cloud!
>




VXLAN Connectivity

2020-03-17 Thread Mr Jazze
Hello Again,

I've reconfigured my test environment to use VXLAN instead of OVS, which went
nowhere. I've of course deployed Advanced mode and put all the pieces in
place, which yielded a somewhat functional cloud. I was able to deploy a
Windows Server 2016 virtual machine. Initially, this VM didn't acquire its
DHCP address from the VPC router. I noticed the VM was running on the 2nd
host and the router was running on the 1st host, so I migrated the VM to the
same host as the router; then it was able to acquire a DHCP address and ping
1.1.1.1. Then, while I was trying to troubleshoot why there was no
connectivity across hosts, the router took a dump and I had to destroy it to
get another router deployed; now the VM is unable to get an IP address
regardless of which host it is on.

Does anyone have any experience with a similar VXLAN connectivity issue
and/or advice on how to resolve it?
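If it helps anyone picking this up, my plan for narrowing it down is to watch
for the DHCP exchange on the guest VXLAN bridge of each host while the VM
boots; brvx-2005 is only an example name, use whichever brvx-<VNI> bridge
CloudStack created for that guest network:

# on the host running the virtual router
tcpdump -eni brvx-2005 port 67 or port 68

# on the host running the guest VM (other end of the tunnel)
tcpdump -eni brvx-2005 port 67 or port 68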



OVS Provider

2020-03-09 Thread Mr Jazze
I've set up a GRE guest isolation network, but per the instructions, when I
go to enable the OVS provider it isn't listed.

What does this mean?
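In case the UI is just hiding it, one way to double-check from the API side
would be something like the following with cloudmonkey (the physical network
id is specific to your zone):

# find the physical network carrying the GRE/guest traffic
cloudmonkey list physicalnetworks

# then list its service providers and look for Ovs
cloudmonkey list networkserviceproviders physicalnetworkid=<physical-network-uuid>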



Host Reassignment Procedure

2020-02-29 Thread Mr Jazze
Hi All,



Is there a documented procedure for removing a host from an existing cluster,
not to be re-installed, but rather joined to another cluster or pod?
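A sketch of the flow I would expect, based on the host lifecycle APIs via
cloudmonkey (not an official procedure; the placeholders are obviously mine,
and it's worth double-checking against the admin docs):

# drain and remove the host from its current cluster
cloudmonkey prepare hostformaintenance id=<host-uuid>
cloudmonkey delete host id=<host-uuid>

# then add it to the target cluster/pod
cloudmonkey add host zoneid=<zone-uuid> podid=<pod-uuid> clusterid=<target-cluster-uuid> \
    hypervisor=KVM url=http://<host-ip> username=root password=<password>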


