Re: Primary storage recovery

2024-04-06 Thread Niclas Lindblom
Hi,

Latest version 4.19.01

Once I calmed down, I realised the primary storage was probably a red herring: both 
hosts are down and the CloudStack agent is not starting, which I believe is 
related to a Traefik load balancer that I also had to recover. Let me do some 
more troubleshooting and I will come back with better details once I have pinned 
down the problem, if I still need help.

Thanks for the response anyway.

Niclas

> On 6 Apr 2024, at 12:30, Jayanth Babu A  
> wrote:
>
> Hello Niclas,
> May I know which version of CloudStack you are running? Is it 4.19?
>
> Let us say we keep the "browse" functionality aside for now: are you able to 
> start and manage the VMs, images and ISOs from CloudStack?
>
> You also said that you managed to restore a backup; how much loss are we 
> talking about here?
>
> Thanks,
> Jayanth
>
> 
> From: Niclas Lindblom 
> Sent: Saturday, April 6, 2024 2:11:14 pm
> To: users@cloudstack.apache.org 
> Subject: Primary storage recovery
>
> Hello,
>
> I have had a disk failure on my NFS server, which hosts primary and secondary 
> storage. I have managed to restore a backup, and the file structure is back on 
> the primary storage on the NFS server. However, it seems CloudStack has lost 
> the reference to it: primary storage is showing as "up", but when I click 
> browse, it is empty. I have validated the NFS connection and can mount the 
> primary storage to a test folder, so it appears that some relation 
> between CloudStack and the primary storage volumes has disappeared.
>
> I am in a bit of a panic here, so it is possible I missed something, but are 
> there any steps I could take from here?
>
> Thanks
>
> Niclas





Primary storage recovery

2024-04-06 Thread Niclas Lindblom
Hello,

I have had a disk failure on my NFS server, which hosts primary and secondary 
storage. I have managed to restore a backup, and the file structure is back on 
the primary storage on the NFS server. However, it seems CloudStack has lost 
the reference to it: primary storage is showing as "up", but when I click 
browse, it is empty. I have validated the NFS connection and can mount the 
primary storage to a test folder, so it appears that some relation between 
CloudStack and the primary storage volumes has disappeared.

I am in a bit of a panic here, so it is possible I missed something, but are 
there any steps I could take from here?

Thanks

Niclas



Re: Web UI on Safari 4.19.0

2024-02-15 Thread Niclas Lindblom
Thanks,

Yes, I did clear cookies and data, but I am still unable to load it. However, if 
it works for others like yourself, I suppose it must be something local to my 
browser.

Thanks for the response though

Niclas

> On 14 Feb 2024, at 19:17, Jimmy Huybrechts  wrote:
>
> Did you clear your cookies and data for the website serving your portal? 
> That was my issue at first; after clearing that, it was solved.
>
> --
> Jimmy
>
> On 14-02-2024 17:50, Niclas Lindblom wrote:
> Hi all,
>
> I upgraded to 4.19 this weekend and noticed that I can no longer load the Web 
> UI using Safari on my Mac; I only get the CloudStack spinning wheel and the 
> login page never loads. Using Chrome it works fine. Has anyone else seen this, 
> or is it something with my laptop?
>
> Thanks
>
> Niclas





Web UI on Safari 4.19.0

2024-02-14 Thread Niclas Lindblom
Hi all,

I upgraded to 4.19 this weekend and noticed that I can no longer load the Web 
UI using Safari on my Mac; I only get the CloudStack spinning wheel and the 
login page never loads. Using Chrome it works fine. Has anyone else seen this, 
or is it something with my laptop?

Thanks

Niclas



updateNetworkOffering

2023-09-01 Thread Niclas Lindblom
Hello,

I am trying to update a network service offering to set the network rate to 
1000 Mb/s (i.e. 1 Gbit/s):

cmk updateNetworkOffering id=169bcc12-be7d-4175-b5ce-b244f25e42bf 
networkrate=1000

However, the change does not take effect: the JSON output still shows the value 
as 200, and there is no error message.
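
For reference, this is roughly how I am reading the value back (a CloudMonkey 
sketch; I am assuming cmk's filter option and the networkrate field name here):

cmk listNetworkOfferings id=169bcc12-be7d-4175-b5ce-b244f25e42bf filter=id,name,networkrate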

The shared network has VMs running on it. Am I required to clear the network of 
VMs using this offering before it can be updated, or are there any other gotchas 
or ways to update this?

Thanks

Niclas





Re: Terraform Cloudstack module

2022-08-03 Thread Niclas Lindblom
And of course, as soon as I posted this I realised my own stupidity, but I 
thought I would share it.

I store some static key/value pairs in Consul that are retrieved by the Consul 
Terraform module, and there was a case-sensitivity mismatch:

"Large Instance" (the name in CloudStack) was retrieved as "Large instance" from 
the key store, which triggered the Terraform change.

In addition, I had disk_offering set to "custom", which is what the documentation 
states, but it should be "Custom" for Terraform to match the existing disk and 
not trigger a disk change.
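
As an illustration, the values that ended up matching look roughly like this 
(attribute names are from my recollection of the CloudStack provider docs, so 
treat the snippet as a sketch rather than a tested configuration):

# Sketch only - attribute names recalled from the provider docs.
resource "cloudstack_instance" "web" {
  name             = "web-1"
  service_offering = "Large Instance"   # must match CloudStack's casing exactly
  template         = "ubuntu-20-04"
  zone             = "zone1"
}

resource "cloudstack_disk" "data" {
  name               = "web-1-data"
  disk_offering      = "Custom"         # "custom" (lower case) kept re-triggering a change
  size               = 50
  attach             = true
  virtual_machine_id = cloudstack_instance.web.id
  zone               = "zone1"
}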

Niclas 

> On 3 Aug 2022, at 18:46, Niclas Lindblom  
> wrote:
> 
> Hello,
> 
> I am not sure whether the CloudStack Terraform module is community supported 
> through this forum, but I have an issue and I am not sure whether it lies with 
> the module or with Terraform itself. When I deploy a virtual machine and 
> create/attach a disk, it works fine on the first run and the resources are 
> created. However, when I run Terraform again without any code changes, 
> Terraform detects that the resources need to be changed (which isn't the 
> case) and then fails, in my case with the message:
> 
> Error changing the service offering for instance 
> VM-c3a9b229-f817-47ea-8f8b-99fe13dbf003: CloudStack API error 431 
> (CSExceptionErrorCode: 4350): Not upgrading vm VM instance {id: "64", name: 
> "i-2-64-VM", uuid: "c3a9b229-f817-47ea-8f8b-99fe13dbf003", type="User"} since 
> it already has the requested service offering (Large Instance)
> 
> Has anyone seen this before, and does anyone have any advice to offer? 
> 
> Terraform version: 1.2.6
> Cloudstack version: 4.17.0
> Terraform Cloudstack Module: 0.4.0
> 
> Regards
> 
> Niclas





Terraform Cloudstack module

2022-08-03 Thread Niclas Lindblom
Hello,

I am not sure whether the CloudStack Terraform module is community supported 
through this forum, but I have an issue and I am not sure whether it lies with 
the module or with Terraform itself. When I deploy a virtual machine and 
create/attach a disk, it works fine on the first run and the resources are 
created. However, when I run Terraform again without any code changes, Terraform 
detects that the resources need to be changed (which isn't the case) and then 
fails, in my case with the message:

Error changing the service offering for instance 
VM-c3a9b229-f817-47ea-8f8b-99fe13dbf003: CloudStack API error 431 
(CSExceptionErrorCode: 4350): Not upgrading vm VM instance {id: "64", name: 
"i-2-64-VM", uuid: "c3a9b229-f817-47ea-8f8b-99fe13dbf003", type="User"} since 
it already has the requested service offering (Large Instance)

Has anyone seen this before, and does anyone have any advice to offer? 

Terraform version: 1.2.6
Cloudstack version: 4.17.0
Terraform Cloudstack Module: 0.4.0

Regards

Niclas



Re: Management Server HA

2022-06-14 Thread Niclas Lindblom
Thanks,

I did set this to the load balancer VIP, at which point I got the message below. 
In any case, I changed the "strictness" global setting to false, and following 
this it is working. While this is OK for my environment, I would still like to 
understand whether having to relax the strictness setting is due to a 
configuration error on my side or whether this step should be part of the 
documentation.
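
For reference, the two settings I touched can be changed with CloudMonkey along 
these lines (the VIP address below is just an example, and I am assuming the 
strictness setting is ca.plugin.root.auth.strictness):

cmk updateConfiguration name=host value=192.168.20.100
cmk updateConfiguration name=ca.plugin.root.auth.strictness value=false

As far as I understand, the management server and the agents need to be 
restarted/reconnected for the "host" change to take effect.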

Thanks

Niclas

> On 15 Jun 2022, at 08:18, Rohit Yadav  wrote:
>
> It's possible, but then you'll need to configure the "host" global setting 
> (set it to the IP address of your load balancer), and this "host" address 
> must be reachable by your KVM agents (if any) and the SSVM/CPVM agents.
>
>
> Regards.
>
> ____
>
>
>
> From: Niclas Lindblom
> Sent: Thursday, June 09, 2022 19:19
> To: users@cloudstack.apache.org
> Subject: Management Server HA
>
> Hi,
>
> I have 2 development servers that I want to load balance with Traefik, which 
> I have running as a Docker container. I believe I have configured Traefik 
> correctly according to the guidance in the documentation 
> https://docs.cloudstack.apache.org/en/latest/adminguide/reliability.html . 
> However, when I try to log on, the page hangs and I am seeing this error 
> message in the management server logs:
>
> ERROR [c.c.u.n.Link] (AgentManager-SSLHandshakeHandler-70:null) (logid:) SSL 
> error caught during wrap data: Certificate ownership verification failed for 
> client: 192.168.20.6, for local address=/192.168.20.11:8250, remote 
> address=/192.168.20.6:54714.
>
> Am I missing some steps in the configuration with regard to SSL?
>
> Niclas





Management Server HA

2022-06-09 Thread Niclas Lindblom
Hi,

I have 2 development servers that I want to load balance with Traefik, which I 
have running as a Docker container. I believe I have configured Traefik 
correctly according to the guidance in the documentation 
https://docs.cloudstack.apache.org/en/latest/adminguide/reliability.html . 
However, when I try to log on, the page hangs and I am seeing this error message 
in the management server logs:

 ERROR [c.c.u.n.Link] (AgentManager-SSLHandshakeHandler-70:null) (logid:) SSL 
error caught during wrap data: Certificate ownership verification failed for 
client: 192.168.20.6, for local address=/192.168.20.11:8250, remote 
address=/192.168.20.6:54714.

Am I missing some steps in the configuration with regard to SSL?

Niclas



Redhat 9 - kernel panic

2022-05-23 Thread Niclas Lindblom
Hi,

I have downloaded the Red Hat 9 ISO, and when I launch the installation (on KVM) 
it crashes immediately with a kernel panic. Has anyone successfully deployed 
RHEL 9 on KVM, or does anyone have suggestions on what is going wrong here?

Regards

Niclas



Re: Terraform 0.4 error

2022-05-10 Thread Niclas Lindblom
Hi,

I have tried the upgrade option, and I am running terraform init with the 
.terraform directory deleted so the working directory is fresh, but I am still 
getting the same results. Additionally, I am calling a couple of modules, and 
these have the same versions.tf file specifying 0.4.0, so I can't figure out 
where the reference to 0.3.0 comes from.

"terraform providers" gives the following output, with no further clues about 
the reference to 0.3.0:

Providers required by configuration:
.
├── provider[registry.terraform.io/cloudstack/cloudstack] 0.4.0
├── provider[registry.terraform.io/hashicorp/null]
├── provider[registry.terraform.io/hashicorp/consul]
├── module.cs-consul
│   ├── provider[registry.terraform.io/cloudstack/cloudstack] 0.4.0
│   ├── provider[registry.terraform.io/hashicorp/null]
│   └── provider[registry.terraform.io/hashicorp/consul]
└── module.cs-vpc
    ├── provider[registry.terraform.io/cloudstack/cloudstack] 0.4.0
    └── provider[registry.terraform.io/hashicorp/consul]

Providers required by state:

provider[registry.terraform.io/terraform-providers/cloudstack]

provider[registry.terraform.io/hashicorp/consul]

provider[registry.terraform.io/hashicorp/null]
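
For what it is worth, the "Providers required by state" section still points at 
the old terraform-providers namespace, which looks like where the 0.3.0 
reference lives. My understanding is that a stale state reference like this can 
be migrated with something along these lines (a sketch only; I would run it 
against a backed-up state first):

terraform state replace-provider \
  registry.terraform.io/terraform-providers/cloudstack \
  registry.terraform.io/cloudstack/cloudstack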

Niclas

> On 10 May 2022, at 10:05, Pearl d'Silva  wrote:
> 
> Hi,
> 
> Can you check the following:
> 
>  *   Can you try running 'terraform init -upgrade'?
>  *   If the above step doesn't help: in the directory where you are running 
> terraform init, check whether a .terraform directory exists. If it does, clear 
> it, and also delete the .terraform.lock.hcl file.
> 
> Thanks,
> Pearl
> 
> 
> 
> 
> 
> From: Niclas Lindblom
> Sent: Tuesday, May 10, 2022 12:41 PM
> To: users@cloudstack.apache.org
> Subject: Terraform 0.4 error
> 
> Hello,
> 
> I am trying to run Terraform Init against a directory which has the version 
> set to the latest provider
> 
> versions.tf
> 
> terraform {
>   required_providers {
>     cloudstack = {
>       source  = "cloudstack/cloudstack"
>       version = "0.4.0"
>     }
>   }
> }
> 
> I am getting an error message which seems to reference the previous version, 
> 0.3.0:
> 
> Provider registry.terraform.io/terraform-providers/cloudstack v0.3.0 does not 
> have a package available for your current platform, darwin_arm64
> 
> The .terraform directory was deleted before running, and the state is clean. 
> Does anyone know what's going on here? I can't see that I have anything 
> referencing the 0.3.0 version, though it was used previously. Is there a cache 
> somewhere I might have missed?
> 
> Thanks
> 
> Niclas





Terraform 0.4 error

2022-05-10 Thread Niclas Lindblom
Hello,

I am trying to run Terraform Init against a directory which has the version set 
to the latest provider

versions.tf

terraform {
  required_providers {
cloudstack = {
  source = "cloudstack/cloudstack"
  version = "0.4.0"
}
  }
}

I am getting an error message which seems to reference the previous version, 0.3.0:

Provider registry.terraform.io/terraform-providers/cloudstack v0.3.0 does not 
have a package available for your current platform, darwin_arm64

The .terraform directory was deleted before running, and the state is clean. 
Does anyone know what's going on here? I can't see that I have anything 
referencing the 0.3.0 version, though it was used previously. Is there a cache 
somewhere I might have missed?

Thanks

Niclas



Re: CS Kubernetes & persistent storage

2021-02-10 Thread Niclas Lindblom
Thanks Daan,

I ended up using Rook Ceph file system across a 3 node cluster with a virtual 
disk attached to each vm. Works pretty well so far and I can recommend anyone 
with the same requirement to check it out.

Regards

Niclas

> On 10 Feb 2021, at 22:59, Daan Hoogland  wrote:
> 
> sorry for the late answer Niclas,
> We don't have a solution in ACS for this right now. You probably want to
> share access to a mount on some persistent storage across VMs, so a k8s
> container can move between VMs and still work on the same data. I don't
> think a zone-wide storage or any other means in ACS would help with this.
> As you mention, you'll have to run an NFS server and mount it on all
> node VMs manually. Nice feature request (once you figure it out).
> 
>> On Thu, Dec 17, 2020 at 9:00 AM Niclas Lindblom
>>  wrote:
>> 
>> Hi all,
>> 
>> I am testing the Kubernetes plugin for CS 4.14 and I am trying to figure
>> out how to manage persistent storage across multiple nodes so a container
>> can survive being moved from one node to another. The only thing I can
>> think of to make this work would be a separate NFS server that containers
>> can mount to, but perhaps there is a better option ? Are there any best
>> practices around how to implement this in Cloudstack ?
>> 
>> Regards
>> 
>> Niclas
> 
> 
> 
> -- 
> Daan


Re: [DISCUSS] Terraform CloudStack provider

2021-01-26 Thread Niclas Lindblom
I can confirm that the Terraform plugin still works if it is already installed; 
since it was archived, it is no longer downloaded automatically when applying 
unless it has been installed manually.

From the HashiCorp website, it appears it was archived when they moved all 
plugins to their registry; it now needs an owner, plus an email to HashiCorp, to 
be moved into the registry and supported again when running Terraform. I use it 
regularly but do not have the technical skills to maintain the code, so I have 
been hoping this would be resolved.

Niclas

> On 26 Jan 2021, at 18:33, christian.nieph...@zv.fraunhofer.de wrote:
> 
> 
> 
>> On 26. Jan 2021, at 10:45, Wido den Hollander  wrote:
>> 
>> 
>> 
>> On 1/26/21 10:40 AM, christian.nieph...@zv.fraunhofer.de wrote:
>>> On 25. Jan 2021, at 12:40, Abhishek Kumar  
>>> wrote:
 
 Hi all,
 
Terraform CloudStack provider by HashiCorp is archived here 
 https://github.com/hashicorp/terraform-provider-cloudstack
 
 Is anyone using or maintaining it?
>>> 
>>> We are also using it heavily and are somewhat worried about the module 
>>> being archived.
>> 
>> Agreed. But do we know why this has been done? What needs to be done to
>> un-archive it?
>> 
>> If it's just a matter of some love and attention we can maybe arrange
>> something.
>> 
>> Is it technically broken or just abandoned?
> 
> This is just an educated guess, but given that we're not experiencing any 
> technical issues, I believe it has just been abandoned.
> 
> Christian 
> 
> 
>> 
>> Wido
>> 
>>> 
We're aware of the Ansible CloudStack module 
(https://docs.ansible.com/ansible/latest/scenario_guides/guide_cloudstack.html), 
but are there any other alternatives to Terraform that you may be using 
with CloudStack?
>>> 
>>> The Ansible module works quite well. However, one of the advantages of 
>>> Terraform, imho, is that one can easily destroy defined infrastructure with 
>>> one command, while with Ansible 'the destruction' needs to be implemented 
>>> in the playbook. Another advantage is that (at least) GitLab can now 
>>> maintain Terraform states, which supports GitOps approaches quite nicely. 
>>> 
>>> Cheers, Christian 
>>> 
 
 Regards,
 Abhishek
 
 
 
 
>>> 
> 



Cloudstack Terraform Provider

2020-12-26 Thread Niclas Lindblom
Hi all,

Perhaps off-topic for this list but somewhat relevant: it appears, from what I 
can gather, that the CloudStack Terraform provider has been abandoned and has 
now also been taken off the Terraform registry, meaning it is no longer 
discovered when a Terraform configuration is applied. Does anyone here know 
whether it is possible to reference the GitHub source directly, or how I would 
go about continuing to use Terraform to deploy/maintain CloudStack deployments?
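
In case it helps frame the question: my understanding is that Terraform 0.13+ 
can be pointed at a locally built copy of a provider through a CLI configuration 
block roughly like the one below (a sketch only; the mirror path is hypothetical 
and the provider binary would need to be laid out under the usual registry 
directory structure inside it):

# ~/.terraformrc (sketch; the mirror path is a made-up example)
provider_installation {
  filesystem_mirror {
    path    = "/usr/local/share/terraform/providers"
    include = ["registry.terraform.io/terraform-providers/cloudstack"]
  }
  direct {
    exclude = ["registry.terraform.io/terraform-providers/cloudstack"]
  }
}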

Regards

Niclas

Management server added each reboot

2020-12-20 Thread Niclas Lindblom
Hi all,

I have 2 management servers load balanced with HAProxy, and everything appears 
to be working properly. However, each time I reboot a server, a new management 
server entry is added in the UI (database), and I now have a long list of 
servers showing as down plus the 2 currently in use showing as up, all with 
different UUIDs.

I am guessing this has something to do with the HAProxy load balancing, but I am 
not sure how to troubleshoot it and would appreciate any pointers.

Thanks

Niclas

CS Kubernetes & persistent storage

2020-12-17 Thread Niclas Lindblom
Hi all,

I am testing the Kubernetes plugin for CloudStack 4.14 and I am trying to figure 
out how to manage persistent storage across multiple nodes, so that a container 
can survive being moved from one node to another. The only thing I can think of 
to make this work would be a separate NFS server that containers can mount, but 
perhaps there is a better option? Are there any best practices around how to 
implement this in CloudStack?

Regards

Niclas 

Re: Loadbalancer rule - open both TCP and UDP

2020-11-19 Thread Niclas Lindblom
Issue #4481 submitted.

I tried creating port forwarding rules, which do allow two rules to be created, 
one for TCP and one for UDP respectively, so it feels like the behaviour should 
be the same for a load balancer.
Niclas

> On 19 Nov 2020, at 21:09, Daan Hoogland  wrote:
> 
> I think it just never came up even though port 53 and others have similar
> issues. It should also be an issue for port forwarding. So whether it is a
> lack of feature or a bug is open to discussion, but the issue is there.
> please create an issue (or PR) on github and we can handle it there.
> 
> On Thu, Nov 19, 2020 at 1:14 PM Niclas Lindblom
>  wrote:
> 
>> This creates a rule with no protocol defined
>> 
>> name = test
>> id = 1e6b0dc6-897f-47fc-ac9f-a9c9707a6630
>> account = admin
>> algorithm = source
>> cidrlist =
>> domain = ROOT
>> domainid = b6155e47-64e7-11e9-b6e7-f2f9c859b60a
>> fordisplay = True
>> networkid = 299aace4-a5c5-46f4-9ae7-92c86ded0cef
>> privateport = 800
>> publicip = 192.168.30.185
>> publicipid = 2c49bd09-cd6b-44d4-93a5-7082ead298e5
>> publicport = 800
>> state = Add
>> tags:
>> zoneid = bd43ff6e-ecaf-45ad-955c-9b1e28b5aeee
>> zonename = mydc
>> 
>> 
>> The reason I started digging into this is that I have a rule for
>> HashiCorp Consul traffic which is created using Terraform with no protocol
>> specified and appears as blank in the UI protocol column. The communication
>> isn't working properly, I get some errors in the log, and I noticed that
>> the ports required are both TCP and UDP. Since the traffic seems to be working
>> on TCP, I decided to add UDP manually as part of my troubleshooting and came
>> across this.
>> 
>> Niclas
>> 
>>> On 19 Nov 2020, at 19:52, Daan Hoogland  wrote:
>>> 
>>> can you remove the tcp rule and then try:
>>>> createLoadBalancerRule algorithm=source name=test privateport=800
>>> publicport=800 networkid=299aace4-a5c5-46f4-9ae7-92c86ded0cea
>>> publicipid=2c49bd00-cd6b-44d4-93a5-7082ead298e0
>>> without the protocol?
>>> 
>>> On Thu, Nov 19, 2020 at 11:07 AM Niclas Lindblom
>>>  wrote:
>>> 
>>>> I tested this again using cloudmonkey by first creating a rule on port
>> 800
>>>> using tcp and then repeated the command with udp
>>>> 
>>>> createLoadBalancerRule algorithm=source name=test privateport=800
>>>> publicport=800 networkid=299aace4-a5c5-46f4-9ae7-92c86ded0cea
>>>> publicipid=2c49bd00-cd6b-44d4-93a5-7082ead298e0 protocol=udp
>>>> 
>>>> and I get the message
>>>> 
>>>> The range specified, 800-800, conflicts with rule 4214 which has 800-800
>>>> 
>>>> Is this supposed to work so we are looking at a bug here ?
>>>> 
>>>> Niclas
>>>> 
>>>>> On 19 Nov 2020, at 17:05, Daan Hoogland 
>> wrote:
>>>>> 
>>>>> Niclas, that doesn't sound good. I am assuming you use the UI and the
>> VR
>>>> as
>>>>> loadbalancer.
>>>>> if you look at the API [1], you'll find that protocol is actually not a
>>>>> required parameter.  Can you;
>>>>> 1. check with dev-tools how the call is made?
>>>>> 2. try adding it through the API directly (using cloudmonkey or
>> something
>>>>> like that)?
>>>>> 
>>>>> [1]
>>>>> 
>>>> 
>> http://cloudstack.apache.org/api/apidocs-4.14/apis/createLoadBalancerRule.html
>>>>> 
>>>>> 
>>>>> On Thu, Nov 19, 2020 at 7:45 AM Niclas Lindblom
>>>>>  wrote:
>>>>> 
>>>>>> Hi,
>>>>>> 
>>>>>> I need to add a load balancer rule for a specific port for both tcp
>> and
>>>>>> udp. In the drop down I can only select one or the other and I am not
>>>> able
>>>>>> to add 2 rules (one for each protocol) on the same port as I get a
>>>> message
>>>>>> that there’s a conflict with existing rule. How do I achieve opening a
>>>> port
>>>>>> for both tcp/udp into a VPC ?
>>>>>> 
>>>>>> Thanks
>>>>>> 
>>>>>> Niclas
>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> Daan
>>>> 
>>>> 
>>> 
>>> --
>>> Daan
>> 
>> 
> 
> -- 
> Daan



Re: Loadbalancer rule - open both TCP and UDP

2020-11-19 Thread Niclas Lindblom
This creates a rule with no protocol defined

name = test
id = 1e6b0dc6-897f-47fc-ac9f-a9c9707a6630
account = admin
algorithm = source
cidrlist = 
domain = ROOT
domainid = b6155e47-64e7-11e9-b6e7-f2f9c859b60a
fordisplay = True
networkid = 299aace4-a5c5-46f4-9ae7-92c86ded0cef
privateport = 800
publicip = 192.168.30.185
publicipid = 2c49bd09-cd6b-44d4-93a5-7082ead298e5
publicport = 800
state = Add
tags:
zoneid = bd43ff6e-ecaf-45ad-955c-9b1e28b5aeee
zonename = mydc


The reason I started digging into this is that I have a rule for HashiCorp 
Consul traffic which is created using Terraform with no protocol specified and 
appears as blank in the UI protocol column. The communication isn't working 
properly, I get some errors in the log, and I noticed that the ports required 
are both TCP and UDP. Since the traffic seems to be working on TCP, I decided to 
add UDP manually as part of my troubleshooting and came across this.

Niclas

> On 19 Nov 2020, at 19:52, Daan Hoogland  wrote:
> 
> can you remove the tcp rule and then try:
>> createLoadBalancerRule algorithm=source name=test privateport=800
> publicport=800 networkid=299aace4-a5c5-46f4-9ae7-92c86ded0cea
> publicipid=2c49bd00-cd6b-44d4-93a5-7082ead298e0
> without the protocol?
> 
> On Thu, Nov 19, 2020 at 11:07 AM Niclas Lindblom
>  wrote:
> 
>> I tested this again using cloudmonkey by first creating a rule on port 800
>> using tcp and then repeated the command with udp
>> 
>> createLoadBalancerRule algorithm=source name=test privateport=800
>> publicport=800 networkid=299aace4-a5c5-46f4-9ae7-92c86ded0cea
>> publicipid=2c49bd00-cd6b-44d4-93a5-7082ead298e0 protocol=udp
>> 
>> and I get the message
>> 
>> The range specified, 800-800, conflicts with rule 4214 which has 800-800
>> 
>> Is this supposed to work so we are looking at a bug here ?
>> 
>> Niclas
>> 
>>> On 19 Nov 2020, at 17:05, Daan Hoogland  wrote:
>>> 
>>> Niclas, that doesn't sound good. I am assuming you use the UI and the VR
>> as
>>> loadbalancer.
>>> if you look at the API [1], you'll find that protocol is actually not a
>>> required parameter.  Can you;
>>> 1. check with dev-tools how the call is made?
>>> 2. try adding it through the API directly (using cloudmonkey or something
>>> like that)?
>>> 
>>> [1]
>>> 
>> http://cloudstack.apache.org/api/apidocs-4.14/apis/createLoadBalancerRule.html
>>> 
>>> 
>>> On Thu, Nov 19, 2020 at 7:45 AM Niclas Lindblom
>>>  wrote:
>>> 
>>>> Hi,
>>>> 
>>>> I need to add a load balancer rule for a specific port for both tcp and
>>>> udp. In the drop down I can only select one or the other and I am not
>> able
>>>> to add 2 rules (one for each protocol) on the same port as I get a
>> message
>>>> that there’s a conflict with existing rule. How do I achieve opening a
>> port
>>>> for both tcp/udp into a VPC ?
>>>> 
>>>> Thanks
>>>> 
>>>> Niclas
>>> 
>>> 
>>> 
>>> --
>>> Daan
>> 
>> 
> 
> -- 
> Daan



Re: Loadbalancer rule - open both TCP and UDP

2020-11-19 Thread Niclas Lindblom
I tested this again using CloudMonkey, first creating a rule on port 800 using 
TCP and then repeating the command with UDP:

createLoadBalancerRule algorithm=source name=test privateport=800 
publicport=800 networkid=299aace4-a5c5-46f4-9ae7-92c86ded0cea 
publicipid=2c49bd00-cd6b-44d4-93a5-7082ead298e0 protocol=udp

and I get the message

The range specified, 800-800, conflicts with rule 4214 which has 800-800 

Is this supposed to work, meaning we are looking at a bug here?

Niclas

> On 19 Nov 2020, at 17:05, Daan Hoogland  wrote:
> 
> Niclas, that doesn't sound good. I am assuming you use the UI and the VR as
> loadbalancer.
> if you look at the API [1], you'll find that protocol is actually not a
> required parameter.  Can you;
> 1. check with dev-tools how the call is made?
> 2. try adding it through the API directly (using cloudmonkey or something
> like that)?
> 
> [1]
> http://cloudstack.apache.org/api/apidocs-4.14/apis/createLoadBalancerRule.html
> 
> 
> On Thu, Nov 19, 2020 at 7:45 AM Niclas Lindblom
>  wrote:
> 
>> Hi,
>> 
>> I need to add a load balancer rule for a specific port for both tcp and
>> udp. In the drop down I can only select one or the other and I am not able
>> to add 2 rules (one for each protocol) on the same port as I get a message
>> that there’s a conflict with existing rule. How do I achieve opening a port
>> for both tcp/udp into a VPC ?
>> 
>> Thanks
>> 
>> Niclas
> 
> 
> 
> -- 
> Daan



Loadbalancer rule - open both TCP and UDP

2020-11-18 Thread Niclas Lindblom
Hi,

I need to add a load balancer rule for a specific port for both TCP and UDP. In 
the drop-down I can only select one or the other, and I am not able to add 2 
rules (one for each protocol) on the same port, as I get a message that there is 
a conflict with an existing rule. How do I achieve opening a port for both TCP 
and UDP into a VPC?

Thanks

Niclas

Re: Help with publishing and consuming events using rabbitmq

2020-11-18 Thread Niclas Lindblom
Thanks,

I actually got it working; the problem I had was with creating the routing-key 
filter in RabbitMQ, which isn't well documented at the moment. However, there is 
currently a bug where the string sent by CloudStack is not formatted correctly, 
which has been logged as issue #4468.

Niclas

> On 19 Nov 2020, at 00:52, Gabriel Beims Bräscher  wrote:
> 
> I did play a bit with Rabbit MQ + CloudStack some time ago. Here follows
> some details regarding it.
> 
> Note that this example is really simplified and bound to steps for a "proof
> of concept".
> 
> *I. Configuring RabbitMQ + CloudStack:*
> 
> 1. create file
> /etc/cloudstack/management/META-INF/cloudstack/core/spring-event-bus-context.xml
> 2. edit spring-event-bus-context.xml to contain the following data:
> 2.1 the server that is running the RabbitMQ: localhost
> 2.2 the port on which RabbitMQ server is running: 5672
> 2.3 username associated with the account to access the RabbitMQ server:
> guest
> 2.4 password associated with the username of the account to access the
> RabbitMQ server: guest
> 2.5 The exchange name on the RabbitMQ server where CloudStack events are
> published: cloudstack-events
> 
> - - spring-event-bus-context.xml:
> 
> <beans xmlns="http://www.springframework.org/schema/beans"
>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>        xmlns:context="http://www.springframework.org/schema/context"
>        xmlns:aop="http://www.springframework.org/schema/aop"
>        xsi:schemaLocation="http://www.springframework.org/schema/beans
>                            http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
>                            http://www.springframework.org/schema/aop
>                            http://www.springframework.org/schema/aop/spring-aop-3.0.xsd
>                            http://www.springframework.org/schema/context
>                            http://www.springframework.org/schema/context/spring-context-3.0.xsd">
> 
>     <bean id="eventNotificationBus"
>           class="org.apache.cloudstack.mom.rabbitmq.RabbitMQEventBus">
>         <property name="name" value="eventNotificationBus"/>
>         <property name="server" value="localhost"/>
>         <property name="port" value="5672"/>
>         <property name="username" value="guest"/>
>         <property name="password" value="guest"/>
>         <property name="exchange" value="cloudstack-events"/>
>     </bean>
> </beans>
> 
> 3. Enable the rabbitmq_management
> rabbitmq-plugins enable rabbitmq_management
> 
> 4. restart rabbitmq and cloudstack services
> 
> 
> systemctl restart rabbitmq-server.service
> systemctl restart cloudstack-management.service
> 
> 5. Connect to the management server, tunnelling local port 15672:
> ssh -L 15672:localhost:15672 user@cloudstack-management-host
> 
> 6. In this example the UI would be available at http://localhost:15672
> 
> *II. Binding the exchange ‘cloudstack-events’ with a queue*
> 
> CloudStack creates the exchange ‘cloudstack-events’ which will receive
> messages containing CloudStack events; however, there are no queues yet.
> To create a queue and bind with cloudstack-events the following steps are
> needed:
> 
> 1. Go to Queues tab and add a queue, e.g. 'cloudstack-queue’
> 2. Go to Exchanges tab and Bind to queue cloudstack-queue with the desired
> ‘Routing key’.
> 
> 
> *III. Routing keys*
> 
> The routing key is a list of words, delimited by a period (".").
> CloudStack builds routing keys according to each event type, some examples
> are:
> a) management-server.ActionEvent.ACCOUNT-CREATE.Account.b9117aa2-9432-4dc4-a055-fee45c428239
> b) management-server.UsageEvent.VOLUME-CREATE.com-cloud-storage-Volume.1232e3e6-2576-4983-bde5-b904eba9e4cb
> c) management-server.UsageEvent.VM-CREATE.com-cloud-vm-VirtualMachine.1232e3e-9432-4dc4-a055-fee45c428239
> 
> Some examples of routing keys that match CloudStack events:
> a) A pound symbol (“#”) indicates a match on zero or more words; thus, it
> will match any possible set of words;
> b) Asterisk (“*”) matching any word and the period (“.”) delimiting:
> ‘*.*.*.*.*’;
> c) expressions to filter a specific set of events, e.g. matching VM-CREATE
> or UsageEvent: ‘management-server.UsageEvent.VM-CREATE.#’ or
> ‘management-server.UsageEvent.#’.
> 
> Cheers,
> Gabriel.
> 
> On Mon, 9 Nov 2020 at 13:38, 

Help with publishing and consuming events using rabbitmq

2020-11-09 Thread Niclas Lindblom
Hi,

I would appreciate some help with setting up events to be consumed over AMQP. 
I have limited knowledge of RabbitMQ and have got to the following point: 

1. I have 2 cloudstack management servers (this is not a production environment)

2. I have configured both with the spring-event-bus-context.xml as per the 4.14 
documentation and my understanding is that there is no further software to be 
installed.

3. I have a rabbitmq server on a separate server and I can see the 
cloudstack-events Exchange being created once I restart the management service

4. I have recreated the exchange as fanout and bound it to a queue 

I can’t see any events coming through when setting up a consumer using the 
basic python code from rabbitmq’s tutorial, but if I publish an event using the 
same example code to the same queue it does come through.
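
For reference, the consumer I am testing with is essentially the following 
(a sketch based on the RabbitMQ/pika tutorial code mentioned above; it assumes 
the cloudstack-events exchange already exists, declares a throw-away queue and 
binds it with a catch-all routing key):

import pika

# Connect to the RabbitMQ host (a separate server in my setup).
connection = pika.BlockingConnection(pika.ConnectionParameters(host='rabbitmq-host'))
channel = connection.channel()

# Bind a temporary, exclusive queue to the exchange CloudStack publishes to.
result = channel.queue_declare(queue='', exclusive=True)
queue_name = result.method.queue
channel.queue_bind(exchange='cloudstack-events', queue=queue_name, routing_key='#')

def callback(ch, method, properties, body):
    # Print the routing key and the raw event payload.
    print(method.routing_key, body)

channel.basic_consume(queue=queue_name, on_message_callback=callback, auto_ack=True)
channel.start_consuming()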


Questions:

1. Am I missing something in my configuration?

2. What should I set the routing key to? With fanout I believe the routing key 
is ignored, but I would like to know for reference.

3. How can I confirm that messages are being sent from the server? The only 
evidence I have is the exchange being created once the management service was 
restarted.

4. I can't see anything pointing towards messages being sent in the management 
log; should I?

The endgame here is to consume these events with a StackStorm sensor to trigger 
automation; any pointers would be appreciated.

Thanks

Niclas