Re: [Site-to-Site IPSEC Slow]

2017-10-05 Thread Gian Paolo Buono
Hi,
thank you all, I solved it by changing the encryption from 3des-md5 to aes128-md5.

Bye

On 10/02/2017 07:13 PM, Glenn Wagner wrote:

Hi,

Can you check the auth.log file on the VRs to see if you got any errors? Also, 
are you using any private gateways with these VPCs?

Regards
Glenn



glenn.wag...@shapeblue.com
www.shapeblue.com
Winter Suite, 1st Floor, The Avenues, Drama Street, Somerset West, Cape Town 
7129, South Africa
@shapeblue



From: Andrija Panic [mailto:andrija.pa...@gmail.com]
Sent: Monday, 02 October 2017 12:53 PM
To: users@cloudstack.apache.org
Cc: Glenn Wagner 
Subject: Re: [Site-to-Site IPSEC Slow]

Hi Gian,

can you please try the same test with iperf?

I would check the remote side (Openswan on Debian), since these are bad numbers, 
and we never hit a similar issue with ACS 4.5 or ACS 4.8 (not yet using 4.9).

FYI, between 2 VPC sites (S-2-S VPN), I was able to get 340 Mbps out of a 1 Gbps 
internet connection, so you can't always expect full link performance, simply 
because of the IPsec protocol overhead (this was with the VRs resized to 4 x 2 GHz 
CPUs, just for test/fun)...

Best
Andrija
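A minimal iperf3 run for the test suggested above might look like this (the server address is a placeholder for the tunnel-side IP of the remote VM; classic iperf2 uses `iperf -s` / `iperf -c` similarly):

```shell
# On the receiving side (a VM inside the remote network), start an iperf3 server:
iperf3 -s

# On the sending side, run a 30-second TCP test against the server's
# tunnel-side address (10.1.1.10 is a placeholder), reporting every second:
iperf3 -c 10.1.1.10 -t 30 -i 1

# Then reverse the direction (server sends, client receives) to compare
# throughput on both paths through the tunnel:
iperf3 -c 10.1.1.10 -t 30 -R
```

Running both directions separately helps confirm whether the asymmetry (4 MB/s in, 12 MB/s out) follows the tunnel or the underlying links.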




On 26 September 2017 at 15:36, Gian Paolo Buono <gianpaolo.bu...@gesca.it> wrote:
Hi Glenn,

1. ACS version: 4.9.1
2. ACS OS version: CentOS 7
3. Hypervisor: XenServer 6.5
4. Storage type: NFS
5. Storage network: 10Gb

The test was done with netcat...

thanks


On 09/26/2017 08:23 AM, Glenn Wagner wrote:

Hi,

Can you give us some information about your environment?

1. ACS version: 4.9.2
2. ACS OS version: Ubuntu 14.04 / Ubuntu 16.04, CentOS 6/7
3. Hypervisor: XenServer, KVM, VMware 5.5/6.0
4. Storage type: NFS, iSCSI, Fibre Channel
5. Storage network speed: 1Gb, 10Gb

Regards
Glenn






-----Original Message-----
From: Gian Paolo Buono [mailto:gianpaolo.bu...@gesca.it]
Sent: Monday, 25 September 2017 11:54 PM
To: users@cloudstack.apache.org
Subject: [Site-to-Site IPSEC Slow]

Hi all,
I have an IPsec tunnel established between two sites (a CloudStack VPC on one 
side, Openswan on Debian on the other), and both sites get 100 Mbps down / 100 Mbps up.
When I send traffic into the tunnel the maximum bandwidth is 4 MB/s; when I send 
traffic outside the tunnel the bandwidth is 12 MB/s. Any ideas?

Regards
Gian Paolo




--

Andrija Panić




Merge VHD

2017-10-05 Thread Gian Paolo Buono
Hi,

I use CloudStack with XenServer and I would like to move some disks to another 
XenServer pool.
To do that I would like to collapse the chain of disks into a single VHD that I 
can move. I'm doing some tests with a disk that has this example chain:

vhd-util scan -f -m 8d430cda-c109-4c4b-acfd-3492b8f35cd7.vhd -p
vhd=8d430cda-c109-4c4b-acfd-3492b8f35cd7.vhd capacity=8589934592 size=20992 
hidden=0 parent=f8ab7260-c1ae-4007-85ec-aa2ba4a04127.vhd (not found in scan)

I would like to create a new disk called merge.vhd that contains both VHDs, and I 
tried to follow this:

https://shankerbalan.net/blog/recover-xenserver-vhd-volumes/

[root@xenser]# mkdir appo ; cp 8d430cda-c109-4c4b-acfd-3492b8f35cd7.vhd  
f8ab7260-c1ae-4007-85ec-aa2ba4a04127.vhd appo ; cd appo
[root@xenser]# vhd-util coalesce -n 8d430cda-c109-4c4b-acfd-3492b8f35cd7.vhd

but I cannot get a single file out of it.

Thanks


Re: Merge VHD

2017-10-05 Thread Dag Sonstebo
Hi Gian Paolo,

Can you elaborate? What errors are you seeing, and what doesn't work?

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 05/10/2017, 08:40, "Gian Paolo Buono"  wrote:




dag.sonst...@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue
  
 



Re: Merge VHD

2017-10-05 Thread Mārtiņš Jakubovičs

Hello Gian,

I would suggest just copying the VDI:

xe vdi-copy uuid=<vdi-uuid> sr-uuid=<destination-sr-uuid>

This will create new VDI in destination SR without VHD chain.
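Spelled out with the VDI UUID from this thread and hypothetical name labels and destination SR UUID, that could look like:

```shell
# Look up the VDI and destination SR UUIDs first (the name labels here
# are examples, not values from this thread):
xe vdi-list name-label="ROOT-42" params=uuid
xe sr-list name-label="NFS-pool2" params=uuid

# Copy the VDI into the destination SR; the result is a single,
# fully-coalesced VDI with no parent chain (SR UUID is a placeholder):
xe vdi-copy uuid=8d430cda-c109-4c4b-acfd-3492b8f35cd7 \
    sr-uuid=0b7a3b1c-1111-2222-3333-444455556666
```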


On 2017.10.05. 10:40, Gian Paolo Buono wrote:





Re: Merge VHD

2017-10-05 Thread Rafael Weingärtner
Gian, I did that once, but I used the following command:
vhd-util coalesce -p -n <child>.vhd -o /tmp/<merged>.vhd

You need the full chain of the '<child>.vhd' in the folder where you are
running this command.
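As a quick sanity check before coalescing, you can walk the parent chain from the `vhd-util scan` output. This sketch just extracts the `parent=` field from the sample scan line quoted in this thread; an empty result would mean the VHD is the root of its chain:

```shell
# Sample `vhd-util scan -f -m <child>.vhd -p` output line from this thread:
scan_line='vhd=8d430cda-c109-4c4b-acfd-3492b8f35cd7.vhd capacity=8589934592 size=20992 hidden=0 parent=f8ab7260-c1ae-4007-85ec-aa2ba4a04127.vhd'

# Extract the parent VHD name. If this prints nothing, the chain ends here;
# if it prints a name, that file must be present before coalescing.
parent=$(printf '%s\n' "$scan_line" | grep -o 'parent=[^ ]*' | cut -d= -f2)
echo "$parent"
```

Repeating this on each reported parent until nothing is printed confirms the whole chain is in the working folder.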

On Thu, Oct 5, 2017 at 8:07 AM, Mārtiņš Jakubovičs  wrote:



-- 
Rafael Weingärtner


Re: Merge VHD

2017-10-05 Thread Gian Paolo Buono
Great... thank you!!!


On 10/05/2017 01:12 PM, Rafael Weingärtner wrote:



Cloud-stack 4.9.3 for 7.x & 4.10.0 build For Centos 6.x

2017-10-05 Thread Barbadekar, Anil
Hi Admin,

Kindly upload the RPMs of 4.9.3 for CentOS 7.x.

Also, let us know whether 4.10.0 builds for CentOS 6.x are available and can 
co-exist with CentOS 6.x/7.x hypervisors.

Regards,
Anil Barbadekar


Re: Cloud-stack 4.9.3 for 7.x & 4.10.0 build For Centos 6.x

2017-10-05 Thread Dag Sonstebo
Hi Anil,

Long time no see, hope you are well.

RPM repo URLs for both 4.9.3 and 4.10 are listed on 
http://cloudstack.apache.org/downloads.html 
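For reference, a CentOS 7 repo definition follows the pattern below; the exact baseurl and version path should be taken from the downloads page, so treat this as a sketch rather than an authoritative URL:

```
# /etc/yum.repos.d/cloudstack.repo (baseurl pattern to be verified against
# the downloads page)
[cloudstack]
name=CloudStack 4.9
baseurl=http://download.cloudstack.org/centos/7/4.9/
enabled=1
gpgcheck=0
```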

Regarding hypervisors, there is no requirement to match OS versions between the 
CloudStack management server and the KVM hosts.

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 05/10/2017, 16:17, "Barbadekar, Anil"  wrote:







Re: somebody experience with bare metal?

2017-10-05 Thread Harikrishna Patnala
This does not include configuring the VLAN on the switch, and I don't think that 
is integrated into the baremetal deployment flow, so we may need to write our own.


> On 11-Sep-2017, at 4:37 PM, S. Brüseke - proIO GmbH  
> wrote:
> 
> Hi Harikrishna,
> 
> thank you for your response! We are using Juniper switches. I found this 
> here: 
> https://www.juniper.net/documentation/en_US/release-independent/junos/topics/topic-map/cloudstack-network-guru-plugin.html
> Any experience with it? It looks a little bit outdated.
> 
> Mit freundlichen Grüßen / With kind regards,
> 
> Swen
> 
> -----Original Message-----
> From: Harikrishna Patnala [mailto:harikrishna.patn...@accelerite.com] 
> Sent: Monday, 11 September 2017 11:01
> To: users@cloudstack.apache.org; S. Brüseke - proIO GmbH 
> 
> Subject: Re: somebody experience with bare metal?
> 
> Hi,
> 
> We have pretty good experience with baremetal deployments in both basic and 
> advanced zones.
> 
> Yes, as you said, currently CloudStack supports only the Dell Force10 switch 
> for dynamic VLAN configuration. 
> Since this is a plugin model, one can develop support for other switches; 
> the only requirement is that the switch supports configuring VLANs 
> dynamically.
> 
> Here is the interface to implement
> https://github.com/apache/cloudstack/blob/master/plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/BaremetalSwitchBackend.java
> 
> Regards,
> Harikrishna
> 
> 
>> On 04-Sep-2017, at 3:45 PM, S. Brüseke - proIO GmbH  
>> wrote:
>> 
>> Hello,
>> 
>> I have 2 questions and hope somebody can share his/her experience with me:
>> 1) Does somebody have experience with bare metal servers in an advanced 
>> network environment?
>> 
>> 2) If I understand the docs correctly, bare metal servers will only work 
>> with Force10 switches, because of the automated network configuration of 
>> the physical servers' uplink ports.
>> We are using Juniper EX switches and I found a plugin called Network Guru 
>> Plugin from Juniper Networks 
>> (http://www.juniper.net/documentation/en_US/release-independent/junos/topics/topic-map/cloudstack-network-guru-plugin.html).
>> Does anybody know of or use this plugin?
>> 
>> Thanks to all!
>> 
>> Mit freundlichen Grüßen / With kind regards,
>> 
>> Swen
>> 
>> 
>> 
>> - proIO GmbH -
>> Geschäftsführer: Swen Brüseke
>> Sitz der Gesellschaft: Frankfurt am Main
>> 
>> USt-IdNr. DE 267 075 918
>> Registergericht: Frankfurt am Main - HRB 86239
>> 
>> Diese E-Mail enthält vertrauliche und/oder rechtlich geschützte 
>> Informationen. 
>> Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrtümlich 
>> erhalten haben, informieren Sie bitte sofort den Absender und vernichten Sie 
>> diese Mail.
>> Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail sind 
>> nicht gestattet. 
>> 
>> This e-mail may contain confidential and/or privileged information. 
>> If you are not the intended recipient (or have received this e-mail in 
>> error) please notify the sender immediately and destroy this e-mail.
>> Any unauthorized copying, disclosure or distribution of the material in this 
>> e-mail is strictly forbidden. 
>> 
>> 
> 
> DISCLAIMER
> ==
> This e-mail may contain privileged and confidential information which is the 
> property of Accelerite, a Persistent Systems business. It is intended only 
> for the use of the individual or entity to which it is addressed. If you are 
> not the intended recipient, you are not authorized to read, retain, copy, 
> print, distribute or use this message. If you have received this 
> communication in error, please notify the sender and delete all copies of 
> this message. Accelerite, a Persistent Systems business does not accept any 
> liability for virus infected mails.
> 
> 
> 