Upgrade 4.2.1 --> 4.3: console VM does not work after the upgrade

2014-04-20 Thread Юрий Карпель
I am currently using CloudStack 4.3 with Citrix XenServer 6.2.
After upgrading from 4.2.1 to 4.3, the console proxy VM does not restart:

cloudstack-sysvmadm -a

Stopping and starting 1 secondary storage vm(s)...
Done stopping and starting secondary storage vm(s)

Stopping and starting 1 console proxy vm(s)...
ERROR: Failed to start console proxy vm with id 2

Done stopping and starting console proxy vm(s) .

The console VM is running, but the console does not work:


2014-04-21 09:50:02,915 DEBUG [c.c.c.ConsoleProxyManagerImpl]
(consoleproxy-1:ctx-d7c1c131) Zone 1 is ready to launch console proxy
2014-04-21 09:50:05,038 DEBUG [c.c.a.t.Request]
(DirectAgent-357:ctx-eec0d179) Seq 2-973078659: Processing:  { Ans: ,
MgmtId: 51443468558854, via: 2, Ver: v1, Flags: 10,
[{"com.cloud.agent.api.GetVncPortAnswer":{"address":"consoleurl=
https://10.30.10.42/console?uuid=20a5363d-6f83-254c-d801-e60a6bb09070&sessionref=OpaqueRef:4771b76c-ad13-e7cf-27ba-dc28c2d25af4","port":-1,"result":true,"wait":0}}]
}
2014-04-21 09:50:05,038 DEBUG [c.c.a.t.Request] (http-6443-exec-14:null)
Seq 2-973078659: Received:  { Ans: , MgmtId: 51443468558854, via: 2, Ver:
v1, Flags: 10, { GetVncPortAnswer } }
2014-04-21 09:50:05,038 DEBUG [c.c.s.ConsoleProxyServlet]
(http-6443-exec-14:null) Port info consoleurl=
https://10.30.10.42/console?uuid=20a5363d-6f83-254c-d801-e60a6bb09070&sessionref=OpaqueRef:4771b76c-ad13-e7cf-27ba-dc28c2d25af4
2014-04-21 09:50:05,038 INFO  [c.c.s.ConsoleProxyServlet]
(http-6443-exec-14:null) Parse host info returned from executing
GetVNCPortCommand. host info: consoleurl=
https://10.30.10.42/console?uuid=20a5363d-6f83-254c-d801-e60a6bb09070&sessionref=OpaqueRef:4771b76c-ad13-e7cf-27ba-dc28c2d25af4
2014-04-21 09:50:05,043 DEBUG [c.c.s.ConsoleProxyServlet]
(http-6443-exec-14:null) Compose console url: https://***-174-***-140.*.
realhostip.com/ajax?token=3pWQchuSmjavikwTa3pUXkmGpgfDywM1KGgjRjiiotEw7Cpg2s7o_dkIEmaoxGlv12dcgmESDqiuGT9pXVJaulXgWwR6oPr7Suzadj85C8skUiSgOUwRffBiQ2uWGAKpgirAOUOyZwqdNh19kK73fu3WeZOiSZN3aMXwKUggne3Zf4sP4K-SXfq52p0L-WigHpxpBSXVeTNpMgJJaDiqIqXX7cdOMj15wsWsR_yLmVtYLRpRjRrgy5YHuLnw4AKcSyZERA6zE9kwbVWT8xYvXpRshzQG2ZItlrPdSBYAIKgzqJVifIxXgIXfZ7DJRG9zM6q1-ddz_oJR-QXM37ZukRuZF-LLaEri4zSeVxbVEvBs-x3ztn5HyeV9Qgun4OquEyYhGyD55Azlf1aSU6GNwBBfmNACEj78BDnr65RJlaAZTN-kgt1AYnLntq1X9MciSZQcG_fCWG9fFHjKmjkJveKXKnjJVwKd75V3Au9Ljqo
2014-04-21 09:50:05,043 DEBUG [c.c.s.ConsoleProxyServlet]
(http-6443-exec-14:null) the console url is ::
vm-testhttps://**-174-***-140.*.
realhostip.com/ajax?token=3pWQchuSmjavikwTa3pUXkmGpgfDywM1KGgjRjiiotEw7Cpg2s7o_dkIEmaoxGlv12dcgmESDqiuGT9pXVJaulXgWwR6oPr7Suzadj85C8skUiSgOUwRffBiQ2uWGAKpgirAOUOyZwqdNh19kK73fu3WeZOiSZN3aMXwKUggne3Zf4sP4K-SXfq52p0L-WigHpxpBSXVeTNpMgJJaDiqIqXX7cdOMj15wsWsR_yLmVtYLRpRjRrgy5YHuLnw4AKcSyZERA6zE9kwbVWT8xYvXpRshzQG2ZItlrPdSBYAIKgzqJVifIxXgIXfZ7DJRG9zM6q1-ddz_oJR-QXM37ZukRuZF-LLaEri4zSeVxbVEvBs-x3ztn5HyeV9Qgun4OquEyYhGyD55Azlf1aSU6GNwBBfmNACEj78BDnr65RJlaAZTN-kgt1AYnLntq1X9MciSZQcG_fCWG9fFHjKmjkJveKXKnjJVwKd75V3Au9Ljqo
">


Re: Cloudstack 4.3 instances can't access outside world

2014-04-20 Thread Serg
Hi Suresh,

Thanks for your update. 
Is there already a submitted bug (bug id?)? Will it be fixed in 4.3.1 or
committed to 4.4?



--
Serg 

> On 21 Apr 2014, at 08:50, Suresh Sadhu  wrote:
> 
> It's temporary; it's a regression caused by another last-minute commit.
> Due to this, traffic labels are not being honored.
> 
> Regards
> Sadhu
> 
> 
> 
> -Original Message-
> From: Serg Senko [mailto:kernc...@gmail.com] 
> Sent: 21 April 2014 11:12
> To: users@cloudstack.apache.org
> Subject: Re: Cloudstack 4.3 instances can't access outside world
> 
> Hi,
> 
> What does "In 4.3 traffic labels are not being honored" mean?
> Is it temporary, or are "traffic labels" deprecated now?
> 
> 
> Does this mean that anyone with a KVM traffic-labels environment can't upgrade to 4.3.0?
> 
> 
> 
> 
> 
> On Thu, Apr 10, 2014 at 5:05 PM, Suresh Sadhu wrote:
> 
>> Did you use traffic name labels?
>> 
>> In 4.3 traffic labels are not being honored; by default the interface is
>> attached to the default traffic label (e.g. in KVM that is cloudbr0). Due to
>> this the public network is unreachable, i.e. if eth2 was attached to cloudbr1
>> before the upgrade, it is attached to cloudbr0 after it. Maybe you are hitting this issue.
>> 
>> Regards
>> sadhu
>> 
>> 
>> -Original Message-
>> From: motty cruz [mailto:motty.c...@gmail.com]
>> Sent: 10 April 2014 19:28
>> To: users@cloudstack.apache.org
>> Subject: Re: Cloudstack 4.3 instances can't access outside world
>> 
>> Yes, I can ping the VR; also, after the upgrade the VR has four interfaces:
>> eth0 on the instance subnet, eth1, eth2 for a public IP, and eth3 for a public IP.
>> 
>> 
>>> On Wed, Apr 9, 2014 at 10:35 PM, Erik Weber  wrote:
>>> 
>>> Can you ping the VR? Log on to the VR, and get the iptables rules. 
>>> How do they look?
>>> 
>>> Erik Weber
>>> On 10 Apr 2014 00:21, "motty cruz" wrote:
>>> 
 I did add egress rules and rebooted the network, but no success, so I removed
 those rules, and still nothing.
 
 I am lost.
 
 
 On Wed, Apr 9, 2014 at 9:08 AM, Erik Weber 
>> wrote:
 
> Did you remove the egress rule again? If not, try that.
> 
> Erik
> On 9 Apr 2014 15:49, "motty cruz" wrote:
> 
>> yes, I tried adding the rule and restarting the network and router, but no
>> success!
>> 
>> 
>> On Tue, Apr 8, 2014 at 11:16 PM, Erik Weber 
>> 
 wrote:
>> 
>>> Try adding an egress rule, and removing it again.
>>> 
>>> We experience the same, but have so far believed it was
>>> because we
> changed
>>> the default rule from deny to allow after accounts were made.
>>> 
>>> 
>>> On Tue, Apr 8, 2014 at 11:14 PM, motty cruz 
>>> 
>> wrote:
>>> 
 I have two isolated networks; both virtual routers can ping
>>> anywhere,
> but
>>> the
 Instances behind the virtual router can't ping or access 
 the
> internet.
 
 
 
 
 On Tue, Apr 8, 2014 at 10:38 AM, motty cruz <
>>> motty.c...@gmail.com>
>>> wrote:
 
> Hello,
> I'm having issues with VMs unable to access outside world.
> I
>>> can
> ping
> gateway, also when I log in to virtual router, I am able 
> to
>>> ping
> google.com or anywhere.
> in the Egress rules I am allowing all. reboot network 
> and
>>> virtual
>>> router
> does not help.
> 
> VMs were able to access outside before upgrading from 
> 4.2 to
>>> 4.3.
> 
> any ideas?
> 
> 
> 
> --
> ttyv0 "/usr/libexec/gmail Pc"  webcons on secure


Re: SSVM cannot ping DNS

2014-04-20 Thread Shanker Balan
Hi Ameen,

comments inline.

On 19-Apr-2014, at 11:02 pm, Ameen Ali  wrote:

> Dear CloudStackers,
>
> I've been having this issue for quite a while. My SSVM is not able to ping
> the Management Server or the DNS servers, or to resolve download.cloud.com.
> Therefore I am not able to download any template, even though I'm hosting
> them on the Management Server itself. I tried disabling iptables on both the
> management server and the host, but still nothing happens.

Ok.

> Notice the following:
> 172.16.96.2 is my gateway
> 172.16.96.10 is my management server (Cloudstack 4.0.2)
> 172.16.96.40 is my host (XenServer 6.0)

Is this an advanced Zone? If so,

1) How many physical interfaces does your hypervisor have?
2) Since you are using Xen, what traffic labels have you assigned
 while creating the Zone?
3) Have you created the traffic labels on the XenServer hosts?
4) What subnet have you used as the public range?


>
> The following is the output of ssvm_check.sh.
>
> First DNS server is  8.8.8.8
> PING 8.8.8.8 (8.8.8.8): 56 data bytes
> 64 bytes from 172.16.96.46: Destination Host Unreachable
> Vr HL TOS  Len   ID Flg  off TTL Pro  cks  Src  Dst Data
> 4  5  00 5400    0 0040  40  01 5b1e 172.16.96.46  8.8.8.8
> 64 bytes from 172.16.96.46: Destination Host Unreachable
> Vr HL TOS  Len   ID Flg  off TTL Pro  cks  Src  Dst Data
> 4  5  00 5400    0 0040  40  01 5b1e 172.16.96.46  8.8.8.8
> --- 8.8.8.8 ping statistics ---
> 2 packets transmitted, 0 packets received, 100% packet loss
> WARNING: cannot ping DNS server
> route follows
> Kernel IP routing table
> Destination   Gateway       Genmask          Flags  Metric  Ref  Use  Iface
> 8.8.4.4       172.16.96.2   255.255.255.255  UGH    0       0    0    eth1
> 8.8.8.8       172.16.96.2   255.255.255.255  UGH    0       0    0    eth1
> 172.16.96.0   0.0.0.0       255.255.255.0    U      0       0    0    eth1
> 172.16.96.0   0.0.0.0       255.255.255.0    U      0       0    0    eth2
> 172.16.96.0   0.0.0.0       255.255.255.0    U      0       0    0    eth3
> 169.254.0.0   0.0.0.0       255.255.0.0      U      0       0    0    eth0
> 0.0.0.0       172.16.96.2   0.0.0.0          UG     0       0    0    eth2


According to your routing table, the DNS servers are expected to
be reachable via the 172.16.96.2 gateway IP using the eth1 interface
on the SSVM.

Are you able to ping 172.16.96.2 from the SSVM at all?

If ping to 172.16.96.2 is failing, please double-check your traffic labels
against the physical interfaces. For the network to work correctly,
the SSVM's interfaces have to be bridged to the correct physical interfaces
on the hypervisor. CloudStack does this mapping using the traffic labels
you specified at the time of zone creation.
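
A quick way to verify that mapping on a XenServer host (s-1-VM is an example
name; use your SSVM's actual instance name):

xe network-list params=uuid,name-label,bridge   # traffic labels vs. bridges
xe vm-vif-list vm=s-1-VM                        # which networks the SSVM VIFs are on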


> 
> ERROR: DNS not resolving download.cloud.com
> resolv.conf follows
> nameserver 8.8.8.8
> nameserver 8.8.4.4
> nameserver 8.8.8.8
> nameserver 8.8.4.4
>
>
>
> The following is part of the output of the command: "grep -i -E
> 'exc|unable|fail|invalid|leak|invalid|warn'
> /var/log/cloud/management/management-server.log":
> 2014-04-17 05:51:09,356 WARN  [xen.resource.CitrixResourceBase]
> (DirectAgent-4:null) Failed to program default network rules for s-4-VM
> 2014-04-17 05:51:14,782 WARN  [network.element.VpcVirtualRouterElement]
> (consoleproxy-1:null) Network Ntwk[204|Guest|6] is not associated with any
> VPC
> 2014-04-17 05:51:14,799 WARN  [network.element.VpcVirtualRouterElement]
> (consoleproxy-1:null) Network Ntwk[202|Control|3] is not associated with
> any VPC
> 2014-04-17 05:51:14,819 WARN  [network.element.VpcVirtualRouterElement]
> (consoleproxy-1:null) Network Ntwk[201|Management|2] is not associated with
> any VPC
> 2014-04-17 05:51:56,275 WARN  [xen.resource.CitrixResourceBase]
> (DirectAgent-163:null) Ignoring VM v-5-VM in transition state starting.
> 2014-04-17 05:51:58,387 WARN  [xen.resource.CitrixResourceBase]
> (DirectAgent-96:null) Failed to program default network rules for v-5-VM
> 2014-04-17 06:24:51,091 DEBUG [xen.resource.XenServerConnectionPool]
> (DirectAgent-235:null) XmlRpcException for method: host.get_by_uuid due to
> Failed to read server's response: connect timed out.  Reconnecting...retry=1
> 2014-04-17 06:24:51,092 DEBUG [xen.resource.XenServerConnectionPool]
> (DirectAgent-235:null) connect through IP(172.16.96.40 for
> pool(e024bf43-7efc-daed-3644-6b7c9a5a6ceb) is broken due to
> org.apache.xmlrpc.XmlRpcException: Failed to read server's response:
> connect timed out
> 2014-04-17 06:52:47,943 DEBUG [xen.resource.XenServerConnectionPool]
> (DirectAgent-276:null) XmlRpcException for method: SR.scan due to Failed to
> create input stream: Read timed out.  Reconnecting...retry=1
> 2014-04-17 06:52:47,943 WARN  [xen.resourc

Unable to create System VMs on CS 4.3

2014-04-20 Thread Tejas Gadaria
Hi,

I am using CS 4.3 with ESXi and vCenter 5.5. While creating system VMs, it
gives the error below.

With reference to this bug fix,
https://issues.apache.org/jira/browse/CLOUDSTACK-4875, I thought it should
already be included in CS 4.3.

2014-04-21 11:09:25,158 DEBUG [c.c.a.m.DirectAgentAttache]
(DirectAgent-82:ctx-9d7c063a) Seq 1-1056899145: Executing request
2014-04-21 11:09:25,305 INFO  [c.c.s.r.VmwareStorageProcessor]
(DirectAgent-82:ctx-9d7c063a 10.129.151.67) creating full clone from
template
2014-04-21 11:09:28,741 ERROR [c.c.s.r.VmwareStorageProcessor]
(DirectAgent-82:ctx-9d7c063a 10.129.151.67) clone volume from base image
failed due to Exception: java.lang.RuntimeException
Message: The name 'ROOT-15' already exists.

java.lang.RuntimeException: The name 'ROOT-15' already exists.
at
com.cloud.hypervisor.vmware.util.VmwareClient.waitForTask(VmwareClient.java:336)
at
com.cloud.hypervisor.vmware.mo.VirtualMachineMO.createFullClone(VirtualMachineMO.java:619)
at
com.cloud.storage.resource.VmwareStorageProcessor.createVMFullClone(VmwareStorageProcessor.java:266)
at
com.cloud.storage.resource.VmwareStorageProcessor.cloneVolumeFromBaseTemplate(VmwareStorageProcessor.java:338)
at
com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.execute(StorageSubsystemCommandHandlerBase.java:78)
at
com.cloud.storage.resource.VmwareStorageSubsystemCommandHandler.execute(VmwareStorageSubsystemCommandHandler.java:171)
at
com.cloud.storage.resource.StorageSubsystemCommandHandlerBase.handleStorageCommands(StorageSubsystemCommandHandlerBase.java:50)
at
com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:571)
at
com.cloud.agent.manager.DirectAgentAttache$Task.runInContext(DirectAgentAttache.java:216)
at
org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
at
org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
at
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
at
org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
at
org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:679)
2014-04-21 11:09:28,742 DEBUG [c.c.a.m.DirectAgentAttache]
(DirectAgent-82:ctx-9d7c063a) Seq 1-1056899145: Response Received:
2014-04-21 11:09:28,742 DEBUG [c.c.a.t.Request]
(DirectAgent-82:ctx-9d7c063a) Seq 1-1056899145: Processing:  { Ans: ,
MgmtId: 345049296663, via: 1, Ver: v1, Flags: 10,
[{"org.apache.cloudstack.storage.command.CopyCmdAnswer":{"result":false,"details":"java.lang.RuntimeException:
The name 'ROOT-15' already exists.","wait":0}}] }
2014-04-21 11:09:28,742 DEBUG [c.c.a.t.Request]
(consoleproxy-1:ctx-b084e44b) Seq 1-1056899145: Received:  { Ans: , MgmtId:
345049296663, via: 1, Ver: v1, Flags: 10, { CopyCmdAnswer } }
2014-04-21 11:09:28,743 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
(RouterStatusMonitor-1:ctx-cc7f41cc) Found 0 routers to update status.
2014-04-21 11:09:28,744 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl]
(RouterStatusMonitor-1:ctx-cc7f41cc) Found 0 networks to update RvR status.
2014-04-21 11:09:28,751 WARN  [o.a.c.s.d.ObjectInDataStoreManagerImpl]
(consoleproxy-1:ctx-b084e44b) Unsupported data object (VOLUME,
org.apache.cloudstack.storage.datastore.PrimaryDataStoreImpl@14ca1ff7), no
need to delete from object in store ref table
2014-04-21 11:09:28,751 DEBUG [o.a.c.e.o.VolumeOrchestrator]
(consoleproxy-1:ctx-b084e44b) Unable to create
Vol[15|vm=15|ROOT]:java.lang.RuntimeException: The name 'ROOT-15' already
exists.
2014-04-21 11:09:28,751 DEBUG [o.a.c.e.o.VolumeOrchestrator]
(consoleproxy-1:ctx-b084e44b) Unable to create
Vol[15|vm=15|ROOT]:java.lang.RuntimeException: The name 'ROOT-15' already
exists.
2014-04-21 11:09:28,752 INFO  [c.c.v.VirtualMachineManagerImpl]
(consoleproxy-1:ctx-b084e44b) Unable to contact resource.
com.cloud.exception.StorageUnavailableException: Resource [StoragePool:1]
is unreachable: Unable to create
Vol[15|vm=15|ROOT]:java.lang.RuntimeException: The name 'ROOT-15' already
exists.
at
org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.recreateVolume(VolumeOrchestrator.java:

RE: Cloudstack 4.3 instances can't access outside world

2014-04-20 Thread Suresh Sadhu
It's temporary; it's a regression caused by another last-minute commit. Due
to this, traffic labels are not being honored.
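
A quick way to see which bridge each router interface actually landed on after
the upgrade (r-256-VM is an example name; use your own router's):

virsh domiflist r-256-VM   # one line per interface, with its source bridge
brctl show                 # which host interfaces are enslaved to each bridge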

Regards
Sadhu



-Original Message-
From: Serg Senko [mailto:kernc...@gmail.com] 
Sent: 21 April 2014 11:12
To: users@cloudstack.apache.org
Subject: Re: Cloudstack 4.3 instances can't access outside world

Hi,

What does "In 4.3 traffic labels are not being honored" mean?
Is it temporary, or are "traffic labels" deprecated now?


Does this mean that anyone with a KVM traffic-labels environment can't upgrade to 4.3.0?





On Thu, Apr 10, 2014 at 5:05 PM, Suresh Sadhu wrote:

> Did you use traffic name labels?
>
> In 4.3 traffic labels are not being honored; by default the interface is
> attached to the default traffic label (e.g. in KVM that is cloudbr0). Due to
> this the public network is unreachable, i.e. if eth2 was attached to cloudbr1
> before the upgrade, it is attached to cloudbr0 after it. Maybe you are hitting this issue.
>
> Regards
> sadhu
>
>
> -Original Message-
> From: motty cruz [mailto:motty.c...@gmail.com]
> Sent: 10 April 2014 19:28
> To: users@cloudstack.apache.org
> Subject: Re: Cloudstack 4.3 instances can't access outside world
>
> Yes, I can ping the VR; also, after the upgrade the VR has four interfaces:
> eth0 on the instance subnet, eth1, eth2 for a public IP, and eth3 for a public IP.
>
>
> On Wed, Apr 9, 2014 at 10:35 PM, Erik Weber  wrote:
>
> > Can you ping the VR? Log on to the VR, and get the iptables rules. 
> > How do they look?
> >
> > Erik Weber
> > On 10 Apr 2014 00:21, "motty cruz" wrote:
> >
> > > I did add egress rules and rebooted the network, but no success, so I removed
> > > those rules, and still nothing.
> > >
> > > I am lost.
> > >
> > >
> > > On Wed, Apr 9, 2014 at 9:08 AM, Erik Weber 
> wrote:
> > >
> > > > Did you remove the egress rule again? If not, try that.
> > > >
> > > > Erik
> > > > On 9 Apr 2014 15:49, "motty cruz" wrote:
> > > >
> > > > > yes, I tried adding the rule and restarting the network and router, but no
> success!
> > > > >
> > > > >
> > > > > On Tue, Apr 8, 2014 at 11:16 PM, Erik Weber 
> > > > > 
> > > wrote:
> > > > >
> > > > > > Try adding an egress rule, and removing it again.
> > > > > >
> > > > > > We experience the same, but have so far believed it was
> > > > > > because we
> > > > changed
> > > > > > the default rule from deny to allow after accounts were made.
> > > > > >
> > > > > >
> > > > > > On Tue, Apr 8, 2014 at 11:14 PM, motty cruz 
> > > > > > 
> > > > > wrote:
> > > > > >
> > > > > > > I have two isolated networks; both virtual routers can ping
> > anywhere,
> > > > but
> > > > > > the
> > > > > > > Instances behind the virtual router can't ping or access 
> > > > > > > the
> > > > internet.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On Tue, Apr 8, 2014 at 10:38 AM, motty cruz <
> > motty.c...@gmail.com>
> > > > > > wrote:
> > > > > > >
> > > > > > > > Hello,
> > > > > > > > I'm having issues with VMs unable to access outside world.
> > > > > > > > I
> > can
> > > > ping
> > > > > > > > gateway, also when I log in to virtual router, I am able 
> > > > > > > > to
> > ping
> > > > > > > > google.com or anywhere.
> > > > > > > > in the Egress rules I am allowing all. reboot network 
> > > > > > > > and
> > virtual
> > > > > > router
> > > > > > > > does not help.
> > > > > > > >
> > > > > > > > VMs were able to access outside before upgrading from 
> > > > > > > > 4.2 to
> > 4.3.
> > > > > > > >
> > > > > > > > any ideas?
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>



--
ttyv0 "/usr/libexec/gmail Pc"  webcons on secure


Re: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent

2014-04-20 Thread Marcus
Sorry, actually I see the 'connection refused' is just your own test
after the fact. By that time the vm may be shut down, so connection
refused would make sense.

What happens if you do this:

1. 'virsh dumpxml v-1-VM > /tmp/v-1-VM.xml' while it is running
2. stop the cloudstack agent
3. 'virsh destroy v-1-VM'
4. 'virsh create /tmp/v-1-VM.xml'

Then try connecting to that VM via VNC to watch it boot up, or run
that command manually, repeatedly. Does it time out?

In the end this may not mean much, because in CentOS 6.x that command
is retried over and over while the system vm is coming up anyway (in
other words, some failures are expected). It could be related, but it
could also be that the system vm is failing to come up for any other
reason, and this is just the thing you noticed.
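
For reference, a quick way to inspect the socket itself while the VM is running
(paths taken from the error above):

ls -l /var/lib/libvirt/qemu/v-1-VM.agent          # does the socket exist at all?
virsh dumpxml v-1-VM | grep -A 3 "channel type"   # is the virtio-serial channel defined?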

On Sun, Apr 20, 2014 at 11:25 PM, Marcus  wrote:
> You may want to look in the qemu log of the vm to see if there's
> something deeper going on, perhaps the qemu process is not fully
> starting due to some other issue. /var/log/libvirt/qemu/v-1-VM.log, or
> something like that.
>
> On Sun, Apr 20, 2014 at 11:22 PM, Marcus  wrote:
>> No, it has nothing to do with ssh or libvirt daemon. It's the literal
>> unix socket that is created for virtio-serial communication when the
>> qemu process starts. The question is why the system is refusing access
>> to the socket. I assume this is being attempted as root.
>>
>> On Sat, Apr 19, 2014 at 9:58 AM, Nux!  wrote:
>>> On 19.04.2014 15:24, Giri Prasad wrote:
>>>

 # grep listen_ /etc/libvirt/libvirtd.conf
 listen_tls=0
 listen_tcp=1
 #listen_addr = "192.XX.XX.X"
 listen_addr = "0.0.0.0"

 #
 /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl
 -n v-1-VM -p

 %template=domP%type=consoleproxy%host=192.XXX.XX.5%port=8250%name=v-1-VM%zone=1%pod=1%guid=Proxy.1%proxy_vm=1%disable_rp_filter=true%eth2ip=192.XXX.XX.173%eth2mask=255.255.255.0%gateway=192.XXX.XX.1%eth0ip=169.254.0.173%eth0mask=255.255.0.0%eth1ip=192.XXX.XX.166%eth1mask=255.255.255.0%mgmtcidr=192.XXX.XX.0/24%localgw=192.XXX.XX.1%internaldns1=192.XXX.XX.1%dns1=192.XXX.XX.1
 .
 ERROR: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent -
 Connection refused
>>>
>>>
>>> Do you have "-l" or "--listen" as LIBVIRTD_ARGS in /etc/sysconfig/libvirtd?
>>>
>>> (kind of stabbing in the dark)
>>>
>>>
>>> --
>>> Sent from the Delta quadrant using Borg technology!
>>>
>>> Nux!
>>> www.nux.ro


Re: Upgrade from 4.1.1 to 4.3.0 ( KVM, Traffic labels, Adv. VLAN ) VR's bug

2014-04-20 Thread Serg Senko
Hi,

Yes sure,

root@r-256-VM:~# cat /etc/cloudstack-release

Cloudstack Release 4.3.0 (64-bit) Wed Jan 15 00:27:19 UTC 2014


I also tried destroying and re-creating the VR; it came up with the same problem.

The "cloudstack-sysvmadm" script did not receive a success answer from the VRs.


I have finished rolling back to 4.1.1: the VRs started successfully and
everything works again. But how do I upgrade to 4.3?

This bug is not documented in the known issues.








On Mon, Apr 21, 2014 at 8:16 AM, Marcus  wrote:

> No idea, but have you verified that the vm is running the new system
> vm template? What happens if you destroy the router and let it
> recreate?
>
> On Sun, Apr 20, 2014 at 6:20 PM, Serg Senko  wrote:
> > Hi
> >
> > After the upgrade and restarting the system VMs,
> > all VRs started with broken network configuration: egress rules stopped
> > working,
> > as did some static NAT rules.
> >
> >
> > Here is "ip addr show" from one of the VRs:
> >
> > root@r-256-VM:~# ip addr show
> >
> > 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
> >
> > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >
> > inet 127.0.0.1/8 scope host lo
> >
> > inet6 ::1/128 scope host
> >
> >valid_lft forever preferred_lft forever
> >
> > 2: eth0:  mtu 1500 qdisc pfifo_fast
> state
> > UP qlen 1000
> >
> > link/ether 02:00:6b:16:00:09 brd ff:ff:ff:ff:ff:ff
> >
> > inet 10.1.1.1/24 brd 10.1.1.255 scope global eth0
> >
> > inet6 fe80::6bff:fe16:9/64 scope link
> >
> >valid_lft forever preferred_lft forever
> >
> > 3: eth1:  mtu 1500 qdisc pfifo_fast
> state
> > UP qlen 1000
> >
> > link/ether 0e:00:a9:fe:01:38 brd ff:ff:ff:ff:ff:ff
> >
> > inet 169.254.1.56/16 brd 169.254.255.255 scope global eth1
> >
> > inet6 fe80::c00:a9ff:fefe:138/64 scope link
> >
> >valid_lft forever preferred_lft forever
> >
> > 4: eth2:  mtu 1500 qdisc pfifo_fast
> state
> > UP qlen 1000
> >
> > link/ether 06:06:ec:00:00:0e brd ff:ff:ff:ff:ff:ff
> >
> > inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth2
> >
> > inet6 fe80::406:ecff:fe00:e/64 scope link
> >
> >valid_lft forever preferred_lft forever
> >
> > 5: eth3:  mtu 1500 qdisc pfifo_fast
> state
> > UP qlen 1000
> >
> > link/ether 06:81:44:00:00:0e brd ff:ff:ff:ff:ff:ff
> >
> > inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth3
> >
> > inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary
> eth3
> >
> > inet XXX.XXX.XXX.228/26 brd 46.165.231.255 scope global secondary
> eth3
> >
> > inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary
> eth3
> >
> > inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary
> eth3
> >
> > inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary
> eth3
> >
> > inet6 fe80::481:44ff:fe00:e/64 scope link
> >
> >valid_lft forever preferred_lft forever
> >
> > 6: eth4:  mtu 1500 qdisc pfifo_fast
> state
> > UP qlen 1000
> >
> > link/ether 06:e5:36:00:00:0e brd ff:ff:ff:ff:ff:ff
> >
> > inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth4
> >
> > inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary
> eth4
> >
> > inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary
> eth4
> >
> > inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary
> eth4
> >
> > inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary
> eth4
> >
> > inet6 fe80::4e5:36ff:fe00:e/64 scope link
> >
> >valid_lft forever preferred_lft forever
> >
> > 7: eth5:  mtu 1500 qdisc pfifo_fast
> state
> > UP qlen 1000
> >
> > link/ether 06:6f:3a:00:00:0e brd ff:ff:ff:ff:ff:ff
> >
> > inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth5
> >
> > inet XXX.XXX.XXX.228/26 brd 46.165.231.255 scope global secondary
> eth5
> >
> > inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary
> eth5
> >
> > inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary
> eth5
> >
> > inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary
> eth5
> >
> > inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary
> eth5
> >
> > inet6 fe80::46f:3aff:fe00:e/64 scope link
> >
> >valid_lft forever preferred_lft forever
> >
> > 8: eth6:  mtu 1500 qdisc pfifo_fast
> state
> > UP qlen 1000
> >
> > link/ether 06:b0:30:00:00:0e brd ff:ff:ff:ff:ff:ff
> >
> > inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth6
> >
> > inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary
> eth6
> >
> > inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary
> eth6
> >
> > inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary
> eth6
> >
> > inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary
> eth6
> >
> > inet6 fe80::4b0:30ff:fe00:e/64 scope link
> >
> >valid_lft forever preferred_lft forever
> >
> > 9: eth7:  mtu 1500 qdisc pfifo_fast
> state
> > UP ql

Re: Cloudstack 4.3 instances can't access outside world

2014-04-20 Thread Serg Senko
Hi,

What does "In 4.3 traffic labels are not being honored" mean?
Is it temporary, or are "traffic labels" deprecated now?


Does this mean that anyone with a KVM traffic-labels environment can't upgrade to
4.3.0?





On Thu, Apr 10, 2014 at 5:05 PM, Suresh Sadhu wrote:

> Did you use traffic name labels?
>
> In 4.3 traffic labels are not being honored; by default the interface is
> attached to the default traffic label (e.g. in KVM that is cloudbr0). Due to
> this the public network is unreachable, i.e. if eth2 was attached to cloudbr1
> before the upgrade, it is attached to cloudbr0 after it. Maybe you are hitting this issue.
>
> Regards
> sadhu
>
>
> -Original Message-
> From: motty cruz [mailto:motty.c...@gmail.com]
> Sent: 10 April 2014 19:28
> To: users@cloudstack.apache.org
> Subject: Re: Cloudstack 4.3 instances can't access outside world
>
> Yes, I can ping the VR; also, after the upgrade the VR has four interfaces: eth0
> on the instance subnet, eth1, eth2 for a public IP, and eth3 for a public IP.
>
>
> On Wed, Apr 9, 2014 at 10:35 PM, Erik Weber  wrote:
>
> > Can you ping the VR? Log on to the VR, and get the iptables rules. How
> > do they look?
> >
> > Erik Weber
> > On 10 Apr 2014 00:21, "motty cruz" wrote:
> >
> > > I did add egress rules and rebooted the network, but no success, so I removed
> > > those rules, and still nothing.
> > >
> > > I am lost.
> > >
> > >
> > > On Wed, Apr 9, 2014 at 9:08 AM, Erik Weber 
> wrote:
> > >
> > > > Did you remove the egress rule again? If not, try that.
> > > >
> > > > Erik
> > > > On 9 Apr 2014 15:49, "motty cruz" wrote:
> > > >
> > > > > yes, I tried adding the rule and restarting the network and router, but no
> success!
> > > > >
> > > > >
> > > > > On Tue, Apr 8, 2014 at 11:16 PM, Erik Weber
> > > > > 
> > > wrote:
> > > > >
> > > > > > Try adding an egress rule, and removing it again.
> > > > > >
> > > > > > We experience the same, but have so far believed it was because
> > > > > > we
> > > > changed
> > > > > > the default rule from deny to allow after accounts were made.
> > > > > >
> > > > > >
> > > > > > On Tue, Apr 8, 2014 at 11:14 PM, motty cruz
> > > > > > 
> > > > > wrote:
> > > > > >
> > > > > > > I have two isolated networks; both virtual routers can ping
> > anywhere,
> > > > but
> > > > > > the
> > > > > > > Instances behind the virtual router can't ping or access the
> > > > internet.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On Tue, Apr 8, 2014 at 10:38 AM, motty cruz <
> > motty.c...@gmail.com>
> > > > > > wrote:
> > > > > > >
> > > > > > > > Hello,
> > > > > > > > I'm having issues with VMs unable to access outside world.
> > > > > > > > I
> > can
> > > > ping
> > > > > > > > gateway, also when I log in to virtual router, I am able
> > > > > > > > to
> > ping
> > > > > > > > google.com or anywhere.
> > > > > > > > in the Egress rules I am allowing all. reboot network and
> > virtual
> > > > > > router
> > > > > > > > does not help.
> > > > > > > >
> > > > > > > > VMs were able to access outside before upgrading from 4.2
> > > > > > > > to
> > 4.3.
> > > > > > > >
> > > > > > > > any ideas?
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>



-- 
ttyv0 "/usr/libexec/gmail Pc"  webcons on secure


Re: Cloudstack 4.3 instances can't access outside world

2014-04-20 Thread Serg Senko
Hi,

I have the same issue after upgrading from 4.1.1 to 4.3.0.
Take a look: in a CS 4.2 VR you have NICs eth0, eth1, eth2.
In a CS 4.3 VR you have four NICs, where eth2 and eth3 are the same.

How did CS 4.3 pass QA?



On Sat, Apr 12, 2014 at 12:16 AM, motty cruz  wrote:

> I have a testing CloudStack cluster; I destroyed, rebuilt, and upgraded it
> several times, and each time I ran into the same problem: instances behind
> the virtual router were unable to access the outside world.
>
> here is iptables before upgrade, Cloudstack 4.2
>
> # Generated by iptables-save v1.4.14 on Fri Apr 11 19:53:57 2014
> *mangle
> :PREROUTING ACCEPT [2317:1282555]
> :INPUT ACCEPT [409:147015]
> :FORWARD ACCEPT [0:0]
> :OUTPUT ACCEPT [189:29312]
> :POSTROUTING ACCEPT [189:29312]
> :FIREWALL_176.23.23.192 - [0:0]
> :VPN_176.23.23.192 - [0:0]
> -A PREROUTING -d 176.23.23.192/32 -j VPN_176.23.23.192
> -A PREROUTING -d 176.23.23.192/32 -j FIREWALL_176.23.23.192
> -A PREROUTING -m state --state RELATED,ESTABLISHED -j CONNMARK
> --restore-mark --nfmask 0x --ctmask 0x
> -A POSTROUTING -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
> -A FIREWALL_176.23.23.192 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A FIREWALL_176.23.23.192 -j DROP
> -A VPN_176.23.23.192 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A VPN_176.23.23.192 -j RETURN
> COMMIT
> # Completed on Fri Apr 11 19:53:57 2014
> # Generated by iptables-save v1.4.14 on Fri Apr 11 19:53:57 2014
> *filter
> :INPUT DROP [204:117504]
> :FORWARD DROP [0:0]
> :OUTPUT ACCEPT [150:22404]
> :FW_OUTBOUND - [0:0]
> :NETWORK_STATS - [0:0]
> -A INPUT -j NETWORK_STATS
> -A INPUT -d 224.0.0.18/32 -j ACCEPT
> -A INPUT -d 225.0.0.50/32 -j ACCEPT
> -A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A INPUT -i eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A INPUT -i eth2 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A INPUT -p icmp -j ACCEPT
> -A INPUT -i lo -j ACCEPT
> -A INPUT -i eth0 -p udp -m udp --dport 67 -j ACCEPT
> -A INPUT -i eth0 -p udp -m udp --dport 53 -j ACCEPT
> -A INPUT -i eth0 -p tcp -m tcp --dport 53 -j ACCEPT
> -A INPUT -i eth1 -p tcp -m state --state NEW -m tcp --dport 3922 -j ACCEPT
> -A INPUT -i eth0 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
> -A INPUT -s 10.1.1.0/24 -i eth0 -p tcp -m state --state NEW -m tcp --dport
> 8080 -j ACCEPT
> -A FORWARD -j NETWORK_STATS
> -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A FORWARD -i eth0 -o eth0 -m state --state NEW -j ACCEPT
> -A FORWARD -i eth0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A FORWARD -i eth2 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A FORWARD -i eth0 -o eth2 -j FW_OUTBOUND
> -A OUTPUT -j NETWORK_STATS
> -A FW_OUTBOUND -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A NETWORK_STATS -i eth0 -o eth2
> -A NETWORK_STATS -i eth2 -o eth0
> -A NETWORK_STATS ! -i eth0 -o eth2 -p tcp
> -A NETWORK_STATS -i eth2 ! -o eth0 -p tcp
> COMMIT
> # Completed on Fri Apr 11 19:53:57 2014
> # Generated by iptables-save v1.4.14 on Fri Apr 11 19:53:57 2014
> *nat
> :PREROUTING ACCEPT [2078:1204416]
> :INPUT ACCEPT [10:964]
> :OUTPUT ACCEPT [1:338]
> :POSTROUTING ACCEPT [1:338]
> -A POSTROUTING -o eth2 -j SNAT --to-source 176.23.23.192
> COMMIT
> # Completed on Fri Apr 11 19:53:57 2014
>
>
> after upgrading to Cloudstack 4.3
>
>
> :POSTROUTING ACCEPT [211:25828]
> :FIREWALL_176.23.23.192 - [0:0]
> :VPN_176.23.23.192 - [0:0]
> -A PREROUTING -d 176.23.23.192/32 -j VPN_176.23.23.192
> -A PREROUTING -d 176.23.23.192/32 -j FIREWALL_176.23.23.192
> -A PREROUTING -m state --state RELATED,ESTABLISHED -j CONNMARK
> --restore-mark --nfmask 0x --ctmask 0x
> -A POSTROUTING -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
> -A FIREWALL_176.23.23.192 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A FIREWALL_176.23.23.192 -j DROP
> -A VPN_176.23.23.192 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A VPN_176.23.23.192 -j RETURN
> COMMIT
> # Completed on Fri Apr 11 20:49:46 2014
> # Generated by iptables-save v1.4.14 on Fri Apr 11 20:49:46 2014
> *filter
> :INPUT DROP [68:32168]
> :FORWARD DROP [0:0]
> :OUTPUT ACCEPT [81:12516]
> :FW_EGRESS_RULES - [0:0]
> :FW_OUTBOUND - [0:0]
> :NETWORK_STATS - [0:0]
> -A INPUT -j NETWORK_STATS
> -A INPUT -d 224.0.0.18/32 -j ACCEPT
> -A INPUT -d 225.0.0.50/32 -j ACCEPT
> -A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A INPUT -i eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A INPUT -i eth2 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A INPUT -p icmp -j ACCEPT
> -A INPUT -i lo -j ACCEPT
> -A INPUT -i eth0 -p udp -m udp --dport 67 -j ACCEPT
> -A INPUT -i eth0 -p udp -m udp --dport 53 -j ACCEPT
> -A INPUT -i eth0 -p tcp -m tcp --dport 53 -j ACCEPT
> -A INPUT -i eth1 -p tcp -m state --state NEW -m tcp --dport 3922 -j ACCEPT
> -A INPUT -i eth0 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
> -j ACCEPT
> -A FORWARD -j NETWORK_STATS
> -A FORWARD -i eth

RE: Upgrade from 4.1.1 to 4.3.0 ( KVM, Traffic labels, Adv. VLAN ) VR's bug

2014-04-20 Thread Suresh Sadhu
Type brctl show and check whether the public interface of your router is plugged into
cloudbr0 or cloudbr1. If it is plugged into cloudbr0, you need to detach it from
cloudbr0, attach that interface to cloudbr1, and re-apply all the iptables rules.
Take a backup of the iptables rules with the iptables-save command before
performing the attach/detach of interfaces.
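
A rough sketch of those steps (vnet0 is an example; take the actual interface
name from the brctl show output for your router's public NIC):

iptables-save > /root/iptables.backup   # back up the current rules first
brctl show                              # find which bridge holds the VIF
brctl delif cloudbr0 vnet0              # detach it from the wrong bridge
brctl addif cloudbr1 vnet0              # attach it to the correct bridge
iptables-restore < /root/iptables.backup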


 
Regards
Sadhu



-Original Message-
From: Marcus [mailto:shadow...@gmail.com] 
Sent: 21 April 2014 10:46
To: d...@cloudstack.apache.org
Cc: users@cloudstack.apache.org
Subject: Re: Upgrade from 4.1.1 to 4.3.0 ( KVM, Traffic labels, Adv. VLAN ) 
VR's bug

No idea, but have you verified that the vm is running the new system vm 
template? What happens if you destroy the router and let it recreate?

On Sun, Apr 20, 2014 at 6:20 PM, Serg Senko  wrote:
> Hi
>
> After the upgrade and restarting the system VMs, all VRs started with broken
> network configuration: egress rules stopped working,
> as did some static NAT rules.
>
>
> Here is "ip addr show" from one of the VRs:
>
> root@r-256-VM:~# ip addr show
>
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
>
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>
> inet 127.0.0.1/8 scope host lo
>
> inet6 ::1/128 scope host
>
>valid_lft forever preferred_lft forever
>
> 2: eth0:  mtu 1500 qdisc pfifo_fast 
> state UP qlen 1000
>
> link/ether 02:00:6b:16:00:09 brd ff:ff:ff:ff:ff:ff
>
> inet 10.1.1.1/24 brd 10.1.1.255 scope global eth0
>
> inet6 fe80::6bff:fe16:9/64 scope link
>
>valid_lft forever preferred_lft forever
>
> 3: eth1:  mtu 1500 qdisc pfifo_fast 
> state UP qlen 1000
>
> link/ether 0e:00:a9:fe:01:38 brd ff:ff:ff:ff:ff:ff
>
> inet 169.254.1.56/16 brd 169.254.255.255 scope global eth1
>
> inet6 fe80::c00:a9ff:fefe:138/64 scope link
>
>valid_lft forever preferred_lft forever
>
> 4: eth2:  mtu 1500 qdisc pfifo_fast 
> state UP qlen 1000
>
> link/ether 06:06:ec:00:00:0e brd ff:ff:ff:ff:ff:ff
>
> inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth2
>
> inet6 fe80::406:ecff:fe00:e/64 scope link
>
>valid_lft forever preferred_lft forever
>
> 5: eth3:  mtu 1500 qdisc pfifo_fast 
> state UP qlen 1000
>
> link/ether 06:81:44:00:00:0e brd ff:ff:ff:ff:ff:ff
>
> inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth3
>
> inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary 
> eth3
>
> inet XXX.XXX.XXX.228/26 brd 46.165.231.255 scope global secondary 
> eth3
>
> inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary 
> eth3
>
> inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary 
> eth3
>
> inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary 
> eth3
>
> inet6 fe80::481:44ff:fe00:e/64 scope link
>
>valid_lft forever preferred_lft forever
>
> 6: eth4:  mtu 1500 qdisc pfifo_fast 
> state UP qlen 1000
>
> link/ether 06:e5:36:00:00:0e brd ff:ff:ff:ff:ff:ff
>
> inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth4
>
> inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary 
> eth4
>
> inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary 
> eth4
>
> inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary 
> eth4
>
> inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary 
> eth4
>
> inet6 fe80::4e5:36ff:fe00:e/64 scope link
>
>valid_lft forever preferred_lft forever
>
> 7: eth5:  mtu 1500 qdisc pfifo_fast 
> state UP qlen 1000
>
> link/ether 06:6f:3a:00:00:0e brd ff:ff:ff:ff:ff:ff
>
> inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth5
>
> inet XXX.XXX.XXX.228/26 brd 46.165.231.255 scope global secondary 
> eth5
>
> inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary 
> eth5
>
> inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary 
> eth5
>
> inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary 
> eth5
>
> inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary 
> eth5
>
> inet6 fe80::46f:3aff:fe00:e/64 scope link
>
>valid_lft forever preferred_lft forever
>
> 8: eth6:  mtu 1500 qdisc pfifo_fast 
> state UP qlen 1000
>
> link/ether 06:b0:30:00:00:0e brd ff:ff:ff:ff:ff:ff
>
> inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth6
>
> inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary 
> eth6
>
> inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary 
> eth6
>
> inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary 
> eth6
>
> inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary 
> eth6
>
> inet6 fe80::4b0:30ff:fe00:e/64 scope link
>
>valid_lft forever preferred_lft forever
>
> 9: eth7:  mtu 1500 qdisc pfifo_fast 
> state UP qlen 1000
>
> link/ether 06:26:b4:00:00:0e brd ff:ff:ff:ff:ff:ff
>
> inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth7
>
>   

Re: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent

2014-04-20 Thread Marcus
You may want to look in the qemu log of the vm to see if there's
something deeper going on, perhaps the qemu process is not fully
starting due to some other issue. /var/log/libvirt/qemu/v-1-VM.log, or
something like that.

On Sun, Apr 20, 2014 at 11:22 PM, Marcus  wrote:
> No, it has nothing to do with ssh or libvirt daemon. It's the literal
> unix socket that is created for virtio-serial communication when the
> qemu process starts. The question is why the system is refusing access
> to the socket. I assume this is being attempted as root.
>
> On Sat, Apr 19, 2014 at 9:58 AM, Nux!  wrote:
>> On 19.04.2014 15:24, Giri Prasad wrote:
>>
>>>
>>> # grep listen_ /etc/libvirt/libvirtd.conf
>>> listen_tls=0
>>> listen_tcp=1
>>> #listen_addr = "192.XX.XX.X"
>>> listen_addr = "0.0.0.0"
>>>
>>> #
>>> /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl
>>> -n v-1-VM -p
>>>
>>> %template=domP%type=consoleproxy%host=192.XXX.XX.5%port=8250%name=v-1-VM%zone=1%pod=1%guid=Proxy.1%proxy_vm=1%disable_rp_filter=true%eth2ip=192.XXX.XX.173%eth2mask=255.255.255.0%gateway=192.XXX.XX.1%eth0ip=169.254.0.173%eth0mask=255.255.0.0%eth1ip=192.XXX.XX.166%eth1mask=255.255.255.0%mgmtcidr=192.XXX.XX.0/24%localgw=192.XXX.XX.1%internaldns1=192.XXX.XX.1%dns1=192.XXX.XX.1
>>> .
>>> ERROR: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent -
>>> Connection refused
>>
>>
>> Do you have "-l" or "--listen" as LIBVIRTD_ARGS in /etc/sysconfig/libvirtd?
>>
>> (kind of stabbing in the dark)
>>
>>
>> --
>> Sent from the Delta quadrant using Borg technology!
>>
>> Nux!
>> www.nux.ro


Re: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent

2014-04-20 Thread Marcus
No, it has nothing to do with ssh or libvirt daemon. It's the literal
unix socket that is created for virtio-serial communication when the
qemu process starts. The question is why the system is refusing access
to the socket. I assume this is being attempted as root.

On Sat, Apr 19, 2014 at 9:58 AM, Nux!  wrote:
> On 19.04.2014 15:24, Giri Prasad wrote:
>
>>
>> # grep listen_ /etc/libvirt/libvirtd.conf
>> listen_tls=0
>> listen_tcp=1
>> #listen_addr = "192.XX.XX.X"
>> listen_addr = "0.0.0.0"
>>
>> #
>> /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/patchviasocket.pl
>> -n v-1-VM -p
>>
>> %template=domP%type=consoleproxy%host=192.XXX.XX.5%port=8250%name=v-1-VM%zone=1%pod=1%guid=Proxy.1%proxy_vm=1%disable_rp_filter=true%eth2ip=192.XXX.XX.173%eth2mask=255.255.255.0%gateway=192.XXX.XX.1%eth0ip=169.254.0.173%eth0mask=255.255.0.0%eth1ip=192.XXX.XX.166%eth1mask=255.255.255.0%mgmtcidr=192.XXX.XX.0/24%localgw=192.XXX.XX.1%internaldns1=192.XXX.XX.1%dns1=192.XXX.XX.1
>> .
>> ERROR: unable to connect to /var/lib/libvirt/qemu/v-1-VM.agent -
>> Connection refused
>
>
> Do you have "-l" or "--listen" as LIBVIRTD_ARGS in /etc/sysconfig/libvirtd?
>
> (kind of stabbing in the dark)
>
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro


Re: Upgrade from 4.1.1 to 4.3.0 ( KVM, Traffic labels, Adv. VLAN ) VR's bug

2014-04-20 Thread Marcus
No idea, but have you verified that the vm is running the new system
vm template? What happens if you destroy the router and let it
recreate?

On Sun, Apr 20, 2014 at 6:20 PM, Serg Senko  wrote:
> Hi
>
> After the upgrade and restarting the system VMs,
> all VRs started with broken network configuration: egress rules stopped
> working,
> as did some static NAT rules.
>
>
> Here is "ip addr show" from one of the VRs:
>
> root@r-256-VM:~# ip addr show
>
> 1: lo:  mtu 16436 qdisc noqueue state UNKNOWN
>
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>
> inet 127.0.0.1/8 scope host lo
>
> inet6 ::1/128 scope host
>
>valid_lft forever preferred_lft forever
>
> 2: eth0:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
>
> link/ether 02:00:6b:16:00:09 brd ff:ff:ff:ff:ff:ff
>
> inet 10.1.1.1/24 brd 10.1.1.255 scope global eth0
>
> inet6 fe80::6bff:fe16:9/64 scope link
>
>valid_lft forever preferred_lft forever
>
> 3: eth1:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
>
> link/ether 0e:00:a9:fe:01:38 brd ff:ff:ff:ff:ff:ff
>
> inet 169.254.1.56/16 brd 169.254.255.255 scope global eth1
>
> inet6 fe80::c00:a9ff:fefe:138/64 scope link
>
>valid_lft forever preferred_lft forever
>
> 4: eth2:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
>
> link/ether 06:06:ec:00:00:0e brd ff:ff:ff:ff:ff:ff
>
> inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth2
>
> inet6 fe80::406:ecff:fe00:e/64 scope link
>
>valid_lft forever preferred_lft forever
>
> 5: eth3:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
>
> link/ether 06:81:44:00:00:0e brd ff:ff:ff:ff:ff:ff
>
> inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth3
>
> inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary eth3
>
> inet XXX.XXX.XXX.228/26 brd 46.165.231.255 scope global secondary eth3
>
> inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary eth3
>
> inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary eth3
>
> inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary eth3
>
> inet6 fe80::481:44ff:fe00:e/64 scope link
>
>valid_lft forever preferred_lft forever
>
> 6: eth4:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
>
> link/ether 06:e5:36:00:00:0e brd ff:ff:ff:ff:ff:ff
>
> inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth4
>
> inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary eth4
>
> inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary eth4
>
> inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary eth4
>
> inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary eth4
>
> inet6 fe80::4e5:36ff:fe00:e/64 scope link
>
>valid_lft forever preferred_lft forever
>
> 7: eth5:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
>
> link/ether 06:6f:3a:00:00:0e brd ff:ff:ff:ff:ff:ff
>
> inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth5
>
> inet XXX.XXX.XXX.228/26 brd 46.165.231.255 scope global secondary eth5
>
> inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary eth5
>
> inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary eth5
>
> inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary eth5
>
> inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary eth5
>
> inet6 fe80::46f:3aff:fe00:e/64 scope link
>
>valid_lft forever preferred_lft forever
>
> 8: eth6:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
>
> link/ether 06:b0:30:00:00:0e brd ff:ff:ff:ff:ff:ff
>
> inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth6
>
> inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary eth6
>
> inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary eth6
>
> inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary eth6
>
> inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary eth6
>
> inet6 fe80::4b0:30ff:fe00:e/64 scope link
>
>valid_lft forever preferred_lft forever
>
> 9: eth7:  mtu 1500 qdisc pfifo_fast state
> UP qlen 1000
>
> link/ether 06:26:b4:00:00:0e brd ff:ff:ff:ff:ff:ff
>
> inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth7
>
> inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary eth7
>
> inet XXX.XXX.XXX.228/26 brd 46.165.231.255 scope global secondary eth7
>
> inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary eth7
>
> inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary eth7
>
> inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary eth7
>
> inet6 fe80::426:b4ff:fe00:e/64 scope link
>
>valid_lft forever preferred_lft forever
>
>
> --
> ttyv0 "/usr/libexec/gmail Pc"  webcons on secure


Upgrade from 4.1.1 to 4.3.0 ( KVM, Traffic labels, Adv. VLAN ) VR's bug

2014-04-20 Thread Serg Senko
Hi

After the upgrade and restarting the system VMs,
all VRs started with broken network configuration: egress rules stopped
working,
as did some static NAT rules.


Here is "ip addr show" from one of the VRs:

root@r-256-VM:~# ip addr show

1: lo:  mtu 16436 qdisc noqueue state UNKNOWN

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

inet6 ::1/128 scope host

   valid_lft forever preferred_lft forever

2: eth0:  mtu 1500 qdisc pfifo_fast state
UP qlen 1000

link/ether 02:00:6b:16:00:09 brd ff:ff:ff:ff:ff:ff

inet 10.1.1.1/24 brd 10.1.1.255 scope global eth0

inet6 fe80::6bff:fe16:9/64 scope link

   valid_lft forever preferred_lft forever

3: eth1:  mtu 1500 qdisc pfifo_fast state
UP qlen 1000

link/ether 0e:00:a9:fe:01:38 brd ff:ff:ff:ff:ff:ff

inet 169.254.1.56/16 brd 169.254.255.255 scope global eth1

inet6 fe80::c00:a9ff:fefe:138/64 scope link

   valid_lft forever preferred_lft forever

4: eth2:  mtu 1500 qdisc pfifo_fast state
UP qlen 1000

link/ether 06:06:ec:00:00:0e brd ff:ff:ff:ff:ff:ff

inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth2

inet6 fe80::406:ecff:fe00:e/64 scope link

   valid_lft forever preferred_lft forever

5: eth3:  mtu 1500 qdisc pfifo_fast state
UP qlen 1000

link/ether 06:81:44:00:00:0e brd ff:ff:ff:ff:ff:ff

inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth3

inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary eth3

inet XXX.XXX.XXX.228/26 brd 46.165.231.255 scope global secondary eth3

inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary eth3

inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary eth3

inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary eth3

inet6 fe80::481:44ff:fe00:e/64 scope link

   valid_lft forever preferred_lft forever

6: eth4:  mtu 1500 qdisc pfifo_fast state
UP qlen 1000

link/ether 06:e5:36:00:00:0e brd ff:ff:ff:ff:ff:ff

inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth4

inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary eth4

inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary eth4

inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary eth4

inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary eth4

inet6 fe80::4e5:36ff:fe00:e/64 scope link

   valid_lft forever preferred_lft forever

7: eth5:  mtu 1500 qdisc pfifo_fast state
UP qlen 1000

link/ether 06:6f:3a:00:00:0e brd ff:ff:ff:ff:ff:ff

inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth5

inet XXX.XXX.XXX.228/26 brd 46.165.231.255 scope global secondary eth5

inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary eth5

inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary eth5

inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary eth5

inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary eth5

inet6 fe80::46f:3aff:fe00:e/64 scope link

   valid_lft forever preferred_lft forever

8: eth6:  mtu 1500 qdisc pfifo_fast state
UP qlen 1000

link/ether 06:b0:30:00:00:0e brd ff:ff:ff:ff:ff:ff

inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth6

inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary eth6

inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary eth6

inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary eth6

inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary eth6

inet6 fe80::4b0:30ff:fe00:e/64 scope link

   valid_lft forever preferred_lft forever

9: eth7:  mtu 1500 qdisc pfifo_fast state
UP qlen 1000

link/ether 06:26:b4:00:00:0e brd ff:ff:ff:ff:ff:ff

inet XXX.XXX.XXX.219/26 brd 46.165.231.255 scope global eth7

inet XXX.XXX.XXX.247/26 brd 46.165.231.255 scope global secondary eth7

inet XXX.XXX.XXX.228/26 brd 46.165.231.255 scope global secondary eth7

inet XXX.XXX.XXX.230/26 brd 46.165.231.255 scope global secondary eth7

inet XXX.XXX.XXX.209/26 brd 46.165.231.255 scope global secondary eth7

inet XXX.XXX.XXX.227/26 brd 46.165.231.255 scope global secondary eth7

inet6 fe80::426:b4ff:fe00:e/64 scope link

   valid_lft forever preferred_lft forever


-- 
ttyv0 "/usr/libexec/gmail Pc"  webcons on secure


Network Setup Question

2014-04-20 Thread daoenix
Hello,

Question: I am following the directions found here:
http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/latest/hypervisor_installation.html?highlight=network

I am wondering about the interfaces. I do as the guide says; however, I am
unable to connect to my box once it is set up, and I have to revert eth0 (eth1) to
regain network connectivity. Is there something I am missing? Does this setup
require a (hardware) switch? If so, I suspect I would need to set up
Open vSwitch, correct?

- daoenix

vi /etc/sysconfig/network-scripts/ifcfg-eth0

Make sure it looks similar to:

DEVICE=eth0
HWADDR=00:04:xx:xx:xx:xx
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=Ethernet

We now have to configure the three VLAN interfaces:

vi /etc/sysconfig/network-scripts/ifcfg-eth0.100

DEVICE=eth0.100
HWADDR=00:04:xx:xx:xx:xx
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=Ethernet
VLAN=yes
IPADDR=192.168.42.11
GATEWAY=192.168.42.1
NETMASK=255.255.255.0

vi /etc/sysconfig/network-scripts/ifcfg-eth0.200

DEVICE=eth0.200
HWADDR=00:04:xx:xx:xx:xx
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=Ethernet
VLAN=yes
BRIDGE=cloudbr0

vi /etc/sysconfig/network-scripts/ifcfg-eth0.300

DEVICE=eth0.300
HWADDR=00:04:xx:xx:xx:xx
ONBOOT=yes
HOTPLUG=no
BOOTPROTO=none
TYPE=Ethernet
VLAN=yes
BRIDGE=cloudbr1

Now that we have the VLAN interfaces configured, we can add the bridges on top
of them.

vi /etc/sysconfig/network-scripts/ifcfg-cloudbr0

Now we just configure it as a plain bridge without an IP address:

DEVICE=cloudbr0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
IPV6_AUTOCONF=no
DELAY=5
STP=yes

We do the same for cloudbr1:

vi /etc/sysconfig/network-scripts/ifcfg-cloudbr1

DEVICE=cloudbr1
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
IPV6_AUTOCONF=no
DELAY=5
STP=yes
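
After writing the configs, a quick way to apply and verify them (a sketch; run
it from a local console in case connectivity drops again):

service network restart
brctl show              # cloudbr0/cloudbr1 should list eth0.200/eth0.300 as ports
ip addr show eth0.100   # the management IP should sit on the VLAN interface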



Re: Cloudstack with iscsi storage

2014-04-20 Thread rammohan ganapavarapu
Geoff,

Thank you, that is what I wanted. I am planning to have NFS for
secondary storage and CLVM for primary, since CloudStack doesn't support using
the SAN directly. I am not sure whether there is any other solution for
presenting my SAN LUNs to all the VM hosts.

Ram
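
For reference, a rough sketch of presenting an iSCSI LUN to each KVM host and
turning it into a clustered volume group for CLVM (the target IP and device
name are examples, and the cluster itself, cman/clvmd plus fencing, must
already be configured, e.g. with the luci tool mentioned below):

iscsiadm -m discovery -t sendtargets -p 192.168.1.50   # example target portal
iscsiadm -m node -l                                    # log in to the target
pvcreate /dev/sdb                                      # the LUN as seen on this host
vgcreate -c y cloudvg /dev/sdb                         # -c y marks the VG clustered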


On Sun, Apr 20, 2014 at 3:07 PM, Geoff Higginbottom <
geoff.higginbot...@shapeblue.com> wrote:

> Ram,
>
> The management server(s) need to have access to secondary storage, not
> primary.  If you have placed your pri and sec storage devices on a common
> network (perfectly acceptable config) then you just need to ensure the
> management servers have access to the sec storage devices.  Best practice
> has always been to restrict access to the pri storage devices to only the
> hosts within the clusters serviced by the pri storage.
>
> Regards
>
> Geoff Higginbottom
> CTO / Cloud Architect
>
>
> D: +44 20 3603 0542 | S: +44 20 3603 0540 | M: +447968161581
>
> geoff.higginbot...@shapeblue.com | www.shapeblue.com
>
> ShapeBlue Ltd, 53 Chandos Place, Covent Garden, London, WC2N 4HS
>
>
>
> On 20 Apr 2014, at 21:44, "rammohan ganapavarapu" <rammohanga...@gmail.com> wrote:
>
> Thanks for the video, I have one more question.
>
> So the CS/management server needs to have access to the storage network? I have two
> networks, one for storage and one for regular traffic. My hypervisor hosts
> can connect to storage, but my management server doesn't have connectivity
> to storage, so I am wondering whether the management server needs a connection
> to (primary) storage.
>
> Ram
>
>
> On Thu, Apr 17, 2014 at 8:57 PM, Matthew Midgett <supp...@trickhosting.biz> wrote:
>
> How about a video? Also follow these steps below that I posted as a
> comment to her video
>
> http://www.youtube.com/watch?v=srwCdkBEGZQ
>
> To those who are trying to install luci on CentOS or RHEL 6: after
> you install luci, run the commands "service luci start" and "chkconfig
> luci on"; this will start luci and set it to run at boot. There is no more
> luci_admin init.
> Also, the password is your root password.
>
>
>
>
> On 04/17/2014 11:33 PM, rammohan ganapavarapu wrote:
>
> Can you please share your install commands?
>
>
> On Thu, Apr 17, 2014 at 8:24 PM, Matthew Midgett <supp...@trickhosting.biz> wrote:
>
> If you're going to use CLVM, look up a tool called luci by Red Hat. It will
> help you set up your cluster and fencing agents. If you need help, I'm
> sure I
> can find my install commands in history. CLVM lets you share a logical
> volume, but you still have to provide the transport layer: Fibre, NFS,
> and other technologies.
>
>
> Sent from my Galaxy S(r)III
>
> -------- Original message --------
> From: rammohan ganapavarapu <rammohanga...@gmail.com>
> Date: 04/17/2014 10:32 PM (GMT-05:00)
> To: users <users@cloudstack.apache.org>
> Subject: Re: Cloudstack with iscsi storage
>
> OK, thank you. Can I use NFS? Which one gives better performance, NFS or
> CLVM?
> On Apr 17, 2014 5:21 PM, "Nux!" <n...@li.nux.ro> wrote:
>
> On 18.04.2014 00:33, rammohan ganapavarapu wrote:
>
> Ilya,
>
> I am planning to use KVM as my hypervisor and CentOS 6.4; please advise
> me.
>
> Ram
>
> Then you want to use CLVM. See http://www.slideshare.net/
> MarcusLSorensen/cloud-stack-clvm
>
> And upgrade to CentOS 6.5.
>
> Lucian
>
> --
> Sent from the Delta quadrant using Borg technology!
>
> Nux!
> www.nux.ro
>
>
>

Re: Cloudstack with iscsi storage

2014-04-20 Thread Geoff Higginbottom
Ram,

The management server(s) need to have access to secondary storage, not primary. 
 If you have placed your pri and sec storage devices on a common network 
(perfectly acceptable config) then you just need to ensure the management 
servers have access to the sec storage devices.  Best practice has always been 
to restrict access to the pri storage devices to only the hosts within the 
clusters serviced by the pri storage.
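
For NFS-backed sec storage, a quick check from the management server would look
something like this (a sketch; the host IP and export path are hypothetical
placeholders for your own):

showmount -e 192.0.2.10                                  # list exports on the sec storage host
mkdir -p /mnt/secprobe
mount -t nfs 192.0.2.10:/export/secondary /mnt/secprobe  # test-mount the export
ls /mnt/secprobe && umount /mnt/secprobe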

Regards

Geoff Higginbottom
CTO / Cloud Architect


D: +44 20 3603 0542 | S: +44 20 3603 0540 | M: +447968161581

geoff.higginbot...@shapeblue.com | www.shapeblue.com

ShapeBlue Ltd, 53 Chandos Place, Covent Garden, London, WC2N 4HS



On 20 Apr 2014, at 21:44, "rammohan ganapavarapu" <rammohanga...@gmail.com> wrote:

Thanks for the video, I have one more question:

Does the CS/management server need to have access to the storage network? I
have two networks, one for storage and one for regular traffic. My hypervisor
hosts can connect to storage, but my management server doesn't have
connectivity to storage; I am wondering whether the management server needs a
connection to (primary) storage.

Ram


On Thu, Apr 17, 2014 at 8:57 PM, Matthew Midgett
<supp...@trickhosting.biz> wrote:

How about a video? Also follow these steps below that I posted as a
comment to her video

http://www.youtube.com/watch?v=srwCdkBEGZQ

To those who are trying to install luci on centos or rhel 6 series after
you install luci you run the command "service luci start" and "chkconfig
luci on" this will start luci and set it to run at boot. There is no more
luci_admin init.
Also the password is your root password.




On 04/17/2014 11:33 PM, rammohan ganapavarapu wrote:

Can you please share your install commands?


On Thu, Apr 17, 2014 at 8:24 PM, Matthew Midgett
<supp...@trickhosting.biz> wrote:

If you're going to use CLVM, look up a tool called luci by Red Hat. It will
help you set up your cluster and fencing agents. If you need help I'm sure I
can find my install commands in history. CLVM allows you to share a logical
volume, but you still have to provide the transport layer: Fibre Channel,
NFS... and other technologies.


Sent from my Galaxy S®III

 Original message 
From: rammohan ganapavarapu <rammohanga...@gmail.com>
Date: 04/17/2014 10:32 PM (GMT-05:00)
To: users <users@cloudstack.apache.org>
Subject: Re: Cloudstack with iscsi storage

OK, thank you. Can I use NFS? Which one gives better performance, NFS or
CLVM?
On Apr 17, 2014 5:21 PM, "Nux!" <n...@li.nux.ro> wrote:

On 18.04.2014 00:33, rammohan ganapavarapu wrote:

Ilya,

I am planning to use KVM as my hypervisor and CentOS 6.4; please advise
me.

Ram

Then you want to use CLVM. See http://www.slideshare.net/
MarcusLSorensen/cloud-stack-clvm

And upgrade to CentOS 6.5.

Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro





Re: Cloudstack 4.3 instances can't access outside world

2014-04-20 Thread Serg Senko
Hello all,

There is a bug after upgrading from 4.1.1 to 4.3.0
(KVM hypervisor).

Agent settings labels:

guest.network.device=cloudbr1
private.network.device=cloudbr1
public.network.device=cloudbr0

After the upgrade to 4.3, the SSVM and all VRs started with multiple [public]
interfaces; I have some VRs with 5-6 ethX interfaces carrying the same IP.

So, in all cases, egress rules don't work.

Fix? Workaround?
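
In the meantime, a quick way to see the duplicated addresses from inside an
affected VR (a diagnostic sketch, assuming console or link-local SSH access to
the router VM):

ip -o -4 addr show                                       # one line per interface/address
ip -o -4 addr show | awk '{print $4}' | sort | uniq -d   # addresses assigned more than once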

On Sat, Apr 12, 2014 at 12:16 AM, motty cruz  wrote:

> I have a testing CloudStack cluster; I have destroyed, rebuilt, and upgraded
> it several times, and each time I ran into the same problem: instances behind
> the virtual router are unable to access the outside world.
>
> Here is iptables before the upgrade, CloudStack 4.2:
>
> # Generated by iptables-save v1.4.14 on Fri Apr 11 19:53:57 2014
> *mangle
> :PREROUTING ACCEPT [2317:1282555]
> :INPUT ACCEPT [409:147015]
> :FORWARD ACCEPT [0:0]
> :OUTPUT ACCEPT [189:29312]
> :POSTROUTING ACCEPT [189:29312]
> :FIREWALL_176.23.23.192 - [0:0]
> :VPN_176.23.23.192 - [0:0]
> -A PREROUTING -d 176.23.23.192/32 -j VPN_176.23.23.192
> -A PREROUTING -d 176.23.23.192/32 -j FIREWALL_176.23.23.192
> -A PREROUTING -m state --state RELATED,ESTABLISHED -j CONNMARK
> --restore-mark --nfmask 0x --ctmask 0x
> -A POSTROUTING -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
> -A FIREWALL_176.23.23.192 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A FIREWALL_176.23.23.192 -j DROP
> -A VPN_176.23.23.192 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A VPN_176.23.23.192 -j RETURN
> COMMIT
> # Completed on Fri Apr 11 19:53:57 2014
> # Generated by iptables-save v1.4.14 on Fri Apr 11 19:53:57 2014
> *filter
> :INPUT DROP [204:117504]
> :FORWARD DROP [0:0]
> :OUTPUT ACCEPT [150:22404]
> :FW_OUTBOUND - [0:0]
> :NETWORK_STATS - [0:0]
> -A INPUT -j NETWORK_STATS
> -A INPUT -d 224.0.0.18/32 -j ACCEPT
> -A INPUT -d 225.0.0.50/32 -j ACCEPT
> -A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A INPUT -i eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A INPUT -i eth2 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A INPUT -p icmp -j ACCEPT
> -A INPUT -i lo -j ACCEPT
> -A INPUT -i eth0 -p udp -m udp --dport 67 -j ACCEPT
> -A INPUT -i eth0 -p udp -m udp --dport 53 -j ACCEPT
> -A INPUT -i eth0 -p tcp -m tcp --dport 53 -j ACCEPT
> -A INPUT -i eth1 -p tcp -m state --state NEW -m tcp --dport 3922 -j ACCEPT
> -A INPUT -i eth0 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
> -A INPUT -s 10.1.1.0/24 -i eth0 -p tcp -m state --state NEW -m tcp --dport
> 8080 -j ACCEPT
> -A FORWARD -j NETWORK_STATS
> -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A FORWARD -i eth0 -o eth0 -m state --state NEW -j ACCEPT
> -A FORWARD -i eth0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A FORWARD -i eth2 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A FORWARD -i eth0 -o eth2 -j FW_OUTBOUND
> -A OUTPUT -j NETWORK_STATS
> -A FW_OUTBOUND -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A NETWORK_STATS -i eth0 -o eth2
> -A NETWORK_STATS -i eth2 -o eth0
> -A NETWORK_STATS ! -i eth0 -o eth2 -p tcp
> -A NETWORK_STATS -i eth2 ! -o eth0 -p tcp
> COMMIT
> # Completed on Fri Apr 11 19:53:57 2014
> # Generated by iptables-save v1.4.14 on Fri Apr 11 19:53:57 2014
> *nat
> :PREROUTING ACCEPT [2078:1204416]
> :INPUT ACCEPT [10:964]
> :OUTPUT ACCEPT [1:338]
> :POSTROUTING ACCEPT [1:338]
> -A POSTROUTING -o eth2 -j SNAT --to-source 176.23.23.192
> COMMIT
> # Completed on Fri Apr 11 19:53:57 2014
>
>
> After upgrading to CloudStack 4.3:
>
>
> :POSTROUTING ACCEPT [211:25828]
> :FIREWALL_176.23.23.192 - [0:0]
> :VPN_176.23.23.192 - [0:0]
> -A PREROUTING -d 176.23.23.192/32 -j VPN_176.23.23.192
> -A PREROUTING -d 176.23.23.192/32 -j FIREWALL_176.23.23.192
> -A PREROUTING -m state --state RELATED,ESTABLISHED -j CONNMARK
> --restore-mark --nfmask 0x --ctmask 0x
> -A POSTROUTING -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
> -A FIREWALL_176.23.23.192 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A FIREWALL_176.23.23.192 -j DROP
> -A VPN_176.23.23.192 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A VPN_176.23.23.192 -j RETURN
> COMMIT
> # Completed on Fri Apr 11 20:49:46 2014
> # Generated by iptables-save v1.4.14 on Fri Apr 11 20:49:46 2014
> *filter
> :INPUT DROP [68:32168]
> :FORWARD DROP [0:0]
> :OUTPUT ACCEPT [81:12516]
> :FW_EGRESS_RULES - [0:0]
> :FW_OUTBOUND - [0:0]
> :NETWORK_STATS - [0:0]
> -A INPUT -j NETWORK_STATS
> -A INPUT -d 224.0.0.18/32 -j ACCEPT
> -A INPUT -d 225.0.0.50/32 -j ACCEPT
> -A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A INPUT -i eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A INPUT -i eth2 -m state --state RELATED,ESTABLISHED -j ACCEPT
> -A INPUT -p icmp -j ACCEPT
> -A INPUT -i lo -j ACCEPT
> -A INPUT -i eth0 -p udp -m udp --dport 67 -j ACCEPT
> -A INPUT -i eth0 -p udp -m udp --dport 53 -j ACCEPT
> -A INPUT -i eth0 -p tcp -m tcp --dport 53 -j ACCEPT
> -A INPUT -i 

Re: Cloudstack with iscsi storage

2014-04-20 Thread rammohan ganapavarapu
Thanks for the video, I have one more question:

Does the CS/management server need to have access to the storage network? I
have two networks, one for storage and one for regular traffic. My hypervisor
hosts can connect to storage, but my management server doesn't have
connectivity to storage; I am wondering whether the management server needs a
connection to (primary) storage.

Ram


On Thu, Apr 17, 2014 at 8:57 PM, Matthew Midgett wrote:

> How about a video? Also follow these steps below that I posted as a
> comment to her video
>
> http://www.youtube.com/watch?v=srwCdkBEGZQ
>
> To those who are trying to install luci on centos or rhel 6 series after
> you install luci you run the command "service luci start" and "chkconfig
> luci on" this will start luci and set it to run at boot. There is no more
> luci_admin init.
> Also the password is your root password.
>
>
>
>
> On 04/17/2014 11:33 PM, rammohan ganapavarapu wrote:
>
>> Can you please share your install commands?
>>
>>
>> On Thu, Apr 17, 2014 at 8:24 PM, Matthew Midgett
>> wrote:
>>
>>> If you're going to use CLVM, look up a tool called luci by Red Hat. It will
>>> help you set up your cluster and fencing agents. If you need help I'm sure I
>>> can find my install commands in history. CLVM allows you to share a logical
>>> volume, but you still have to provide the transport layer: Fibre Channel,
>>> NFS... and other technologies.
>>>
>>>
>>> Sent from my Galaxy S®III
>>>
>>>  Original message 
>>> From: rammohan ganapavarapu 
>>> Date: 04/17/2014 10:32 PM (GMT-05:00)
>>> To: users 
>>> Subject: Re: Cloudstack with iscsi storage
>>>
>>> OK, thank you. Can I use NFS? Which one gives better performance, NFS or
>>> CLVM?
>>> On Apr 17, 2014 5:21 PM, "Nux!"  wrote:
>>>
>>>> On 18.04.2014 00:33, rammohan ganapavarapu wrote:
>>>>
>>>>> Ilya,
>>>>>
>>>>> I am planning to use KVM as my hypervisor and CentOS 6.4; please advise
>>>>> me.
>>>>>
>>>>> Ram
>>>>
>>>> Then you want to use CLVM. See http://www.slideshare.net/
>>>> MarcusLSorensen/cloud-stack-clvm
>>>>
>>>> And upgrade to CentOS 6.5.
>>>>
>>>> Lucian
>>>>
>>>> --
>>>> Sent from the Delta quadrant using Borg technology!
>>>>
>>>> Nux!
>>>> www.nux.ro


RE: Putting a bounty on the 4.3.0 debs with working vmware functionality...

2014-04-20 Thread Paul Angus
Hi Michael,

I usually build on CentOS, but I've had a run at the Ubuntu build and it all
looks OK.

Have you got somewhere I can upload these debs to?

Regards

Paul Angus
Cloud Architect
S: +44 20 3603 0540 | M: +447711418784 | T: CloudyAngus
paul.an...@shapeblue.com

-Original Message-
From: Michael Phillips [mailto:mphilli7...@hotmail.com]
Sent: 19 April 2014 00:49
To: users@cloudstack.apache.org
Subject: RE: Putting a bounty on the 4.3.0 debs with working vmware 
functionality...

I followed the docs as well, which describe the same procedure as for building
4.2.1.
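
For reference, the rough shape of the noredist (VMware-enabled) build those
docs describe (a sketch from memory; check the 4.3 build docs for the exact
profile flags and for how -Dnoredist is threaded into the deb packaging):

# install the non-redistributable jars (vim25.jar etc.) into the local repo
cd deps && ./install-non-oss.sh && cd ..
# build with the noredist profile so the VMware plugin is compiled in
mvn clean install -P developer,systemvm -Dnoredist
# then build the debs from the source root
dpkg-buildpackage -uc -us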

> Date: Fri, 18 Apr 2014 19:46:21 -0400
> From: ilya.mailing.li...@gmail.com
> To: users@cloudstack.apache.org
> Subject: Re: Putting a bounty on the 4.3.0 debs with working vmware 
> functionality...
>
> Michael,
>
> I've tried doing this on CentOS several weeks back - and I had no
> issues. I simply followed the instructions in the docs.
>
> If you already waited for 2 weeks, I suggest waiting for 1 more. There
> is an annoying bug with VMware which is being worked on, though it is
> nothing build-related.
>
> Regards
> ilya
>
>
>
> On 4/18/14, 3:32 PM, Michael Phillips wrote:
> > Been trying now for over 2 weeks to get working debs with vmware.
> > For some reason the nonoss build and packaging look good to go;
> > however, once I install or upgrade, my CS acts as if it has no vmware
> > functionality, i.e. I get the following error messages: Unknown API
> > Command Vmware, or java.lang.classnotfoundexception
> > com.cloud.hypervisor.vmware.resource.vmwareresource.
> > Tired of monkeying around with this. If someone can get me access
> > to the 7 deb files with a working vmware stack, we can negotiate a price.
> > If interested, email me directly at mphilli7...@hotmail.com Thanks!
> >
>



Cloud Agent Error

2014-04-20 Thread Mo
I am attempting to troubleshoot this myself, but I have not quite worked out
what I should do. I suspect this is exactly why I am getting a 404 error
for the CS UI:

https://www.dropbox.com/s/2xsrwj931hi4948/Screenshot%202014-04-20%2010.33.46.png


2014-04-20 10:31:25,965 WARN  [utils.nio.NioConnection] (Agent-Selector:null)
Unable to connect to remote: is there a server running on port 8250
2014-04-20 10:31:30,967 INFO  [utils.nio.NioClient] (Agent-Selector:null)
Connecting to localhost:8250
2014-04-20 10:31:30,969 WARN  [utils.nio.NioConnection] (Agent-Selector:null)
Unable to connect to remote: is there a server running on port 8250
2014-04-20 10:31:35,970 INFO  [utils.nio.NioClient] (Agent-Selector:null)
Connecting to localhost:8250
2014-04-20 10:31:35,972 WARN  [utils.nio.NioConnection] (Agent-Selector:null)
Unable to connect to remote: is there a server running on port 8250
2014-04-20 10:31:40,973 INFO  [utils.nio.NioClient] (Agent-Selector:null)
Connecting to localhost:8250
2014-04-20 10:31:40,974 WARN  [utils.nio.NioConnection] (Agent-Selector:null)
Unable to connect to remote: is there a server running on port 8250
2014-04-20 10:31:45,975 INFO  [utils.nio.NioClient] (Agent-Selector:null)
Connecting to localhost:8250
2014-04-20 10:31:45,977 WARN  [utils.nio.NioConnection] (Agent-Selector:null)
Unable to connect to remote: is there a server running on port 8250
2014-04-20 10:31:50,978 INFO  [utils.nio.NioClient] (Agent-Selector:null)
Connecting to localhost:8250
2014-04-20 10:31:50,980 WARN  [utils.nio.NioConnection] (Agent-Selector:null)
Unable to connect to remote: is there a server running on port 8250
2014-04-20 10:31:55,981 INFO  [utils.nio.NioClient] (Agent-Selector:null)
Connecting to localhost:8250
2014-04-20 10:31:55,983 WARN  [utils.nio.NioConnection] (Agent-Selector:null)
Unable to connect to remote: is there a server running on port 8250
2014-04-20 10:32:00,984 INFO  [utils.nio.NioClient] (Agent-Selector:null)
Connecting to localhost:8250
2014-04-20 10:32:00,985 WARN  [utils.nio.NioConnection] (Agent-Selector:null)
Unable to connect to remote: is there a server running on port 8250
2014-04-20 10:32:05,987 INFO  [utils.nio.NioClient] (Agent-Selector:null)
Connecting to localhost:8250
2014-04-20 10:32:05,988 WARN  [utils.nio.NioConnection] (Agent-Selector:null)
Unable to connect to remote: is there a server running on port 8250
2014-04-20 10:32:10,989 INFO  [utils.nio.NioClient] (Agent-Selector:null)
Connecting to localhost:8250
2014-04-20 10:32:10,991 WARN  [utils.nio.NioConnection] (Agent-Selector:null)
Unable to connect to remote: is there a server running on port 8250
2014-04-20 10:32:15,992 INFO  [utils.nio.NioClient] (Agent-Selector:null)
Connecting to localhost:8250
2014-04-20 10:32:15,993 WARN  [utils.nio.NioConnection] (Agent-Selector:null)
Unable to connect to remote: is there a server running on port 8250


Any assistance would be great!

- Mo
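
The log shows the agent failing to reach a management server on localhost:8250,
so a few quick checks on the management host may help (a sketch, assuming a
standard package install; paths can differ between releases):

netstat -tlnp | grep 8250       # is anything listening on 8250?
service cloudstack-management status
tail -n 100 /var/log/cloudstack/management/management-server.log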


Re: KVM - Migration of CLVM volumes to another primary storage fail

2014-04-20 Thread Nux!

On 20.04.2014 13:24, Salvatore Sciacco wrote:

2014-04-20 12:31 GMT+02:00 Nux! :

It looks like a bug, "qemu-img convert" should be used instead of "cp -f",
among others.

I suppose some code was added to do a simple copy when the format is the
same; this wasn't the case with the 4.1.1 version.

Do you mind opening an issue in https://issues.apache.org/jira ?

Already did :-)

https://issues.apache.org/jira/browse/CLOUDSTACK-6462

Thanks

S.


Cool, I'll try to find out after the holidays whether the problem exists in
4.3 as well and, if so, bug some people about it.


Happy Easter :-)

Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro


Re: KVM - Migration of CLVM volumes to another primary storage fail

2014-04-20 Thread Salvatore Sciacco
2014-04-20 12:31 GMT+02:00 Nux! :

> It looks like a bug, "qemu-img convert" should be used instead of "cp -f",
> among others.
>

I suppose some code was added to do a simple copy when the format is the
same; this wasn't the case with the 4.1.1 version.



> Do you mind opening an issue in https://issues.apache.org/jira ?

Already did :-)

https://issues.apache.org/jira/browse/CLOUDSTACK-6462

Thanks

S.


Re: KVM - Migration of CLVM volumes to another primary storage fail

2014-04-20 Thread Nux!

On 20.04.2014 10:57, Salvatore Sciacco wrote:

ACS version: 4.2.1
Hypervisors: KVM
Storage pool type: CLVM

Since we upgraded from 4.1 to 4.2.1, moving volumes to a different primary
storage pool fails. I've enabled debug on the agent side and I think there
is a problem with the format type conversion.

Volume on database has format QCOW2

these are the parameters for the first step (CLVM -> NFS):

"srcTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"cda46430-52d7-4bf0-b0c2-adfc78dd011c","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"655d6965-b3f3-4118-a970-d50cf6afc365","id":211,"poolType":"CLVM","host":"localhost","path":"/FC10KY1","port":0}},"name":"ROOT-4450","size":5368709120,"path":"39a25daf-23a1-4b65-99ac-fb98469ac197","volumeId":5937,"vmName":"i-402-4450-VM","accountId":402,"format":"QCOW2","id":5937,"hypervisorType":"KVM"}}

"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"cda46430-52d7-4bf0-b0c2-adfc78dd011c","volumeType":"ROOT","dataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
192.168.11.6/home/a1iwstack
","_role":"Image"}},"name":"ROOT-4450","size":5368709120,"path":"volumes/402/5937","volumeId":5937,"vmName":"i-402-4450-VM","accountId":402,"format":"QCOW2","id":5937,"hypervisorType":"KVM"}}


These are translated into the following commands on the agent:
DEBUG [utils.script.Script] (agentRequest-Handler-1:null) Executing:
qemu-img info /dev/FC10KY1/39a25daf-23a1-4b65-99ac-fb98469ac197
DEBUG [utils.script.Script] (agentRequest-Handler-1:null) Execution is
successful.
DEBUG [utils.script.Script] (agentRequest-Handler-1:null) Executing: */bin/bash
-c cp -f /dev/FC10KY1/39a25daf-23a1-4b65-99ac-fb98469ac197
/mnt/b8311c72-fe75-3832-98fc-975445028a12/5c713376-c418-478c-8a31-89c4181cb48e.qcow2*


With the result that the output file isn't a qcow2 file but a raw
partition, which in turn makes the next step (NFS -> CLVM) fail.


It looks like a bug, "qemu-img convert" should be used instead of "cp -f",
among others.

Do you mind opening an issue in https://issues.apache.org/jira ?

Lucian

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro


KVM - Migration of CLVM volumes to another primary storage fail

2014-04-20 Thread Salvatore Sciacco
ACS version: 4.2.1
Hypervisors: KVM
Storage pool type: CLVM

Since we upgraded from 4.1 to 4.2.1, moving volumes to a different primary
storage pool fails. I've enabled debug on the agent side and I think there
is a problem with the format type conversion.

Volume on database has format QCOW2

these are the parameters for the first step (CLVM -> NFS):

"srcTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"cda46430-52d7-4bf0-b0c2-adfc78dd011c","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"655d6965-b3f3-4118-a970-d50cf6afc365","id":211,"poolType":"CLVM","host":"localhost","path":"/FC10KY1","port":0}},"name":"ROOT-4450","size":5368709120,"path":"39a25daf-23a1-4b65-99ac-fb98469ac197","volumeId":5937,"vmName":"i-402-4450-VM","accountId":402,"format":"QCOW2","id":5937,"hypervisorType":"KVM"}}

"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"cda46430-52d7-4bf0-b0c2-adfc78dd011c","volumeType":"ROOT","dataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
192.168.11.6/home/a1iwstack
","_role":"Image"}},"name":"ROOT-4450","size":5368709120,"path":"volumes/402/5937","volumeId":5937,"vmName":"i-402-4450-VM","accountId":402,"format":"QCOW2","id":5937,"hypervisorType":"KVM"}}


These are translated into the following commands on the agent:
DEBUG [utils.script.Script] (agentRequest-Handler-1:null) Executing:
qemu-img info /dev/FC10KY1/39a25daf-23a1-4b65-99ac-fb98469ac197
DEBUG [utils.script.Script] (agentRequest-Handler-1:null) Execution is
successful.
DEBUG [utils.script.Script] (agentRequest-Handler-1:null) Executing: */bin/bash
-c cp -f /dev/FC10KY1/39a25daf-23a1-4b65-99ac-fb98469ac197
/mnt/b8311c72-fe75-3832-98fc-975445028a12/5c713376-c418-478c-8a31-89c4181cb48e.qcow2*


With the result that the output file isn't a qcow2 file but a raw
partition, which in turn makes the next step (NFS -> CLVM) fail:

DEBUG [utils.script.Script] (agentRequest-Handler-2:) Executing: qemu-img
info
/mnt/b8311c72-fe75-3832-98fc-975445028a12/b9303d8d-cd51-4b6c-a244-43c405df4238.qcow2
DEBUG [utils.script.Script] (agentRequest-Handler-2:) Execution is
successful.
DEBUG [utils.script.Script] (agentRequest-Handler-2:) Executing: qemu-img
convert -f qcow2 -O raw
/mnt/b8311c72-fe75-3832-98fc-975445028a12/b9303d8d-cd51-4b6c-a244-43c405df4238.qcow2
/dev/FCSTORAGE/da162325-467b-4e78-af07-4bad85470d66
DEBUG [utils.script.Script] (agentRequest-Handler-2:) Exit value is 1
DEBUG [utils.script.Script] (agentRequest-Handler-2:) qemu-img: Could not
open
'/mnt/b8311c72-fe75-3832-98fc-975445028a12/b9303d8d-cd51-4b6c-a244-43c405df4238.qcow2'qemu-img:
Could not open
'/mnt/b8311c72-fe75-3832-98fc-975445028a12/b9303d8d-cd51-4b6c-a244-43c405df4238.qcow2'
ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-2:) Failed
to convert
/mnt/b8311c72-fe75-3832-98fc-975445028a12/b9303d8d-cd51-4b6c-a244-43c405df4238.qcow2
to /dev/FCSTORAGE/da162325-467b-4e78-af07-4bad85470d66 the error was:
qemu-img: Could not open
'/mnt/b8311c72-fe75-3832-98fc-975445028a12/b9303d8d-cd51-4b6c-a244-43c405df4238.qcow2'qemu-img:
Could not open
'/mnt/b8311c72-fe75-3832-98fc-975445028a12/b9303d8d-cd51-4b6c-a244-43c405df4238.qcow2'

If I change the format of the volume to RAW in the database, the effect is
even worse, as *data is lost* in the process!
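
Before experimenting further with the format column in the database, it is
worth copying the LV out first (a precautionary sketch using the LV path from
the log above; the destination file is a hypothetical location with enough
free space):

dd if=/dev/FC10KY1/39a25daf-23a1-4b65-99ac-fb98469ac197 of=/root/ROOT-4450.backup.img bs=1M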

These are the parameters for the first step (CLVM => NFS):

"srcTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"cda46430-52d7-4bf0-b0c2-adfc78dd011c","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"655d6965-b3f3-4118-a970d50cf6afc365","id":211,"poolType":"CLVM","host":"localhost","path":"/FC10KY1","port":0}},"name":"ROOT-4450"
,"size":5368709120,"path":"39a25daf-23a1-4b65-99ac-fb98469ac197","volumeId":5937,"vmName":"i-4024450VM","accountId":402,
"format":"RAW","id":5937,"hypervisorType":"KVM"}},

"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"cda46430-52d7-4bf0-b0c2-adfc78dd011c","volumeType":"ROOT","dataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
192.168.11.6/home/a1iwstack
","_role":"Image"}},"name":"ROOT4450","size":5368709120,"path":"volumes/402/5937","volumeId":5937,"vmName":"i-402-4450-VM","accountId":402,
"format":"RAW","id":5937,"hypervisorType":"KVM"}}

this time the output is converted to qcow2!

DEBUG [utils.script.Script] (agentRequest-Handler-3:null) Executing:
qemu-img info /dev/FC10KY1/39a25daf-23a1-4b65-99ac-fb98469ac197
DEBUG [utils.script.Script] (agentRequest-Handler-3:null) Execution is
successful.
DEBUG [utils.script.Script] (agentRequest-Handler-3:null) *Executing:
qemu-img convert -f raw -O qcow2*
/dev/FC10KY1/39a25daf-23a1-4b65-99ac-fb98469ac197
/mnt/b8311c72-fe75-3832-98fc-975445028a12/01ab129f-aaf6-4b1a-8e2a-093bee0b811c.raw


and *data is lost* in the next step (NFS -> CLVM):

"srcTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"cda46430-52d7-4bf0-b0c2-adfc78dd011c","volumeType":"ROOT","dataStore":{"
com.cl
oud.agent.api.to.NfsTO":{"_url