[jira] [Commented] (CLOUDSTACK-9999) vpc tiers do not work if vpc has more than 8 tiers
[ https://issues.apache.org/jira/browse/CLOUDSTACK-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16685624#comment-16685624 ] Andrija Panic commented on CLOUDSTACK-: --- https://github.com/apache/cloudstack/pull/2180 http://docs.cloudstack.apache.org/projects/archived-cloudstack-release-notes/en/4.11/fixed_issues.html cheers -- Andrija Panić > vpc tiers do not work if vpc has more than 8 tiers > -- > > Key: CLOUDSTACK- > URL: https://issues.apache.org/jira/browse/CLOUDSTACK- > Project: CloudStack > Issue Type: Bug > Security Level: Public(Anyone can view this level - this is the > default.) >Reporter: Wei Zhou >Assignee: Wei Zhou >Priority: Major > > in the VR, all guest IPs of tiers >8 should be applied on eth1*, but they are > applied on eth1 actually. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
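For anyone checking a VR by hand for this symptom, a small diagnostic sketch (the interface names and addresses below are invented sample data; on a real VR you would pipe `ip -o -4 addr show` instead of the inlined sample):

```shell
# Each VPC tier's gateway IP should sit on its own ethN device in the VR.
# Inlined sample of `ip -o -4 addr` output so the sketch is self-contained:
sample='2: eth1    inet 10.1.1.1/24 brd 10.1.1.255 scope global eth1
10: eth9    inet 10.1.9.1/24 brd 10.1.9.255 scope global eth9'
# Print "address on device" pairs; with this bug, tiers beyond the 8th
# would all show up on eth1 instead of their own devices.
echo "$sample" | awk '{print $4, "on", $2}'
```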
[jira] [Commented] (CLOUDSTACK-9999) vpc tiers do not work if vpc has more than 8 tiers
[ https://issues.apache.org/jira/browse/CLOUDSTACK-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349337#comment-16349337 ] Andrija Panic commented on CLOUDSTACK-: --- Thx!
[jira] [Commented] (CLOUDSTACK-9999) vpc tiers do not work if vpc has more than 8 tiers
[ https://issues.apache.org/jira/browse/CLOUDSTACK-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349173#comment-16349173 ] Andrija Panic commented on CLOUDSTACK-: --- We are now hitting this bug - any ETA for fixing it? :)
[jira] [Commented] (CLOUDSTACK-6203) KVM live migration improvement
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16087479#comment-16087479 ] Andrija Panic commented on CLOUDSTACK-6203: --- Hi Marcus, perhaps it's a silly question, but inside agent.properties I don't see the "vm.migrate.autoconverge" line at all (https://github.com/apache/cloudstack/blob/master/agent/conf/agent.properties). Is this implemented, and in what ACS version? Thanks > KVM live migration improvement > -- > > Key: CLOUDSTACK-6203 > URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6203 > Project: CloudStack > Issue Type: Improvement > Security Level: Public (Anyone can view this level - this is the default.) > Components: KVM > Reporter: Marcus Sorensen > Assignee: Marcus Sorensen > Fix For: 4.4.0 > > > Run the KVM live migration in a thread so we can monitor it. This will allow us to see how long migrations are taking and do things like pause the VM if migration is stalling (per a user-defined time limit) to quickly complete the migration, or set the domain's max downtime during cut-over between machines (higher values make migration of busy VMs easier, lower values may make migration stall). In the future we can add the autoconvergence flag, which stalls VMs for a few ticks to allow the memory copy to catch up, but it will be a while before the libvirt shipped in distros supports it, so these tunables may be useful now.
[jira] [Commented] (CLOUDSTACK-6203) KVM live migration improvement
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16061847#comment-16061847 ] Andrija Panic commented on CLOUDSTACK-6203: --- Thanks a lot Marcus, we are on Ubuntu 14.04, so versions are up to date AFAIK, and I will check this autoconvergence. Thanks!
[jira] [Commented] (CLOUDSTACK-6203) KVM live migration improvement
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16060088#comment-16060088 ] Andrija Panic commented on CLOUDSTACK-6203: --- Marcus, or someone, can you please provide an example of sane values for a heavy DB server? We applied these tunables on 4.5 and 4.8, and afterwards migration of regular VMs (not I/O busy) failed completely and the VMs ended up in the Stopped state... I used 1 (10 sec) for vm.migrate.downtime, and set 1 for vm.migrate.pauseafter. Thanks
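For reference, the tunables being discussed live in the KVM agent's configuration. A hedged sketch follows - the property names appear in this thread, but the values are illustrative only and, as this very exchange shows, the unit interpretation is easy to get wrong, so check the agent.properties template shipped with your ACS version:

```properties
# /etc/cloudstack/agent/agent.properties (illustrative values, not recommendations)

# Target maximum guest downtime during the migration cut-over.
vm.migrate.downtime=1000

# Pause the VM to force completion if migration is still running after this long.
vm.migrate.pauseafter=240

# Autoconvergence flag discussed above; not present in every release's template.
# vm.migrate.autoconverge=true
```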
[jira] [Commented] (CLOUDSTACK-9226) Wrong number of sockets reported
[ https://issues.apache.org/jira/browse/CLOUDSTACK-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15632278#comment-15632278 ] Andrija Panic commented on CLOUDSTACK-9226: --- I'm not sure if this is the right place (I don't see another ticket for this) - on my host with E5-2650 v2 CPUs (native 2.6 GHz, Turbo Boost up to 3.4 GHz), CloudStack reports 32 cores at 3.4 GHz - and this effectively over-provisions CPU (ACS thinks we have 32x3.4 GHz, but in reality we have only 32x2.6 GHz). Any info on this? It is killing performance for some hosts that are near the CPU limit... (we use KVM here, ACS 4.8, but the same happens on 4.5) Thanks, Andrija > Wrong number of sockets reported > > > Key: CLOUDSTACK-9226 > URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9226 > Project: CloudStack > Issue Type: Bug > Security Level: Public (Anyone can view this level - this is the default.) > Components: KVM > Affects Versions: 4.6.0, 4.7.0 > Environment: KVM, CentOS 7 mgmt + HV > Reporter: Nux > Labels: dashboard, kvm, sockets, statistics > > Hello, > My current setup includes a dual CPU, quad core + HT, however in the ACS dashboard the "CPU sockets" field says only one. > This value is wrong and, as I understand it, it is taken from "virsh nodeinfo", which is known to give misleading information, as it reports stuff "per NUMA cell". > As per the man page of virsh, the number of real physical sockets should be calculated as "NUMA cell(s)" multiplied by "CPU socket(s)". > e.g. > virsh nodeinfo > CPU model: x86_64 > CPU(s): 16 > CPU frequency: 2393 MHz > CPU socket(s): 1 > Core(s) per socket: 4 > Thread(s) per core: 2 > NUMA cell(s): 2 > physical cpus = "CPU socket(s): 1" * "NUMA cell(s): 2" = 2 (correct) > Additional information can be taken from "virsh capabilities|grep socket_id" (XML output; the tags were stripped from this message).
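The calculation from the quoted description can be scripted. A sketch, with the `virsh nodeinfo` output inlined as sample data so it is self-contained (on a real host you would use `nodeinfo=$(virsh nodeinfo)` instead):

```shell
# Physical sockets = "CPU socket(s)" x "NUMA cell(s)", per the virsh man page.
nodeinfo='CPU model:           x86_64
CPU(s):              16
CPU socket(s):       1
Core(s) per socket:  4
Thread(s) per core:  2
NUMA cell(s):        2'
# Multiply the per-NUMA-cell socket count by the number of NUMA cells.
sockets=$(echo "$nodeinfo" | awk -F: '/CPU socket/ {s=$2} /NUMA cell/ {n=$2} END {print s*n}')
echo "physical sockets: $sockets"   # -> physical sockets: 2
```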
[jira] [Commented] (CLOUDSTACK-8353) Including windows guest performance improvement flags like hv_vapic and hv_spinlock in CCP
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15628582#comment-15628582 ] Andrija Panic commented on CLOUDSTACK-8353: --- I saw that part of the code earlier, but this again doesn't solve the issue - it references only "Windows 2008" as the guest OS, not "Windows PV" - and to be honest, probably no-one uses the "Windows 2008" OS type, because it's not virtio (we are on KVM) - and the code also (if I read it correctly, I'm not a developer) only references CentOS 6.5 and 7, not e.g. Ubuntu as the host, etc. We have actually developed a solution internally, with similar code (NOT yet tested), but we are not checking the host OS in the code (we assume all hosts support the Hyper-V enlightenment flags), so I'm not sure if this is convenient for the community (it needs to pass our internal tests first...). I will try to post an update here, or have one of our devs comment after we do some testing. Thx Rajani > Including windows guest performance improvement flags like hv_vapic and hv_spinlock in CCP > -- > > Key: CLOUDSTACK-8353 > URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8353 > Project: CloudStack > Issue Type: Bug > Security Level: Public (Anyone can view this level - this is the default.) > Components: KVM > Affects Versions: 4.5.0 > Reporter: Bharat Kumar > Assignee: Rajani Karuturi > > There is a bug in KVM that causes a BSOD for Windows 2008 R2 and 7 or earlier. A fix was added in libvirt 1.1.1. The fix requires enabling the "hv_relaxed" option for the affected VMs.
[jira] [Commented] (CLOUDSTACK-8353) Including windows guest performance improvement flags like hv_vapic and hv_spinlock in CCP
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15628558#comment-15628558 ] Andrija Panic commented on CLOUDSTACK-8353: --- Hi Rajani, no, it's not solving the issue - we have 1 DC upgraded to ACS 4.8.0.1 using "Windows PV" as the OS type - and I can see that none of the additional Hyper-V enlightenment flags are enabled. So this is not addressed (at least not properly) in the 4.7 release, as mentioned in a separate ticket. I will try to change the OS type to "Windows 2008" in ACS, to see if it is addressed for that OS type.
[jira] [Commented] (CLOUDSTACK-8353) Including windows guest performance improvement flags like hv_vapic and hv_spinlock in CCP
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15544754#comment-15544754 ] Andrija Panic commented on CLOUDSTACK-8353: --- We would like to solve this internally. The documentation sources I see are the following: https://bugzilla.redhat.com/show_bug.cgi?id=990824 and similarly https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1308341. Any opinions on whether we should add the needed flags for all OS types, or only for the "Windows 7, Windows 2008 R2, and Windows PV" OS types? We are going to follow up with our internal investigation, but any opinion is appreciated.
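Since the thread discusses which flags to enable, here is an illustrative libvirt domain XML fragment showing how these enlightenments are expressed - this is an assumption about the shape of a fix, not the actual ACS/CCP patch, and the spinlock retries value is only an example:

```xml
<features>
  <hyperv>
    <!-- hv_relaxed: the option the ticket says the libvirt 1.1.1-era fix requires -->
    <relaxed state='on'/>
    <!-- hv_vapic and hv_spinlock, as named in the ticket title -->
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
  </hyperv>
</features>
```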
[jira] [Commented] (CLOUDSTACK-8353) Including windows guest performance improvement flags like hv_vapic and hv_spinlock in CCP
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15542022#comment-15542022 ] Andrija Panic commented on CLOUDSTACK-8353: --- Guys, any progress on this? We are on 4.5.1, and 1 major customer is regularly affected by BSODs on Windows 2008 R2. Not sure what to do, or what a workaround would be - they have many, many servers, and it's impossible to quickly upgrade to Windows 2012 R2 (which works fine). Any ideas or comments, please?
[jira] [Commented] (CLOUDSTACK-7566) Many jobs getting stuck in pending state and cloud is unusable
[ https://issues.apache.org/jira/browse/CLOUDSTACK-7566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14611743#comment-14611743 ] Andrija Panic commented on CLOUDSTACK-7566: --- Rohit, is this maybe backported to 4.3.x? I have this issue on the 4.3.2 release... Thanks > Many jobs getting stuck in pending state and cloud is unusable > -- > > Key: CLOUDSTACK-7566 > URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7566 > Project: CloudStack > Issue Type: Bug > Security Level: Public (Anyone can view this level - this is the default.) > Components: Management Server > Affects Versions: 4.3.0 > Reporter: Min Chen > Assignee: Min Chen > Priority: Blocker > Fix For: 4.5.0 > > > Many jobs are getting stuck with errors like: > 2014-09-09 18:55:41,964 WARN [jobs.impl.AsyncJobMonitor] (Timer-1:ctx-1e7a8a7e) Task (job-355415) has been pending for 690 seconds > Even jobs that apparently succeed are getting the same error. The async job table is not updated with "complete" even though the job is completed.
[jira] [Commented] (CLOUDSTACK-4424) ceph:kvm:download volume created from snapshot failed with runtime exception
[ https://issues.apache.org/jira/browse/CLOUDSTACK-4424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14577016#comment-14577016 ] Andrija Panic commented on CLOUDSTACK-4424: --- I'm having the same issues on ACS 4.3.2 - can anyone confirm this is expected on 4.3? Thanks > ceph:kvm:download volume created from snapshot failed with runtime exception > - > > Key: CLOUDSTACK-4424 > URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4424 > Project: CloudStack > Issue Type: Bug > Security Level: Public (Anyone can view this level - this is the default.) > Affects Versions: 4.2.0 > Reporter: sadhu suresh > Assignee: Wido den Hollander > Fix For: 4.2.0 > > Attachments: management-server_1.rar > > > Steps: > 1. deploy a VM on a Ceph-enabled cluster > 2. perform a snapshot of the root volume > 3. create a volume from the snapshot > 4. once that succeeds, try to download the volume > Actual results: > download volume fails with "Failed to copy the volume from the source primary storage pool to secondary storage" > > Content of the management log: > ** > 2013-08-21 21:16:13,563 DEBUG [cloud.async.AsyncJobManagerImpl] > (Job-Executor-3:job-31 = [ a5680044-b669-4c7e-9ee7-961b5f855dd3 ]) Executing > org.apache.cloudstack.api.command.user.volume.ExtractVolumeCmd for job-31 = [ > a5680044-b669-4c7e-9ee7-961b5f855dd3 ] > 2013-08-21 21:16:14,130 DEBUG [storage.motion.AncientDataMotionStrategy] > (Job-Executor-3:job-31 = [ a5680044-b669-4c7e-9ee7-961b5f855dd3 ]) copyAsync > inspecting src type VOLUME copyAsync inspecting dest type VOLUME > 2013-08-21 21:16:14,307 DEBUG [agent.transport.Request] > (Job-Executor-3:job-31 = [ a5680044-b669-4c7e-9ee7-961b5f855dd3 ]) Seq > 4-462553323: Sending { Cmd , MgmtId: 7175246184473, via: 4, Ver: v1, Flags: > 100011, > 
[{"org.apache.cloudstack.storage.command.CopyCommand":{"srcTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"06445577-d626-4e49-9601-08005519ce8f","volumeType":"DATADISK","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"08d53ff3-8884-30a4-a1d1-ac621abcc688","id":3,"poolType":"RBD","host":"10.147.41.3","path":"cloudkvm","port":6789}},"name":"volfromsnapshot1","size":8598335488,"path":"47a4c967-25d0-4d4b-8c75-686caa54e5d3","volumeId":11,"accountId":2,"format":"RAW","id":11,"hypervisorType":"KVM"}},"destTO":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"06445577-d626-4e49-9601-08005519ce8f","volumeType":"DATADISK","dataStore":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.147.28.7/export/home/sadhu/asf/kvmsec","_role":"Image"}},"name":"volfromsnapshot1","size":8598335488,"path":"volumes/2/11","volumeId":11,"accountId":2,"format":"RAW","id":11,"hypervisorType":"KVM"}},"executeInSequence":false,"wait":10800}}] > } > 2013-08-21 21:16:14,841 DEBUG [agent.transport.Request] > (AgentManager-Handler-11:null) Seq 4-462553323: Processing: { Ans: , MgmtId: > 7175246184473, via: 4, Ver: v1, Flags: 10, > [{"org.apache.cloudstack.storage.command.CopyCmdAnswer":{"result":false,"details":"com.cloud.utils.exception.CloudRuntimeException: > Failed to copy cloudkvm/47a4c967-25d0-4d4b-8c75-686caa54e5d3 to > 8c9eaf72-20f1-486a-8cd2-da1b18fecabd.qcow2","wait":0}}] } > 2013-08-21 21:16:14,841 DEBUG [agent.transport.Request] > (Job-Executor-3:job-31 = [ a5680044-b669-4c7e-9ee7-961b5f855dd3 ]) Seq > 4-462553323: Received: { Ans: , MgmtId: 7175246184473, via: 4, Ver: v1, > Flags: 10, { CopyCmdAnswer } } > 2013-08-21 21:16:14,851 WARN > [storage.datastore.ObjectInDataStoreManagerImpl] (Job-Executor-3:job-31 = [ > a5680044-b669-4c7e-9ee7-961b5f855dd3 ]) Unsupported data object (VOLUME, > org.apache.cloudstack.storage.datastore.PrimaryDataStoreImpl@71fc8be7), no > need to delete from object in store ref table > 2013-08-21 21:16:14,950 
ERROR [cloud.async.AsyncJobManagerImpl] > (Job-Executor-3:job-31 = [ a5680044-b669-4c7e-9ee7-961b5f855dd3 ]) Unexpected > exception while executing > org.apache.cloudstack.api.command.user.volume.ExtractVolumeCmd > com.cloud.utils.exception.CloudRuntimeException: Failed to copy the volume > from the source primary storage pool to secondary storage. > at > com.cloud.storage.VolumeManagerImpl.extractVolume(VolumeManagerImpl.java:2832) > at > com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125) > at > org.apache.cloudstack.api.command.user.volume.ExtractVolumeCmd.execute(ExtractVolumeCmd.java:130) > at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:158) > at > com.cloud.async.AsyncJobManagerImpl$1.run(AsyncJobManagerImpl.java:531) > at > java.util.con
[jira] [Commented] (CLOUDSTACK-8289) ec attribute not found when creating new region
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564803#comment-14564803 ] Andrija Panic commented on CLOUDSTACK-8289: --- Thx Daan. Is adding a record to the cloud.region table safe? I tested it and it worked fine... > ec attribute not found when creating new region > --- > > Key: CLOUDSTACK-8289 > URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8289 > Project: CloudStack > Issue Type: Bug > Security Level: Public (Anyone can view this level - this is the default.) > Components: Test, UI > Reporter: Daan Hoogland > > This is related to a smoke test failing. When creating a new region, filling in id, name and endpoint, an exception occurs. > ERROR [c.c.a.ApiServer] (qtp1749052383-1263:ctx-44edf7d8 ctx-fd5fd6e1) > unhandled exception executing api command: [Ljava.lang.String;@2f24434 > com.cloud.utils.exception.CloudRuntimeException: Problem with getting the ec > attribute > at com.cloud.utils.db.GenericDaoBase.persist(GenericDaoBase.java:1403) > at sun.reflect.GeneratedMethodAccessor74.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317) > at > org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183) > at > org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150) > at > com.cloud.utils.db.TransactionContextInterceptor.invoke(TransactionContextInterceptor.java:34) > at > org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161) > at > org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91) > at > 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) > at > org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) > at com.sun.proxy.$Proxy131.persist(Unknown Source) > at > org.apache.cloudstack.region.RegionManagerImpl.addRegion(RegionManagerImpl.java:113) > at > org.apache.cloudstack.region.RegionServiceImpl.addRegion(RegionServiceImpl.java:87) > at > org.apache.cloudstack.api.command.admin.region.AddRegionCmd.execute(AddRegionCmd.java:89) > at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:141) > at com.cloud.api.ApiServer.queueCommand(ApiServer.java:699) > at com.cloud.api.ApiServer.handleRequest(ApiServer.java:524) > at com.cloud.api.ApiServlet.processRequestInContext(ApiServlet.java:283) > at com.cloud.api.ApiServlet$1.run(ApiServlet.java:127) > at > org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56) > at > org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103) > at > org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53) > at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:124) > at com.cloud.api.ApiServlet.doGet(ApiServlet.java:86) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:687) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) > at > org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:800) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:587) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1125) > at > 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1059) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) > at org.ecl
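The workaround Andrija asks about (inserting the region row directly) might look like the sketch below. The column names and values are assumptions based on this thread, not verified against the 4.5.1 schema - check `DESC region;` in the cloud database before trying anything like this:

```sql
-- Hypothetical sketch of the manual workaround discussed above.
USE cloud;
INSERT INTO region (id, name, end_point)
VALUES (2, 'Region2', 'http://region2-mgmt:8080/client');
```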
[jira] [Commented] (CLOUDSTACK-8289) ec attribute not found when creating new region
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14564538#comment-14564538 ] Andrija Panic commented on CLOUDSTACK-8289: --- Anyone have a workaround for this? Still present on 4.5.1
[jira] [Updated] (CLOUDSTACK-8451) Static Nat show wrong remote IP in VM behind VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrija Panic updated CLOUDSTACK-8451:
--
Description:
When configuring Port Forwarding or Static NAT on a VPC VR and connecting from the outside world to the VPC IP address, traffic gets forwarded to the VM behind the VPC. But if you run "netstat -antup | grep $PORT" (where the port is e.g. the ssh port), the result will show that remote connections come from the Source NAT IP of the VR, instead of the real remote client IP.

Example:
private VM: 192.168.10.10
Source NAT IP on VPC VR: 1.1.1.1
Additional Public IP on VPC VR: 1.1.1.2
Remote client public IP: 4.4.4.4 (external to VPC)

Test:
from 4.4.4.4, SSH to 1.1.1.2 port 22 (or any other port)
inside 192.168.10.10, run "netstat -antup | grep 22"
Result: the remote IP shown is 1.1.1.1 instead of 4.4.4.4

We found a solution (somewhat tested, and not sure if this would break anything...). The problem is in the VR's iptables NAT table, POSTROUTING chain, rule:

SNAT all -- * eth2 0.0.0.0/0 0.0.0.0/0 to:1.1.1.1

where 1.1.1.1 is the public IP of the VR and eth2 is the public interface of the VR. When this rule is deleted, NAT works fine.

This is a serious issue for anyone using VPC, since there is no way to see the real remote client IP, and thus no firewall functionality inside the VM; SIP doesn't work, web server logs are useless, etc. I also experienced this problem with 4.3.x releases.

EDIT: this happens when using vlan://untagged for the Public network - the eth2 device gets passed to the VR, and the offending iptables rule is created. When using a tagged vlan for the Public network, everything is fine.

> Static Nat show wrong remote IP in VM behind VPC
> --
>
> Key: CLOUDSTACK-8451
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8451
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public (Anyone can view this level - this is the default.)
> Components: KVM, Network Controller, Virtual Router
> Affects Versions: 4.4.3, 4.3.2, 4.5.1
> Environment: Ubuntu 14.04, ACS 4.5.1-SNAPSHOT
> Reporter: Andrija Panic
> Assignee: Rohit Yadav
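The diagnosis above can be scripted. A minimal sketch, assuming rules in iptables-save format and the interface name and addresses from the example above (the sample `rules` string is made up for illustration):

```shell
# The offending rule is the catch-all: it has an out-interface match but no
# source-subnet match, so it also rewrites the source of DNAT'ed inbound
# traffic. Sample rules mirroring the example above, iptables-save format.
rules='-A POSTROUTING -s 192.168.10.0/24 -o eth2 -j SNAT --to-source 1.1.1.1
-A POSTROUTING -o eth2 -j SNAT --to-source 1.1.1.1'

# Keep rules bound to the public interface, then drop the ones that do
# restrict the source subnet -- what remains is the catch-all rule.
offending=$(printf '%s\n' "$rules" | grep -- '-o eth2' | grep -v -- '-s ')
printf '%s\n' "$offending"

# Workaround described in the report (run on the VR itself):
#   iptables -t nat -D POSTROUTING -o eth2 -j SNAT --to-source 1.1.1.1
```

On a real VR you would feed `iptables-save -t nat` into the same filter instead of the hardcoded `rules` string.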
[jira] [Comment Edited] (CLOUDSTACK-8451) Static Nat show wrong remote IP in VM behind VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560646#comment-14560646 ] Andrija Panic edited comment on CLOUDSTACK-8451 at 5/27/15 8:46 AM:
Hi, I confirmed I don't have problems when deploying the "public" network with a tagged vlan instead of untagged. Meaning, I don't have eth2 referenced in the iptables rules, and the remote IP shows correctly. So this is some regression, due to changing the untagged URI from NULL to vlan://untagged or something - that happened between the 4.2 and 4.3 releases, if I'm not mistaken. Anyone who can reproduce, please - just use an untagged vlan for the Public range, and the problem will arise.
> Static Nat show wrong remote IP in VM behind VPC
> --
>
> Key: CLOUDSTACK-8451
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8451
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public (Anyone can view this level - this is the default.)
> Components: KVM, Network Controller, Virtual Router
> Affects Versions: 4.4.3, 4.3.2, 4.5.1
> Environment: Ubuntu 14.04, ACS 4.5.1-SNAPSHOT
> Reporter: Andrija Panic
> Assignee: Rohit Yadav
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CLOUDSTACK-8451) Static Nat show wrong remote IP in VM behind VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560646#comment-14560646 ] Andrija Panic edited comment on CLOUDSTACK-8451 at 5/27/15 8:43 AM:
Hi, I confirmed I don't have problems when deploying the "public" network with a tagged vlan instead of untagged. Meaning, I don't have eth2 referenced in the iptables rules, and the remote IP shows correctly. So this is some regression, due to changing the untagged URI from NULL to vlan://untagged or something - that happened between the 4.2 and 4.3 releases, if I'm not mistaken. Anyone who can reproduce, please - just use an untagged vlan for the Public range, and the problem will arise.
> Static Nat show wrong remote IP in VM behind VPC
> --
>
> Key: CLOUDSTACK-8451
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8451
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public (Anyone can view this level - this is the default.)
> Components: KVM, Network Controller, Virtual Router
> Affects Versions: 4.4.3, 4.3.2, 4.5.1
> Environment: Ubuntu 14.04, ACS 4.5.1-SNAPSHOT
> Reporter: Andrija Panic
> Assignee: Rohit Yadav
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CLOUDSTACK-8451) Static Nat show wrong remote IP in VM behind VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14560646#comment-14560646 ] Andrija Panic commented on CLOUDSTACK-8451:
---
Hi, I confirmed I don't have problems when deploying the "public" network with a tagged vlan instead of untagged. So this is some regression, due to changing the untagged URI from NULL to vlan://untagged or something - that happened between the 4.2 and 4.3 releases, if I'm not mistaken. Anyone who can reproduce, please - just use an untagged vlan for the Public range, and the problem will arise.
> Static Nat show wrong remote IP in VM behind VPC
> --
>
> Key: CLOUDSTACK-8451
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8451
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public (Anyone can view this level - this is the default.)
> Components: KVM, Network Controller, Virtual Router
> Affects Versions: 4.4.3, 4.3.2, 4.5.1
> Environment: Ubuntu 14.04, ACS 4.5.1-SNAPSHOT
> Reporter: Andrija Panic
> Assignee: Rohit Yadav
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CLOUDSTACK-8451) Static Nat show wrong remote IP in VM behind VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14559300#comment-14559300 ] Andrija Panic commented on CLOUDSTACK-8451:
---
*From the agent logs... (setting source NAT for eth2)*
2015-05-26 17:30:24,873 DEBUG [resource.virtualnetwork.VirtualRoutingResource] (agentRequest-Handler-3:null) Executing: /usr/share/cloudstack-common/scripts/network/domr/router_proxy.sh vpc_snat.sh 169.254.2.49 -A -l XXX.YYY.147.26 -c eth2
2015-05-26 17:30:25,001 DEBUG [resource.virtualnetwork.VirtualRoutingResource] (agentRequest-Handler-3:null) Execution is successful.
2015-05-26 17:30:25,001 DEBUG [resource.virtualnetwork.VirtualRoutingResource] (agentRequest-Handler-3:null) iptables: Bad rule (does a matching rule exist in that chain?). iptables: No chain/target/match by that name.
*From the management logs ("grep -i eth2" didn't give any explicit commands sent from the management server side, nor id=2 or similar)* (the mgmt log looks fine to me...)
2015-05-26 17:27:30,862 DEBUG [c.c.n.r.VpcVirtualNetworkApplianceManagerImpl] (Job-Executor-19:ctx-d99dacc4 ctx-bb38451d) Removing nic NicProfile[4903-2916-null-XXX.YYY.147.26-vlan://untagged of type Public from the nics passed on vm start. The nic will be plugged later
2015-05-26 17:27:30,866 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Job-Executor-19:ctx-d99dacc4 ctx-bb38451d) Boot Args for VM[DomainRouter|r-2916-VM]: vpccidr=10.0.0.0/8 domain=cs2cloud.internal dns1=8.8.8.8 dns2= template=domP name=r-2916-VM eth0ip=169.254.2.49 eth0mask=255.255.0.0 type=vpcrouter disable_rp_filter=true
2015-05-26 17:27:30,947 DEBUG [c.c.n.r.VpcVirtualNetworkApplianceManagerImpl] (Job-Executor-19:ctx-d99dacc4 ctx-bb38451d) Found 0 static routes to apply as a part of vpc route VM[DomainRouter|r-2916-VM] start
2015-05-26 17:27:30,968 DEBUG [c.c.a.t.Request] (Job-Executor-19:ctx-d99dacc4 ctx-bb38451d) Seq 793-1613229855: Sending { Cmd , MgmtId: 161344838950, via: 793(cs12.domain.net), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.StartCommand":{"vm":{"id":2916,"name":"r-2916-VM","type":"DomainRouter","cpus":1,"minSpeed":166,"maxSpeed":1000,"minRam":268435456,"maxRam":268435456,"arch":"x86_64","os":"Debian GNU/Linux 7(64-bit)","bootArgs":" vpccidr=10.0.0.0/8 domain=cs2cloud.internal dns1=8.8.8.8 dns2= template=domP name=r-2916-VM eth0ip=169.254.2.49 eth0mask=255.255.0.0 type=vpcrouter
disable_rp_filter=true","rebootOnCrash":false,"enableHA":true,"limitCpuUse":false,"enableDynamicallyScaleVm":false,"vncPassword":"ca8af10f1fd5804c","params":{"memoryOvercommitRatio":"1.0","cpuOvercommitRatio":"6.0"},"uuid":"518baeec-df0c-413e-9f26-07b7fb823601","disks":[{"data":{"org.apache.cloudstack.storage.to.VolumeObjectTO":{"uuid":"59eab08e-4814-4e09-b1ee-34b357a430b2","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"5b93422e-1a66-353d-88a8-2203f79b1dc6","id":209,"poolType":"RBD","host":"cephmon.domain.net","path":"cloudstack","port":6789,"url":"RBD://cephmon.domain.net/cloudstack/?ROLE=Primary&STOREUUID=5b93422e-1a66-353d-88a8-2203f79b1dc6"}},"name":"ROOT-2916","size":262144,"path":"59eab08e-4814-4e09-b1ee-34b357a430b2","volumeId":8416,"vmName":"r-2916-VM","accountId":2,"format":"RAW","id":8416,"deviceId":0,"hypervisorType":"KVM"}},"diskSeq":0,"path":"59eab08e-4814-4e09-b1ee-34b357a430b2","type":"ROOT","_details":{"managed":"false","storagePort":"6789","storageHost":"cephmon.domain.net","volumeSize":"262144"}}],"nics":[{"deviceId":0,"networkRateMbps":-1,"defaultNic":false,"uuid":"2a657fb8-c645-47c2-a335-e2d2c7da030c","ip":"169.254.2.49","netmask":"255.255.0.0","gateway":"169.254.0.1","mac":"0e:00:a9:fe:02:31","broadcastType":"LinkLocal","type":"Control","isSecurityGroupEnabled":false}]},"hostIp":"10.xxx.yyy.120","executeInSequence":false,"wait":0}},{"com.cloud.agent.api.check.CheckSshCommand":{"ip":"169.254.2.49","port":3922,"interval":6,"retries":100,"name":"r-2916-VM","wait":0}},{"com.cloud.agent.api.GetDomRVersionCmd":{"accessDetails":{"router.ip":"169.254.2.49","router.name":"r-2916-VM"},"wait":0}},{"com.cloud.agent.api.PlugNicCommand":{"nic":{"deviceId":1,"networkRateMbps":9,"defaultNic":true,"uuid":"de8f637c-195d-4455-9035-81f8d4f74e09","ip":"XXX.YYY.147.26","netmask":"255.255.255.128","gateway":"XXX.YYY.147.1","mac":"06:f3:72:00:01:b2","broadcastType":"Vlan","type":"Public","broadcastUri":"vlan://untagged",
"isolationUri":"vlan://untagged","isSecurityGroupEnabled":false,"name":"breth1-500"},"instanceName":"r-2916-VM","vmType":"DomainRouter","wait":0}},{"com.cloud.agent.api.routing.IpAssocVpcCommand":{"ipAddresses":[{"accountId":2,"publicIp":"XXX.YYY.147.26","sourceNat":true,"add":true,"oneToOneNat":false,"firstIP":false,"broadcastUri":"vlan://untagged","vlanGateway":"XXX.YYY.147.1","vlanNetmask":"255.255.255.128","vifMacAddress":"06:f3:72:00:01:b2","networkRate":9,"trafficType":"Public"
[jira] [Commented] (CLOUDSTACK-8451) Static Nat show wrong remote IP in VM behind VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558971#comment-14558971 ] Andrija Panic commented on CLOUDSTACK-8451:
---
I just created an empty VPC (no networks attached, so only the VR got created), and yes, the offending rule referencing eth2 (although there is NO eth2 present at the moment) is there:

root@r-31-VM:~# iptables -L -nv -t nat
Chain POSTROUTING (policy ACCEPT 1 packets, 240 bytes)
 pkts bytes target prot opt in  out   source      destination
    8   572 SNAT   all  --  *   eth1  0.0.0.0/0   0.0.0.0/0   to:XXX.X39.230.174
**  0     0 SNAT   all  --  *   eth2  0.0.0.0/0   0.0.0.0/0   to:XXX.X39.230.174**

root@r-31-VM:~# ifconfig
eth0  Link encap:Ethernet HWaddr 0e:00:a9:fe:00:af
      inet addr:169.254.0.175 Bcast:169.254.255.255 Mask:255.255.0.0 ...
eth1  Link encap:Ethernet HWaddr 06:f6:68:00:00:71
      inet addr:185.39.230.174 Bcast:185.39.230.191 Mask:255.255.255.224 ...
lo    Link encap:Local Loopback
      inet addr:127.0.0.1 Mask:255.0.0.0 ...

BTW, from /var/log/messages...
May 26 10:34:35 r-31-VM cloud: vpc_ipassoc.sh:Adding ip XXX.X39.230.174 on interface eth1
May 26 10:34:35 r-31-VM cloud: vpc_ipassoc.sh:Add routing XXX.X39.230.174 on interface eth1
May 26 10:34:35 r-31-VM cloud: vpc_privateGateway.sh:Added SourceNAT XXX.X39.230.174 on interface eth1
May 26 10:34:35 r-31-VM cloud: vpc_snat.sh:Added SourceNAT XXX.X39.230.174 on interface eth2

So for some reason, eth2 (which is not even present on the system) gets SNAT provisioned... Will try to fix this in systemvm/patches/debian/config/opt/cloud/bin/vpc_snat.sh if possible
> Static Nat show wrong remote IP in VM behind VPC
> --
>
> Key: CLOUDSTACK-8451
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8451
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public (Anyone can view this level - this is the default.)
> Components: KVM, Network Controller, Virtual Router
> Affects Versions: 4.4.3, 4.3.2, 4.5.1
> Environment: Ubuntu 14.04, ACS 4.5.1-SNAPSHOT
> Reporter: Andrija Panic
> Assignee: Rohit Yadav
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
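One way a vpc_snat.sh-style script could avoid provisioning SNAT for an absent device is to test for it first. This is a rough sketch, not the actual script: the function names, `ethDev` variable, and messages are made up for illustration.

```shell
#!/bin/sh
# Hypothetical guard for a vpc_snat.sh-style script: only add the SNAT
# rule when the target device actually exists on the router.

device_exists() {
  # Linux exposes every configured interface under /sys/class/net/
  [ -d "/sys/class/net/$1" ]
}

add_snat() {
  ethDev="$1"
  pubIp="$2"
  if device_exists "$ethDev"; then
    # A real script would run something like:
    #   iptables -t nat -A POSTROUTING -o "$ethDev" -j SNAT --to-source "$pubIp"
    echo "added SourceNAT $pubIp on interface $ethDev"
  else
    echo "skipping SourceNAT $pubIp: device $ethDev not present"
  fi
}

# With a device that does not exist, the rule is skipped instead of being
# provisioned blindly (the failure mode described in this comment).
add_snat "no_such_dev0" "1.1.1.1"
```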
[jira] [Commented] (CLOUDSTACK-3265) [UI] [Health Check for NS LB] Failure to create a lb health check policy returns an API response in the UI
[ https://issues.apache.org/jira/browse/CLOUDSTACK-3265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558953#comment-14558953 ] Andrija Panic commented on CLOUDSTACK-3265:
---
I'm afraid this is still present in 4.4.3 - is this also solved in 4.5.1?
> [UI] [Health Check for NS LB] Failure to create a lb health check policy returns an API response in the UI
> --
>
> Key: CLOUDSTACK-3265
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3265
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public (Anyone can view this level - this is the default.)
> Components: UI
> Affects Versions: 4.2.0
> Reporter: Abhinav Roy
> Assignee: Jessica Wang
> Priority: Minor
> Fix For: 4.4.4
>
> Attachments: lb healthchecks.jpg
>
> Steps:
> 1. Create a LB rule with the VR as the LB provider.
> 2. Attach VMs to the LB rule.
> 3. Create a health check policy for that rule.
> Expected behaviour:
> 1. Since the VR is not a supported provider for LB health checks, the policy creation should fail gracefully.
> Observed behaviour:
> 1. The policy creation fails as expected, but the API response is returned in the UI, which should not happen. A proper failure message should be displayed instead.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CLOUDSTACK-8407) Presharedkey is not created during the creation of remote access vpn
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558895#comment-14558895 ] Andrija Panic commented on CLOUDSTACK-8407:
---
Same issue here, on 4.4.3... Any proposed fix?
> Presharedkey is not created during the creation of remote access vpn
> ---
>
> Key: CLOUDSTACK-8407
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8407
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public (Anyone can view this level - this is the default.)
> Components: Automation, Management Server
> Affects Versions: 4.4.3, 4.5.1
> Reporter: Nicolas Grangier
> Priority: Minor
>
> The preshared key does not appear / get created during remote access VPN creation. Confirmed with cloudmonkey that the value is not present.
> Steps to Reproduce:
> 1. Go to the network tab
> 2. Configure the VPC
> 3. Go to the router section, public IP addresses
> 4. Click on the IP that is SourceNat
> 5. Activate the remote access VPN
> 6. It says:
> Your Remote Access VPN is currently enabled and can be accessed via the IP x.x.x.x.
> Your IPSec pre-shared key is
> undefined
> Actual result:
> The Remote Access VPN is created in the database, but it didn't create the preshared key; I also used cloudmonkey to list remoteaccessvpns, and the field "presharedkey =" is not even created.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CLOUDSTACK-8510) Last chosen VR Service Offering not taken into consideration while restarting VPC
Andrija Panic created CLOUDSTACK-8510:
-
Summary: Last chosen VR Service Offering not taken into consideration while restarting VPC
Key: CLOUDSTACK-8510
URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8510
Project: CloudStack
Issue Type: Improvement
Security Level: Public (Anyone can view this level - this is the default.)
Components: Network Controller, Virtual Router
Affects Versions: 4.4.3
Environment: NA
Reporter: Andrija Panic

We can have different/multiple System Offerings for the VR, to e.g. provide different VR/NIC speeds and different CPU/RAM as an option for the customer to upgrade from the defaults. If we change the VR System Offering (while the VR is stopped), it gets changed - fine. BUT when we/the user restart the VPC, a new VR gets created for the customer, but with the default Service Offering selected.
Improvement: when restarting a VPC, first check the existing Service Offering of the existing VR, and then destroy/provision the new VR with the same Service Offering.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CLOUDSTACK-7907) UI heavily broken
[ https://issues.apache.org/jira/browse/CLOUDSTACK-7907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14555851#comment-14555851 ] Andrija Panic commented on CLOUDSTACK-7907:
---
Tested here also, confirmed this fixes the breadcrumb issue :) Thx a lot, Rafael :)
> UI heavily broken
> -
>
> Key: CLOUDSTACK-7907
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7907
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public (Anyone can view this level - this is the default.)
> Components: API
> Affects Versions: 4.3.0, 4.4.1
> Environment: not relevant
> Reporter: Andrija Panic
> Priority: Critical
>
> 1) (A serious one, that we encounter pretty often):
> Issue: I start the new VM deployment wizard, choose a template etc., and at the end, when I click Finish and the job should be sent to the MGMT server - simply nothing happens. The ajax doesn't get executed at all, no little circle spinning etc., no logs on the mgmt server; the ajax simply doesn't get executed... The same behaviour sometimes happens when I click "Configure" on the VPC.
> I confirmed the behaviour in ACS 4.3.0 and I'm still checking in 4.4.1, but I doubt anything has changed.
> OR
> 2) (not a big issue, however very annoying):
> I filter instances by some account/domain, then click on some instance (to view its properties or whatever), then in the breadcrumb I click back on "Instances", and instead of being shown the page with all the filtered instances, I get back to the home page of ACS...
> It doesn't really happen always, but randomly, with different browsers, after clearing all cache etc...
> The issue here is that nothing gets logged to the MGMT log at all...
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CLOUDSTACK-7907) UI heavily broken
[ https://issues.apache.org/jira/browse/CLOUDSTACK-7907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554631#comment-14554631 ] Andrija Panic commented on CLOUDSTACK-7907:
---
Thx a lot Ilya for clarification !
> UI heavily broken
> -
>
> Key: CLOUDSTACK-7907
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7907
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public (Anyone can view this level - this is the default.)
> Components: API
> Affects Versions: 4.3.0, 4.4.1
> Environment: not relevant
> Reporter: Andrija Panic
> Priority: Critical
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CLOUDSTACK-7907) UI heavily broken
[ https://issues.apache.org/jira/browse/CLOUDSTACK-7907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14554350#comment-14554350 ] Andrija Panic commented on CLOUDSTACK-7907:
---
Sorry for the late reply, but tomcat 6.0.43 doesn't solve our issue:
- wget http://archive.apache.org/dist/tomcat/tomcat-6/v6.0.43/bin/apache-tomcat-6.0.43.tar.gz
- tar --directory /opt -xzf apache-tomcat-6.0.43.tar.gz
- mv /usr/share/tomcat6/lib /usr/share/tomcat6/lib.old
- mv /usr/share/tomcat6/bin /usr/share/tomcat6/bin.old
- ln -s /opt/apache-tomcat-6.0.43/lib /usr/share/tomcat6/lib
- ln -s /opt/apache-tomcat-6.0.43/bin /usr/share/tomcat6/bin
- service cloudstack-management restart
Still, from time to time, when clicking on "Instances" in the breadcrumb, I get back to the homepage of ACS... This time I'm testing on Ubuntu 14.04 as the mgmt server...
> UI heavily broken
> -
>
> Key: CLOUDSTACK-7907
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7907
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public (Anyone can view this level - this is the default.)
> Components: API
> Affects Versions: 4.3.0, 4.4.1
> Environment: not relevant
> Reporter: Andrija Panic
> Priority: Critical
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CLOUDSTACK-8451) Static Nat show wrong remote IP in VM behind VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550043#comment-14550043 ] Andrija Panic commented on CLOUDSTACK-8451:
---
Hi Rohit. Actually, we tested it, and it seems that absolutely everything works fine. The only thing we didn't test so far with this change is site-to-site VPN, but I expect that will be fine as well.
> Static Nat show wrong remote IP in VM behind VPC
> --
>
> Key: CLOUDSTACK-8451
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8451
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public (Anyone can view this level - this is the default.)
> Components: KVM, Network Controller, Virtual Router
> Affects Versions: 4.4.3, 4.3.2, 4.5.1
> Environment: Ubuntu 14.04, ACS 4.5.1-SNAPSHOT
> Reporter: Andrija Panic
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CLOUDSTACK-8469) wrong global config mount.parent - /var/lib/cloud/mnt
Andrija Panic created CLOUDSTACK-8469: - Summary: wrong global config mount.parent - /var/lib/cloud/mnt Key: CLOUDSTACK-8469 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8469 Project: CloudStack Issue Type: Bug Security Level: Public (Anyone can view this level - this is the default.) Components: Management Server Affects Versions: 4.4.3 Environment: NA Reporter: Andrija Panic Hi, the default Global Config option mount.parent has a value of "/var/lib/cloud/mnt". This folder doesn't exist, and it seems it is used, among other things, when you use Swift as Secondary Storage - so problems arise... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CLOUDSTACK-8451) Static Nat show wrong remote IP in VM behind VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539919#comment-14539919 ] Andrija Panic edited comment on CLOUDSTACK-8451 at 5/12/15 2:45 PM: We found the problem to be in the following rules (the ones marked with asterisks). When we remove this rule by hand, the remote IP shows up normally - it seems that besides DNAT (Static NAT) we also do SNAT (source IP replacement) for some reason, and that is the rule marked with asterisks below:

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out  source         destination
    0     0 SNAT   all  --  *  eth1 10.10.10.10    0.0.0.0/0  to:XXX.39.228.156
  134  9312 SNAT   all  --  *  eth1 0.0.0.0/0      0.0.0.0/0  to:XXX.39.228.155  **
    7   705 SNAT   all  --  *  eth2 0.0.0.0/0      0.0.0.0/0  to:XXX.39.228.155  **
    0     0 SNAT   all  --  *  eth2 10.10.10.0/24  0.0.0.0/0  to:10.10.10.1

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
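A saved listing like the one in the comment can be screened mechanically. The sketch below uses awk to flag SNAT rules that match any source address leaving a given interface (eth2 here); the sample lines are the table from the comment, and the field positions assume the standard `iptables -L -n -v` column order (pkts, bytes, target, prot, opt, in, out, source, destination).

```shell
# Flag SNAT rules that source-NAT *all* traffic leaving eth2 -
# these are the rules that rewrite the client IP of forwarded connections.
out=$(awk '$3 == "SNAT" && $7 == "eth2" && $8 == "0.0.0.0/0"' <<'EOF'
    0     0 SNAT   all  --  *  eth1 10.10.10.10    0.0.0.0/0  to:XXX.39.228.156
  134  9312 SNAT   all  --  *  eth1 0.0.0.0/0      0.0.0.0/0  to:XXX.39.228.155
    7   705 SNAT   all  --  *  eth2 0.0.0.0/0      0.0.0.0/0  to:XXX.39.228.155
    0     0 SNAT   all  --  *  eth2 10.10.10.0/24  0.0.0.0/0  to:10.10.10.1
EOF
)
echo "$out"
```

Against the sample, only the catch-all eth2 rule is printed; on a live router the same filter can be fed from `iptables -t nat -L POSTROUTING -n -v`.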
[jira] [Comment Edited] (CLOUDSTACK-8451) Static Nat show wrong remote IP in VM behind VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539907#comment-14539907 ] Andrija Panic edited comment on CLOUDSTACK-8451 at 5/12/15 2:45 PM: http://pastebin.com/ihjiDZ9h - iptables-save from inside the VR, on pastebin - this is a brand new VPC (1 network, 1 VM in the network) on the 4.4.3 release. http://snag.gy/V949g.jpg - ACS setup and "proof": XXX.39.228.155 - main VPC IP; XXX.39.228.156 - additional IP, configured with Static NAT to private VM 10.10.10.10. Connected to XXX.39.228.156:22 and ran "netstat -antup | grep 22" - the remote connection appears to come from XXX.39.228.155, the main VPC IP. This is ACS 4.4.3, Advanced Zone, KVM. VR interfaces:

eth0 inet addr:169.254.3.236 Bcast:169.254.255.255 Mask:255.255.0.0
eth1 inet addr:XXX.39.228.155 Bcast:185.39.228.191 Mask:255.255.255.192
eth2 Link encap:Ethernet HWaddr 02:00:14:5e:00:02
     inet addr:10.10.10.1 Bcast:10.10.10.255 Mask:255.255.255.0

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CLOUDSTACK-8451) Static Nat show wrong remote IP in VM behind VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539907#comment-14539907 ] Andrija Panic edited comment on CLOUDSTACK-8451 at 5/12/15 2:44 PM: http://pastebin.com/ihjiDZ9h - iptables-save from inside the VR, on pastebin - this is a brand new VPC (1 network, 1 VM in the network) on the 4.4.3 release. http://snag.gy/V949g.jpg - ACS setup and "proof": XXX.39.228.155 - main VPC IP; XXX.39.228.156 - additional IP, configured with Static NAT to private VM 10.10.10.10. Connected to XXX.39.228.156:22 and ran "netstat -antup | grep 22" - the remote connection appears to come from XXX.39.228.155, the main VPC IP. This is ACS 4.4.3, Advanced Zone, KVM. VR interfaces:

eth0 inet addr:169.254.3.236 Bcast:169.254.255.255 Mask:255.255.0.0
eth1 inet addr:XXX.39.228.155 Bcast:XXX.39.228.191 Mask:255.255.255.192
eth2 inet addr:10.10.10.1 Bcast:10.10.10.255 Mask:255.255.255.0

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CLOUDSTACK-8451) Static Nat show wrong remote IP in VM behind VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539907#comment-14539907 ] Andrija Panic edited comment on CLOUDSTACK-8451 at 5/12/15 2:43 PM: http://pastebin.com/ihjiDZ9h - iptables-save from inside the VR, on pastebin - this is a brand new VPC (1 network, 1 VM in the network) on the 4.4.3 release. http://snag.gy/V949g.jpg - ACS setup and "proof": XXX.39.228.155 - main VPC IP; XXX.39.228.156 - additional IP, configured with Static NAT to private VM 10.10.10.10. Connected to XXX.39.228.156:22 and ran "netstat -antup | grep 22" - the remote connection appears to come from XXX.39.228.155, the main VPC IP. This is ACS 4.4.3, Advanced Zone, KVM.

eth0 inet addr:169.254.3.236 Bcast:169.254.255.255 Mask:255.255.0.0
eth1 inet addr:XXX.39.228.155 Bcast:185.39.228.191 Mask:255.255.255.192
eth2 Link encap:Ethernet HWaddr 02:00:14:5e:00:02
     inet addr:10.10.10.1 Bcast:10.10.10.255 Mask:255.255.255.0

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CLOUDSTACK-8451) Static Nat show wrong remote IP in VM behind VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539919#comment-14539919 ] Andrija Panic commented on CLOUDSTACK-8451: --- We found the problem to be in the following rules (the ones marked with asterisks). When we remove this rule by hand, the remote IP shows up normally - it seems that besides DNAT (Static NAT) we also do SNAT (source IP replacement) for some reason, and that is the rule marked with asterisks below:

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out  source         destination
    0     0 SNAT   all  --  *  eth1 10.10.10.10    0.0.0.0/0  to:XXX.39.228.156
  134  9312 SNAT   all  --  *  eth1 0.0.0.0/0      0.0.0.0/0  to:XXX.39.228.155  *
    7   705 SNAT   all  --  *  eth2 0.0.0.0/0      0.0.0.0/0  to:XXX.39.228.155  **
    0     0 SNAT   all  --  *  eth2 10.10.10.0/24  0.0.0.0/0  to:10.10.10.1

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
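A generic iptables technique for keeping a catch-all SNAT while preserving the client address of forwarded connections is to exempt DNAT'ed traffic before the SNAT rule runs. This is only an illustration of the technique with the interface name from the comment above, not the fix CloudStack eventually shipped:

```shell
# Connections that were destination-NATted (static NAT / port forwarding)
# keep their original source address; ACCEPT in the nat table stops any
# later NAT rules from applying to the connection.
iptables -t nat -I POSTROUTING -o eth2 -m conntrack --ctstate DNAT -j ACCEPT
```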
[jira] [Commented] (CLOUDSTACK-8451) Static Nat show wrong remote IP in VM behind VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14539907#comment-14539907 ] Andrija Panic commented on CLOUDSTACK-8451: --- http://pastebin.com/ihjiDZ9h - iptables-save from inside the VR, on pastebin - this is a brand new VPC (1 network, 1 VM in the network) on the 4.4.3 release. http://snag.gy/V949g.jpg - ACS setup and "proof": XXX.39.228.155 - main VPC IP; XXX.39.228.156 - additional IP, configured with Static NAT to private VM 10.10.10.10. Connected to XXX.39.228.156:22 and ran "netstat -antup | grep 22" - the remote connection appears to come from XXX.39.228.155, the main VPC IP. This is ACS 4.4.3, Advanced Zone, KVM. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CLOUDSTACK-8451) Static Nat show wrong remote IP in VM behind VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrija Panic updated CLOUDSTACK-8451: -- Affects Version/s: 4.4.3, 4.3.2 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CLOUDSTACK-8451) Static Nat show wrong remote IP in VM behind VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrija Panic updated CLOUDSTACK-8451: -- Description: When configuring Port Forwarding or Static NAT on the VPC VR and connecting from the outside world to the VPC IP address, traffic gets forwarded to the VM behind the VPC. But if you run "netstat -antup | grep $PORT" (where the port is e.g. the SSH port), the result will show that remote connections come from the Source NAT IP of the VR instead of the real remote client IP. Example: private VM: 192.168.10.10 Source NAT IP on VPC VR: 1.1.1.1 Additional Public IP on VPC VR: 1.1.1.2 Remote client public IP: 4.4.4.4 (external to VPC) Test: from 4.4.4.4, SSH to 1.1.1.2 port 22 (or any other port) inside 192.168.10.10, run "netstat -antup | grep 22" Result: the remote IP shown is 1.1.1.1 instead of 4.4.4.4 We found a solution (somewhat tested, and not sure if this would break anything...) The problem is in the VR's iptables NAT table, POSTROUTING chain, rule: SNAT all -- * eth2 0.0.0.0/0 0.0.0.0/0 to:1.1.1.1 where 1.1.1.1 is the public IP of the VR and eth2 is the public interface of the VR. When this rule is deleted, NAT works fine. This is a serious issue for anyone using VPC, since there is no way to see the real remote client IP, and thus there is no firewall functionality inside the VM, SIP doesn't work, web server logs are useless, etc. I also experienced this problem with 4.3.x releases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CLOUDSTACK-8451) Static Nat show wrong IP in VM behind VPC
Andrija Panic created CLOUDSTACK-8451: - Summary: Static Nat show wrong IP in VM behind VPC Key: CLOUDSTACK-8451 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8451 Project: CloudStack Issue Type: Bug Security Level: Public (Anyone can view this level - this is the default.) Components: KVM, Network Controller, Virtual Router Affects Versions: 4.5.1 Environment: Ubuntu 14.04, ACS 4.5.1-SNAPSHOT Reporter: Andrija Panic When configuring Port Forwarding on the VPC VR and connecting from the outside world to the VPC IP address, traffic gets forwarded to the VM behind the VPC. But if you run "netstat -antup | grep $PORT" (where the port is e.g. the SSH port), the result will show that remote connections come from the Source NAT IP of the VR instead of the real remote client IP. Example: private VM: 192.168.10.10 Source NAT IP on VPC VR: 1.1.1.1 Additional Public IP on VPC VR: 1.1.1.2 Remote client public IP: 4.4.4.4 (external to VPC) Test: from 4.4.4.4, SSH to 1.1.1.2 port 22 (or any other port) inside 192.168.10.10, run "netstat -antup | grep 22" Result: the remote IP shown is 1.1.1.1 instead of 4.4.4.4 We found a solution (somewhat tested, and not sure if this would break anything...) The problem is in the VR's iptables NAT table, POSTROUTING chain, rule: SNAT all -- * eth2 0.0.0.0/0 0.0.0.0/0 to:1.1.1.1 where 1.1.1.1 is the public IP of the VR and eth2 is the public interface of the VR. When this rule is deleted, NAT works fine. This is a serious issue for anyone using VPC, since there is no way to see the real remote client IP, and thus there is no firewall functionality inside the VM, SIP doesn't work, web server logs are useless, etc. I also experienced this problem with 4.3.x releases. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CLOUDSTACK-8451) Static Nat show wrong remote IP in VM behind VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrija Panic updated CLOUDSTACK-8451: -- Summary: Static Nat show wrong remote IP in VM behind VPC (was: Static Nat show wrong IP in VM behind VPC)
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CLOUDSTACK-7926) Don't immediately delete volumes - have the purge thread do it
[ https://issues.apache.org/jira/browse/CLOUDSTACK-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrija Panic updated CLOUDSTACK-7926: -- Description: Currently I have hit a bug, when I click on some instance, then on View Volumes, and then I get listed volumes that belong to some other VM - it already happened to me that I deleted the volumes - beacuse of ACS bug in GUI ! So, I suggest to consider maybe to implement purging volumes the same way it is implemented with VM-s - so the VM is not really deleted - and the purge thread in ACS will acually delete it when it runs... THis way, if wrong volume is deleted, we can recover it quickly... was: Currently I have hit a bug, when I click on some instance, then on View Volumes, and then I get listed volumes that belong to some other VM - it already happened to me that I deleted the volumes - beacuse of ACS bug in GUI ! So, I suggest to consider maybe to implement purging the same way it is implemented with VM-s - so the VM is not really deleted - and the purge thread in ACS will acually delete it when it runs... THis way, if wrong volume is deleted, we can recover it quickly... > Don't immediately delete volumes - have the pruge thread do it > -- > > Key: CLOUDSTACK-7926 > URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7926 > Project: CloudStack > Issue Type: Improvement > Security Level: Public(Anyone can view this level - this is the > default.) > Components: Storage Controller >Affects Versions: 4.3.0, 4.4.1 > Environment: NA >Reporter: Andrija Panic > Labels: storage > > Currently I have hit a bug, when I click on some instance, then on View > Volumes, and then I get listed volumes that belong to some other VM - it > already happened to me that I deleted the volumes - beacuse of ACS bug in GUI > ! 
> So, I suggest considering implementing volume purging the same way it > is implemented for VMs - the volume is not really deleted, and the purge > thread in ACS actually deletes it when it runs... > This way, if the wrong volume is deleted, we can recover it quickly... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CLOUDSTACK-8333) Hide shared networks with no free IP, from GUI in "Add network to VM" dialog
Andrija Panic created CLOUDSTACK-8333: - Summary: Hide shared networks with no free IP, from GUI in "Add network to VM" dialog Key: CLOUDSTACK-8333 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8333 Project: CloudStack Issue Type: Bug Security Level: Public (Anyone can view this level - this is the default.) Components: Network Controller, UI Affects Versions: 4.3.2 Environment: NA Reporter: Andrija Panic When all IPs are consumed (no more IPs available) in a Shared Guest network, the following is true in 4.3.2: - In the "Add Instance" wizard, this Guest network is NOT displayed any more - FINE. - When editing an existing VM, on the NIC tab, when clicking "Add network to VM", this Guest network IS available - and this results in an error since there are no free IPs in the pool. For consistency, the network should also not be displayed in the "Add network to VM" dialog/popup. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CLOUDSTACK-6801) Public IP not assigned to eth1 on VR in VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14350202#comment-14350202 ] Andrija Panic commented on CLOUDSTACK-6801: --- Just to confirm - this solution (my previous comment) is still fine - works for me. BUT I have not upgraded from 4.3.0 to anything newer, which I plan to do pretty soon - so not sure if 4.3.2 will still respect my solution here... > Public IP not assigned to eth1 on VR in VPC > --- > > Key: CLOUDSTACK-6801 > URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6801 > Project: CloudStack > Issue Type: Bug > Security Level: Public(Anyone can view this level - this is the > default.) > Components: Virtual Router >Affects Versions: 4.3.0 > Environment: CentOS, KVM. >Reporter: Andrija Panic >Priority: Blocker > Labels: publicip, virtualrouter, vpc > > Hi, > after the upgrade from 4.2.1 to 4.3.0, the Public IP on eth1 is missing on the VR when > creating new (and on existing) VPCs, although eth1 seems present per > /proc/net/dev. > Management logs are fine, eth1 is plugged into the correct bridge, etc. > Manually adding the IP on eth1 and starting eth1 does work. 
> From /var/log/messages inside VR: > May 28 18:27:36 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull > to appear, 0 seconds > May 28 18:27:37 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull > to appear, 1 seconds > May 28 18:27:38 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull > to appear, 2 seconds > May 28 18:27:39 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull > to appear, 3 seconds > May 28 18:27:40 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull > to appear, 4 seconds > May 28 18:27:41 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull > to appear, 5 seconds > May 28 18:27:42 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull > to appear, 6 seconds > May 28 18:27:43 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull > to appear, 7 seconds > May 28 18:27:44 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull > to appear, 8 seconds > May 28 18:27:45 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull > to appear, 9 seconds > May 28 18:27:46 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull > to appear, 10 seconds > May 28 18:27:47 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull > to appear, 11 seconds > May 28 18:27:48 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull > to appear, 12 seconds > May 28 18:27:49 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull > to appear, 13 seconds > May 28 18:27:50 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull > to appear, 14 seconds > May 28 18:27:51 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull > to appear, 15 seconds > May 28 18:27:52 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull > to appear, 16 seconds > May 28 18:27:53 r-799-VM cloud: vpc_ipassoc.sh:interface ethnull never > appeared > May 28 18:27:53 r-799-VM cloud: vpc_ipassoc.sh:Adding ip 46.232.x.246 on > interface ethnull > May 28 18:27:53 r-799-VM cloud: 
vpc_ipassoc.sh:Add routing 46.232.x.246 on > interface ethnull > May 28 18:27:53 r-799-VM cloud: vpc_privateGateway.sh:Added SourceNAT > 46.232.x.246 on interface ethnull > May 28 18:27:53 r-799-VM cloud: vpc_snat.sh:Added SourceNAT 46.232.x.246 on > interface eth1 > May 28 18:27:54 r-799-VM cloud: vpc_guestnw.sh: Create network on interface > eth2, gateway 10.0.1.1, network 10.0.1.1/24 > May 28 18:27:59 r-799-VM cloud: Setting up apache web server for eth2 > May 28 18:27:59 r-799-VM cloud: Setting up password service for network > 10.0.1.1/24, eth eth2 > May 28 18:27:59 r-799-VM cloud: vpc_guestnw.sh: Create network on interface > eth3, gateway 10.0.3.1, network 10.0.3.1/24 > May 28 18:28:04 r-799-VM cloud: Setting up apache web server for eth3 > May 28 18:28:06 r-799-VM cloud: Setting up password service for network > 10.0.3.1/24, eth eth3 > May 28 18:28:06 r-799-VM cloud: vpc_guestnw.sh: Create network on interface > eth4, gateway 10.0.4.1, network 10.0.4.1/24 > May 28 18:28:11 r-799-VM cloud: Setting up apache web server for eth4 > May 28 18:28:12 r-799-VM cloud: Setting up password service for network > 10.0.4.1/24, eth eth4 > May 28 18:28:13 r-799-VM cloud: vpc_guestnw.sh: Create network on interface > eth5, gateway 10.0.6.1, network 10.0.6.1/24 > May 28 18:28:18 r-799-VM cloud: Setting up apache web server for eth5 > May 28 18:28:19 r-799-VM cloud: Setting up password service for network > 10.0.6.1/24, eth eth5 > Nothing else useful in other logs... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
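The manual workaround mentioned in the report ("Manually adding the IP on eth1 and starting eth1 does work") can be sketched as below. The masked public address from the log is kept as a placeholder, and the commands are printed rather than executed, so the sketch is safe to review before running as root in the VR:

```shell
#!/bin/sh
# Placeholder values; 46.232.x.246 is the masked address from the log above
# and the /24 prefix is an assumption - use your real public IP/prefix.
PUB_IF=eth1
PUB_IP="46.232.x.246/24"

# Bring the public interface up and attach the public IP to it:
UP_CMD="ip link set $PUB_IF up"
ADDR_CMD="ip addr add $PUB_IP dev $PUB_IF"

echo "$UP_CMD"
echo "$ADDR_CMD"
```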
[jira] [Commented] (CLOUDSTACK-7907) UI heavily broken
[ https://issues.apache.org/jira/browse/CLOUDSTACK-7907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14310691#comment-14310691 ] Andrija Panic commented on CLOUDSTACK-7907: --- Thanks for pointing this out. I'm currently on 6.0.24, so will do the update ASAP and let you know... thx > UI heavily broken > - > > Key: CLOUDSTACK-7907 > URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7907 > Project: CloudStack > Issue Type: Bug > Security Level: Public(Anyone can view this level - this is the > default.) > Components: API >Affects Versions: 4.3.0, 4.4.1 > Environment: not relevant >Reporter: Andrija Panic >Priority: Critical > > (A serious one, that we encounter pretty often): > Issue: I start the new VM deployment wizard, choose a template, etc.; at the end, > when I click Finish and the job should be sent to the MGMT server, simply > nothing happens - the ajax call doesn't get executed at all, no little circle > spinning, etc. - no logs on the mgmt server; the ajax call simply doesn't get > executed... The same behaviour sometimes happens when I click "Configure" on the > VPC. > I confirmed the behaviour in ACS 4.3.0 and I'm still checking in 4.4.1, but I > doubt anything has changed. > OR > 2) > (not a big issue, however very annoying): > I filter instances by some account/domain, then click on some instance (to view > its properties or whatever), then in the breadcrumb I click back on > "Instances", and instead of being shown the page with all the filtered > instances, I get back to the home page of ACS... > It doesn't always happen - it's random, with different browsers, after > clearing all cache, etc... > The issue here is that nothing gets logged to the MGMT log at all... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CLOUDSTACK-6463) password is not set for VMs created from password enabled template
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14258061#comment-14258061 ] Andrija Panic commented on CLOUDSTACK-6463: --- Interesting stuff here - for me, when this happens, it was enough just to reboot the VM, and it would fetch the password that was obviously generated in the meantime; there was then a corresponding line in /var/cache/cloud/password (first VM start - no line in the VR; reboot the VM, and the line was created in the meantime...) > password is not set for VMs created from password enabled template > -- > > Key: CLOUDSTACK-6463 > URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6463 > Project: CloudStack > Issue Type: Bug > Security Level: Public(Anyone can view this level - this is the > default.) > Components: Management Server >Affects Versions: 4.4.0 >Reporter: Harikrishna Patnala >Assignee: Harikrishna Patnala >Priority: Critical > Fix For: 4.4.0 > > > Repro steps: > 1. Create a password enabled template > 2. Create a VM from the password enabled template > 3. Log in to the VM using the provided password > Bug: > Unable to log in to the VM using the provided password, nor using the standard > password "password". > Expected result: > The password should be set. > Additional information: > Cannot find a password entry on the router VM at the following location: > /var/cache/cloud/password > Resetting the password for the VM works fine, i.e. if we reset the password > for this VM, the password we get now will work and we will be able to > log in to the VM with the provided password. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CLOUDSTACK-7907) UI heavily broken
[ https://issues.apache.org/jira/browse/CLOUDSTACK-7907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14251377#comment-14251377 ] Andrija Panic commented on CLOUDSTACK-7907: --- Ilya - I have just searched my API log - and there are NO lines from when the issue happened (I expected to see 2 API lines with deployVm..., one that failed, and then the second that was executed fine - since I clicked on the exact API URL in Firebug) - but there is only the line that was successfully executed. So, as Alex said, this seems like a JS issue - and the same seems true (although I could not catch the error) for the second issue described here - the breadcrumb issue, getting back to the home page instead of the Instances list, etc... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CLOUDSTACK-4735) Management IP address pool exhausted
[ https://issues.apache.org/jira/browse/CLOUDSTACK-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14250852#comment-14250852 ] Andrija Panic commented on CLOUDSTACK-4735: --- me too, me too (4.4.1) > Management IP address pool exhausted > > > Key: CLOUDSTACK-4735 > URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4735 > Project: CloudStack > Issue Type: Bug > Security Level: Public(Anyone can view this level - this is the > default.) > Components: XenServer >Affects Versions: 4.1.0 > Environment: Management server (CentOS 6.4 64 bit, CloudStack 4.1.1), > XenServer 6.1 >Reporter: Daniel Hertanu > > With only one computing node in the Zone, rebooting it without enabling > maintenance mode on it causes the management IP address pool to be > exhausted, as a result of CloudStack attempting continuously to provision the > system VMs. Regardless of the expunge delay or interval values, the management > IPs are not released anymore and the common error reported in the logs is: > 2013-09-24 14:56:24,410 INFO [cloud.vm.VirtualMachineManagerImpl] > (Job-Executor-22:job-72) Insufficient capacity > com.cloud.exception.InsufficientAddressCapacityException: Unable to get a > management ip addressScope=interface com.cloud.dc.Pod; id=1 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CLOUDSTACK-4735) Management IP address pool exhausted
[ https://issues.apache.org/jira/browse/CLOUDSTACK-4735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14250852#comment-14250852 ] Andrija Panic edited comment on CLOUDSTACK-4735 at 12/18/14 12:06 AM: -- me too, me too (4.4.1) - me on KVM... was (Author: andrija): me too, me too (4.4.1) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CLOUDSTACK-7907) UI heavily broken
[ https://issues.apache.org/jira/browse/CLOUDSTACK-7907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14249873#comment-14249873 ] Andrija Panic commented on CLOUDSTACK-7907: --- Yes... the problem is: after only 10-15 (max 30) seconds of having the issue, I checked the console and clicked on the link in it, so the API call was executed again - and the new VM was deployed fine... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CLOUDSTACK-7907) UI heavily broken
[ https://issues.apache.org/jira/browse/CLOUDSTACK-7907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14249648#comment-14249648 ] Andrija Panic commented on CLOUDSTACK-7907: --- Managed to capture the problem where, after deploying a new VM and pressing FINISH, simply nothing happens: the output from the Chrome Debugging tools Console is: "Failed to load resource: the server responded with a status of 501 (Not Implemented)" (and this is the URL/API call for that error) https://domainnamehere.com/client/api?command=deployVirtualMachine&response=json&sessionkey=T8NpBoassMQje21Flr%2Bde%2BDn9Gg%3D&zoneid=3d1dcf11-d482-4f28-a2dd-6afcb51545d2&templateid=528a4bf0-05b7-4c18-959c-8236465fb3f6&hypervisor=KVM&serviceofferingid=6dadbc20-2020-4980-af15-5ce3c247e21c&diskofferingid=d1065467-c35a-4ef2-9b33-c45ed022e2c6&size=100&networkids=7e047b58-2685-412b-a36e-6e166f97f73c&displayname=app2&name=app2&_=1418807454035 I have then clicked on the link provided, the API call was executed fine, and the new VM was created... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
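Replaying the failed call outside the browser is a quick way to separate a UI/JavaScript fault from an API fault. A sketch using curl; the endpoint and session key below are the ones captured in the comment and are only valid for that session (a fresh session key, and usually the matching JSESSIONID cookie, would be needed on another deployment):

```shell
#!/bin/sh
# Session-bound values from the captured URL above - illustration only.
BASE="https://domainnamehere.com/client/api"
QUERY="command=deployVirtualMachine&response=json&sessionkey=T8NpBoassMQje21Flr%2Bde%2BDn9Gg%3D"

# Build (not run) the replay; -s silences progress, -k tolerates the
# self-signed certificate many test installs use.
REPLAY_CMD="curl -sk '$BASE?$QUERY'"
echo "$REPLAY_CMD"
```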
[jira] [Commented] (CLOUDSTACK-7907) UI heavily broken
[ https://issues.apache.org/jira/browse/CLOUDSTACK-7907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14237817#comment-14237817 ] Andrija Panic commented on CLOUDSTACK-7907: --- I did not manage to capture anything in Firebug - call me stupid, but when the issue happens, I get nothing in the Console log... Tried more than once. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CLOUDSTACK-8045) Renaming "Volume Snapshots" to "Volume Backups."
Andrija Panic created CLOUDSTACK-8045: - Summary: Renaming "Volume Snapshots" to "Volume Backups." Key: CLOUDSTACK-8045 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8045 Project: CloudStack Issue Type: Improvement Security Level: Public (Anyone can view this level - this is the default.) Components: Snapshot, Volumes Affects Versions: 4.3.1 Environment: NA Reporter: Andrija Panic Fix For: 4.5.0 There was a mailing thread a long time ago about actually renaming Volume Snapshot to Volume Backup or similar - snapshots are copied over to Secondary Storage as backups of the Volumes, so there is NO good reason to call this a snapshot, since we can't really do anything with them in terms of easy restore, etc. - so I guess renaming it would really be the logical step. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CLOUDSTACK-7790) VXLAN interface MTU change from 1450 to 1500 and JUMBO frames
[ https://issues.apache.org/jira/browse/CLOUDSTACK-7790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14235553#comment-14235553 ] Andrija Panic commented on CLOUDSTACK-7790: --- Hi Rohit - I have already updated the docs - http://docs.cloudstack.apache.org/en/latest/networking/vxlan.html#important-note-on-mtu-size So if you are OK with that, please close the issue... > VXLAN interface MTU change from 1450 to 1500 and JUMBO frames > -- > > Key: CLOUDSTACK-7790 > URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7790 > Project: CloudStack > Issue Type: Bug > Security Level: Public(Anyone can view this level - this is the > default.) > Components: Network Controller >Affects Versions: 4.4.1 > Environment: NOT important - CentOS 6.5, elrepo kernel 3.10 - >Reporter: Andrija Panic >Priority: Critical > Labels: frames, jumbo, vxlan > > By default, when using vxlan as the isolation method for Guest traffic, > CloudStack creates the vxlan bridge and interface and sets the MTU for those to > 1450 bytes. The problem is that the default OS MTU for any VM is 1500, and all > packets get dropped (except maybe DHCP and ping, which use smaller packets). > 1) The currently proposed solution is to change the MTU inside the VM/template to 1450, > which is absolutely NOT user friendly and degrades performance. > 2) A better approach: set the MTU on the vxlan interface and bridge to the default value > of 1500, and ask the ADMIN to increase the MTU to 1600 bytes on the physical interface > ethX or cloudbrX and enable at least 1600-byte frames on the physical network. > 3) Even better: add a GUI component to CloudStack for the MTU value, so the > ADMIN can deploy JUMBO frames across the whole Guest network - this should > probably be enabled per network offering, or similar. > The current setup, requiring an MTU change inside the VM, is not a good solution and does > not enable users to use JUMBO frames at all... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CLOUDSTACK-7790) VXLAN interface MTU change from 1450 to 1500 and JUMBO frames
[ https://issues.apache.org/jira/browse/CLOUDSTACK-7790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14225954#comment-14225954 ] Andrija Panic commented on CLOUDSTACK-7790: --- Now I see the way to actually make this work, and that is to increase the MTU in the first place on the physical device (ethX or bridge). When the vxlan driver creates a new vxlan interface and corresponding bridge, it will set an MTU that is exactly 50 bytes smaller than the MTU on the physical device ethX/bridge. This was never mentioned in the documentation in the first place, but at least there is a correct way to set things up. Will try to update the docs. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
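The MTU relationship behind this issue - the vxlan device ends up 50 bytes below the underlying device's MTU, because the VXLAN encapsulation (outer Ethernet/IPv4/UDP/VXLAN headers) costs 50 bytes per frame - can be sketched as:

```shell
#!/bin/sh
# VXLAN over IPv4 adds 50 bytes of encapsulation overhead, so to keep
# standard 1500-byte guest frames the physical NIC/bridge needs at least
# 1550; 1600 (as suggested in the issue) leaves comfortable headroom.
PHYS_MTU=1600
VXLAN_OVERHEAD=50
GUEST_MTU=$((PHYS_MTU - VXLAN_OVERHEAD))
echo "physical MTU $PHYS_MTU -> vxlan/guest MTU $GUEST_MTU"

# One-time host setup implied by the comment (run as root; eth0 is an
# illustrative name for the physical guest-traffic NIC):
#   ip link set dev eth0 mtu 1600
```

The switch ports carrying the guest traffic must also accept the larger frames, or the encapsulated packets are dropped on the wire.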
[jira] [Updated] (CLOUDSTACK-7926) Don't immediately delete volumes - have the purge thread do it
[ https://issues.apache.org/jira/browse/CLOUDSTACK-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrija Panic updated CLOUDSTACK-7926: -- Summary: Don't immediately delete volumes - have the purge thread do it (was: don't really delete volumes - have the purge do it) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CLOUDSTACK-7926) don't really delete volumes - have the purge do it
Andrija Panic created CLOUDSTACK-7926: - Summary: don't really delete volumes - have the purge do it Key: CLOUDSTACK-7926 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7926 Project: CloudStack Issue Type: Improvement Security Level: Public (Anyone can view this level - this is the default.) Components: Storage Controller Affects Versions: 4.3.0, 4.4.1 Environment: NA Reporter: Andrija Panic I have just hit a bug: when I click on some instance, then on View Volumes, I get listed volumes that belong to some other VM - it has already happened to me that I deleted the wrong volumes because of this ACS bug in the GUI! So, I suggest considering implementing purging the same way it is implemented for VMs - the volume is not really deleted, and the purge thread in ACS actually deletes it when it runs... This way, if the wrong volume is deleted, we can recover it quickly... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CLOUDSTACK-7907) UI heavily broken
[ https://issues.apache.org/jira/browse/CLOUDSTACK-7907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212528#comment-14212528 ] Andrija Panic commented on CLOUDSTACK-7907: --- Hi Stephen, thanks for getting back to me. It seems to me that the first bug sometimes occurs when there are network latencies (wireless)... I will try to see when we encounter these issues again, and will try to capture them as you suggested. Regarding the second bug (breadcrumb) - even if you don't filter instances (so just click on Instances in the main menu on the left) and go to the properties of some VM, then click back on Instances in the breadcrumb - you get the home page... I did not manage to understand when/why this occurs so far... but will also try to catch that with Firebug... Thanks -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CLOUDSTACK-7907) UI heavily broken
Andrija Panic created CLOUDSTACK-7907: - Summary: UI heavily broken Key: CLOUDSTACK-7907 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7907 Project: CloudStack Issue Type: Bug Security Level: Public (Anyone can view this level - this is the default.) Components: UI Affects Versions: 4.3.0, 4.4.1 Environment: not relevant Reporter: Andrija Panic Priority: Critical (A serious one, that we encounter pretty often): Issue: I start the new VM deployment wizard, choose a template, etc.; at the end, when I click Finish and the job should be sent to the MGMT server, simply nothing happens - the ajax call doesn't get executed at all, no little circle spinning, etc. - no logs on the mgmt server; the ajax call simply doesn't get executed... The same behaviour sometimes happens when I click "Configure" on the VPC. I confirmed the behaviour in ACS 4.3.0 and I'm still checking in 4.4.1, but I doubt anything has changed. OR 2) (not a big issue, however very annoying): I filter instances by some account/domain, then click on some instance (to view its properties or whatever), then in the breadcrumb I click back on "Instances", and instead of being shown the page with all the filtered instances, I get back to the home page of ACS... It doesn't always happen - it's random, with different browsers, after clearing all cache, etc... The issue here is that nothing gets logged to the MGMT log at all... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CLOUDSTACK-7858) Implement separate network throttling rate on VR's Public NIC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrija Panic updated CLOUDSTACK-7858: -- Description: We need a mechanism to define a network throttling rate on the Virtual Router's Public NIC that is different from the rate on the other, private NICs - currently, when defining Network Throttling rates on a guest network, all of the VR's NICs get the same rate. We need an implementation that leaves traffic between private interfaces unthrottled (or throttled separately), but enables throttling on the Public network with a dedicated rate. Also, it would be nice to be able to change network rates in the GUI once the network offering is created (instead of hacking the database) was: We need a mechanism to define a network throttling rate on the Virtual Router's Public NIC that is different from the rate on the other, private NICs - currently, when defining Network Throttling rates on a guest network, all of the VR's NICs get the same rate. We need an implementation that leaves traffic between private interfaces unthrottled (or throttled separately), but enables throttling on the Public network. Also, it would be nice to be able to change network rates in the GUI once the network offering is created (instead of hacking the database) Summary: Implement separate network throttling rate on VR's Public NIC (was: Implement separate network throtling rate on VR's Public NIC)
> Implement separate network throttling rate on VR's Public NIC
> -
>
> Key: CLOUDSTACK-7858
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7858
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public(Anyone can view this level - this is the default.)
> Components: Virtual Router
> Affects Versions: 4.4.1
> Environment: NA
> Reporter: Andrija Panic
> Labels: network, throttle
>
> We need a mechanism to define a network throttling rate on the Virtual Router's Public NIC that is different from the rate on the other, private NICs - currently, when defining Network Throttling rates on a guest network, all of the VR's NICs get the same rate.
> We need an implementation that leaves traffic between private interfaces unthrottled (or throttled separately), but enables throttling on the Public network with a dedicated rate.
> Also, it would be nice to be able to change network rates in the GUI once the network offering is created (instead of hacking the database) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CLOUDSTACK-7858) Implement separate network throtling rate on VR's Public NIC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-7858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrija Panic updated CLOUDSTACK-7858: -- Summary: Implement separate network throtling rate on VR's Public NIC (was: Implement network throtling on Public NIC on Virtual Router)
> Implement separate network throtling rate on VR's Public NIC
>
> Key: CLOUDSTACK-7858
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7858
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public(Anyone can view this level - this is the default.)
> Components: Virtual Router
> Affects Versions: 4.4.1
> Environment: NA
> Reporter: Andrija Panic
> Labels: network, throttle
>
> We need a mechanism to define a network throttling rate on the Virtual Router's Public NIC that is different from the rate on the other, private NICs - currently, when defining Network Throttling rates on a guest network, all of the VR's NICs get the same rate.
> We need an implementation that leaves traffic between private interfaces unthrottled (or throttled separately), but enables throttling on the Public network.
> Also, it would be nice to be able to change network rates in the GUI once the network offering is created (instead of hacking the database) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CLOUDSTACK-7858) Implement network throtling on Public NIC on Virtual Router
Andrija Panic created CLOUDSTACK-7858: - Summary: Implement network throtling on Public NIC on Virtual Router Key: CLOUDSTACK-7858 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7858 Project: CloudStack Issue Type: Bug Security Level: Public (Anyone can view this level - this is the default.) Components: Virtual Router Affects Versions: 4.4.1 Environment: NA Reporter: Andrija Panic We need a mechanism to define a network throttling rate on the Virtual Router's Public NIC that is different from the rate on the other, private NICs - currently, when defining Network Throttling rates on a guest network, all of the VR's NICs get the same rate. We need an implementation that leaves traffic between private interfaces unthrottled (or throttled separately), but enables throttling on the Public network. Also, it would be nice to be able to change network rates in the GUI once the network offering is created (instead of hacking the database) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
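The separation being requested can be sketched with plain Linux tc: attach a token-bucket filter to the public interface only and leave the tier-facing NICs unshaped. A minimal sketch in Python that just emits the command - the device name eth2, the 100 Mbit rate, and the helper itself are illustrative assumptions, not CloudStack code:

```python
# Sketch of the requested behaviour: shape egress on the VR's public NIC only,
# leaving the private/tier NICs unthrottled. Device name and rate are made up.
def tc_commands(public_dev="eth2", public_rate_mbit=100):
    """Return the tc command that would cap only the public NIC."""
    return [
        # Token-bucket filter: rate-limit egress on the public interface;
        # no qdisc is emitted for the private NICs, so they stay unshaped.
        f"tc qdisc add dev {public_dev} root tbf "
        f"rate {public_rate_mbit}mbit burst 32k latency 400ms"
    ]

for cmd in tc_commands():
    print(cmd)
```

Plumbing a dedicated public rate through a network offering would still need changes on the management side; this only illustrates the qdisc shape of the requested behaviour.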
[jira] [Updated] (CLOUDSTACK-7790) VXLAN interface MTU change from 1450 to 1500 and JUMBO frames
[ https://issues.apache.org/jira/browse/CLOUDSTACK-7790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrija Panic updated CLOUDSTACK-7790: -- Description: By default, when using vxlan as the isolation method for Guest traffic, cloudstack creates the vxlan bridge and interface and sets their MTU to 1450 bytes. The problem is that the default OS MTU for any VM is 1500, so all packets get dropped (except maybe DHCP and ping, which use smaller packets). 1) The currently proposed solution is to change the MTU inside the VM/template to 1450 - which is absolutely NOT user friendly and degrades performance 2) A better approach - set the MTU on the vxlan interface and bridge to a default value of 1500, and ask the ADMIN to increase the MTU to 1600 bytes on the physical interface ethX or cloudbrX and enable at least 1600-byte frames on the physical network 3) Even better - add a GUI component to CloudStack for the MTU value, so the ADMIN can deploy JUMBO frames across the whole Guest network - this should probably be enabled per network offering, or similar. The current setup, requiring an MTU change inside the VM, is not a good solution and does not let users use JUMBO frames at all... was: By default, when using vxlan as the isolation method for Guest traffic, cloudstack creates the vxlan bridge and interface and sets their MTU to 1450 bytes. The problem is that the default OS MTU for any VM is 1500, so all packets get dropped (except maybe DHCP and ping, which use smaller packets). 1) The currently proposed solution is to change the MTU inside the VM/template to 1450 - which is absolutely NOT user friendly and degrades performance 2) A better approach - set the MTU on the vxlan interface and bridge to a default value of 1500, and ask the ADMIN to increase the MTU to 1600 bytes on the physical interface ethX or cloudbrX and enable at least 1600-byte frames on the physical network 3) Even better - add a GUI component to CloudStack for the MTU value, so the ADMIN can deploy JUMBO frames across the whole Guest network - this should probably be enabled per network offering, or similar. The current setup, requiring an MTU change inside the VM, is not a good solution and does not let users use JUMBO frames at all...
> VXLAN interface MTU change from 1450 to 1500 and JUMBO frames
> --
>
> Key: CLOUDSTACK-7790
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7790
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public(Anyone can view this level - this is the default.)
> Components: Network Controller
> Affects Versions: 4.4.1
> Environment: NOT important - CentOS 6.5, elrepo kernel 3.10 -
> Reporter: Andrija Panic
> Priority: Critical
> Labels: frames, jumbo, vxlan
>
> By default, when using vxlan as the isolation method for Guest traffic, cloudstack creates the vxlan bridge and interface and sets their MTU to 1450 bytes. The problem is that the default OS MTU for any VM is 1500, so all packets get dropped (except maybe DHCP and ping, which use smaller packets).
> 1) The currently proposed solution is to change the MTU inside the VM/template to 1450 - which is absolutely NOT user friendly and degrades performance
> 2) A better approach - set the MTU on the vxlan interface and bridge to a default value of 1500, and ask the ADMIN to increase the MTU to 1600 bytes on the physical interface ethX or cloudbrX and enable at least 1600-byte frames on the physical network
> 3) Even better - add a GUI component to CloudStack for the MTU value, so the ADMIN can deploy JUMBO frames across the whole Guest network - this should probably be enabled per network offering, or similar.
> The current setup, requiring an MTU change inside the VM, is not a good solution and does not let users use JUMBO frames at all... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CLOUDSTACK-7790) VXLAN interface MTU change from 1450 to 1500 and JUMBO frames
Andrija Panic created CLOUDSTACK-7790: - Summary: VXLAN interface MTU change from 1450 to 1500 and JUMBO frames Key: CLOUDSTACK-7790 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7790 Project: CloudStack Issue Type: Bug Security Level: Public (Anyone can view this level - this is the default.) Components: Network Controller Affects Versions: 4.4.1 Environment: NOT important - CentOS 6.5, elrepo kernel 3.10 - Reporter: Andrija Panic Priority: Critical By default, when using vxlan as the isolation method for Guest traffic, cloudstack creates the vxlan bridge and interface and sets their MTU to 1450 bytes. The problem is that the default OS MTU for any VM is 1500, so all packets get dropped (except maybe DHCP and ping, which use smaller packets). 1) The currently proposed solution is to change the MTU inside the VM/template to 1450 - which is absolutely NOT user friendly and degrades performance 2) A better approach - set the MTU on the vxlan interface and bridge to a default value of 1500, and ask the ADMIN to increase the MTU to 1600 bytes on the physical interface ethX or cloudbrX and enable at least 1600-byte frames on the physical network 3) Even better - add a GUI component to CloudStack for the MTU value, so the ADMIN can deploy JUMBO frames across the whole Guest network - this should probably be enabled per network offering, or similar. The current setup, requiring an MTU change inside the VM, is not a good solution and does not let users use JUMBO frames at all... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
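The 1450/1500/1600 numbers in the report all follow from the VXLAN encapsulation overhead; a small worked example (assuming an IPv4 underlay with no outer VLAN tag):

```python
# Worked MTU arithmetic behind the report. VXLAN over IPv4 adds
# outer Ethernet (14) + IPv4 (20) + UDP (8) + VXLAN (8) = 50 bytes per frame.
VXLAN_OVERHEAD = 14 + 20 + 8 + 8  # 50 bytes

def underlay_mtu_needed(guest_mtu):
    """Physical-NIC MTU required so guests can keep guest_mtu unchanged."""
    return guest_mtu + VXLAN_OVERHEAD

# The 1450 default is exactly 1500 minus the encapsulation overhead:
assert 1500 - VXLAN_OVERHEAD == 1450

# Option 2) in the report: keep guests at 1500 and raise the underlay instead.
print(underlay_mtu_needed(1500))  # 1550; the suggested 1600 leaves headroom
```

So option 2) amounts to moving the 50-byte cost from the guest to the underlay; the suggested 1600 bytes on the physical NIC covers the required 1550 with room to spare.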
[jira] [Commented] (CLOUDSTACK-6814) Detected overlapping subnets in different vlans
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169213#comment-14169213 ] Andrija Panic commented on CLOUDSTACK-6814: --- Hi, any update on this? True, this is a similar issue, but https://issues.apache.org/jira/browse/CLOUDSTACK-4282 says it is resolved in the 4.2 versions, which, in my case, is not true - this is an overlapping IP range between the guest and the Public network, not between 2 guest networks... Any plan on fixing this or not? I would really appreciate the info, so I can proceed with another Cloudstack deployment...
> Detected overlapping subnets in different vlans
>
> Key: CLOUDSTACK-6814
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6814
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public(Anyone can view this level - this is the default.)
> Components: Management Server, Network Controller
> Affects Versions: 4.3.0
> Environment: not relevant
> Reporter: Andrija Panic
> Priority: Critical
> Labels: guestnetwork, network, overlap, publicip
>
> I have both a Public IP range (untagged) and a Guest IP range (vlan 500) on the same physical network device eth1 (infrastructure - zones - physical network - eth1, public tag...). Don't ask me how/why, but it works and has worked from CS 4.0.0 till now...
> In previous versions I was able to add a few additional IP addresses from the /24 subnet to the Guest IP range..
> In 4.3, there is an error message saying that the Guest IP range and the Public IP range have overlapping subnets - which IS true - but since those networks ARE on completely different vlans, I'm not sure why there is such a check at all (an overlapping-subnet check). Different vlans mean different broadcast domains; why check IP parameters across different vlans...
> Existing database records - the first row is the Public IP range; the rest are smaller ranges of IP addresses added a few times for the Guest IP range.
> mysql> select id,uuid,vlan_id,vlan_gateway,vlan_netmask,description from cloud.vlan;
> | id | uuid | vlan_id | vlan_gateway | vlan_netmask | description |
> | 1 | 10a1e453-7369-4645-9e0f-4936c18bfeac | vlan://untagged | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.240-46.232.xxx.248 |
> | 3 | 76c30667-e4c9-4bfe-84cc-3c8e5c608770 | 500 | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.220-46.232.xxx.238 |
> | 4 | e2b2b09b-81f2-4ec0-9323-b4c626fcd63b | 500 | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.210-46.232.xxx.219 |
> | 5 | f810fd59-ea8a-44fb-850e-58eb791191f0 | 500 | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.202-46.232.xxx.209 |
> | 8 | f0bec296-3ac8-483c-a23a-b36213fdf846 | 500 | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.131-46.232.xxx.201 |
> Now when I want to add the new range 46.232.xxx.100-46.232.xxx.130 to either the Public or the Guest network, I can't, and I get the following error (tried adding it to the Public range here):
> "The IP range with tag: 500 in zone DC-ZURICH-GLATTBRUGG has overlapped with the subnet. Please specify a different gateway/netmask."
> This subnet check across different vlans should be removed; I'm stuck with over 90% of the IP addresses used and can't add more from the same /24 range that we got... -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CLOUDSTACK-6814) Detected overlapping subnets in different vlans
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14016611#comment-14016611 ] Andrija Panic commented on CLOUDSTACK-6814: --- I was able to manually "fix" my issue: I made new records/rows in the "vlan" and "user_ip_addresses" tables, and after that I successfully used that single IP address (I added a single IP address instead of the whole range, for testing purposes): 1. Generate a UUID on the Management server console with: uuidgen 2. Then duplicate the 1st row from the "vlan" table (in my case the 1st row corresponds to the Public range that I want to extend with an additional IP range) 3. Then duplicate 1 row with some IP address from the "user_ip_addresses" table that belongs to my "Public" network - I used an existing IP that was free/not allocated, and just changed the "uuid" and "mac" fields... The CS GUI now shows these fine, as a new range... This should be fixed in my opinion - I don't understand comparing IP parameters across different broadcast domains (vlans)... Andrija
> Detected overlapping subnets in different vlans
>
> Key: CLOUDSTACK-6814
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6814
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public(Anyone can view this level - this is the default.)
> Components: Management Server, Network Controller
> Affects Versions: 4.3.0
> Environment: not relevant
> Reporter: Andrija Panic
> Priority: Critical
> Labels: guestnetwork, network, overlap, publicip
>
> I have both a Public IP range (untagged) and a Guest IP range (vlan 500) on the same physical network device eth1 (infrastructure - zones - physical network - eth1, public tag...). Don't ask me how/why, but it works and has worked from CS 4.0.0 till now...
> In previous versions I was able to add a few additional IP addresses from the /24 subnet to the Guest IP range..
> In 4.3, there is an error message saying that the Guest IP range and the Public IP range have overlapping subnets - which IS true - but since those networks ARE on completely different vlans, I'm not sure why there is such a check at all (an overlapping-subnet check). Different vlans mean different broadcast domains; why check IP parameters across different vlans...
> Existing database records - the first row is the Public IP range; the rest are smaller ranges of IP addresses added a few times for the Guest IP range.
> mysql> select id,uuid,vlan_id,vlan_gateway,vlan_netmask,description from cloud.vlan;
> | id | uuid | vlan_id | vlan_gateway | vlan_netmask | description |
> | 1 | 10a1e453-7369-4645-9e0f-4936c18bfeac | vlan://untagged | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.240-46.232.xxx.248 |
> | 3 | 76c30667-e4c9-4bfe-84cc-3c8e5c608770 | 500 | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.220-46.232.xxx.238 |
> | 4 | e2b2b09b-81f2-4ec0-9323-b4c626fcd63b | 500 | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.210-46.232.xxx.219 |
> | 5 | f810fd59-ea8a-44fb-850e-58eb791191f0 | 500 | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.202-46.232.xxx.209 |
> | 8 | f0bec296-3ac8-483c-a23a-b36213fdf846 | 500 | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.131-46.232.xxx.201 |
> Now when I want to add the new range 46.232.xxx.100-46.232.xxx.130 to either the Public or the Guest network, I can't, and I get the following error (tried adding it to the Public range here):
> "The IP range with tag: 500 in zone DC-ZURICH-GLATTBRUGG has overlapped with the subnet. Please specify a different gateway/netmask."
> This subnet check across different vlans should be removed; I'm stuck with over 90% of the IP addresses used and can't add more from the same /24 range that we got... -- This message was sent by Atlassian JIRA (v6.2#6252)
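The behaviour the reporter argues for - overlap only matters inside one broadcast domain - can be sketched in a few lines with Python's ipaddress module. This illustrates the desired check, not CloudStack's actual implementation (which, per the report, compares subnets regardless of vlan); the subnets below are hypothetical:

```python
import ipaddress

def conflicts(existing, candidate):
    """Overlap check scoped to a broadcast domain: two ranges only conflict
    when they share a vlan tag AND their subnets overlap.

    existing  -- iterable of (vlan_tag, cidr) pairs
    candidate -- a single (vlan_tag, cidr) pair
    """
    vlan, net = candidate[0], ipaddress.ip_network(candidate[1])
    return any(
        tag == vlan and ipaddress.ip_network(cidr).overlaps(net)
        for tag, cidr in existing
    )

existing = [("untagged", "10.1.1.0/24")]                 # hypothetical public range
print(conflicts(existing, ("500", "10.1.1.0/24")))       # False: different vlan
print(conflicts(existing, ("untagged", "10.1.1.0/25")))  # True: same domain
```

With a check like this, the reporter's same-/24-on-vlan-500 layout would be accepted, while a genuine collision inside one vlan would still be rejected.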
[jira] [Comment Edited] (CLOUDSTACK-6464) [KVM:basic zone- upgrade to 4.3],after any vm restart,all the nics are plugged to default bridge even though traffic labels are being used
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14016580#comment-14016580 ] Andrija Panic edited comment on CLOUDSTACK-6464 at 6/3/14 2:57 PM: --- Can you check your agent conf to see if there are changes to it? I also use an advanced zone and kvm traffic labels, and I don't have this bug for some reason... but I'm using vlan separation... And I'm using 1 bridge for guest traffic (private, though), a second bridge for other guest traffic (public), and also the first bridge for management/storage traffic... was (Author: andrija): Can you check your agent conf if there are changes to it ? I also use advanced zone, kvm traffic labels, and I don't have this bug for some reason...but I'm using vlan separation...
> [KVM:basic zone- upgrade to 4.3],after any vm restart,all the nics are plugged to default bridge even though traffic labels are being used
> --
>
> Key: CLOUDSTACK-6464
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6464
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public(Anyone can view this level - this is the default.)
> Components: Management Server
> Affects Versions: 4.3.0
> Reporter: sadhu suresh
> Priority: Critical
> Fix For: 4.4.0
>
> Steps:
> 1. create a KVM basic zone with 2 nics on the host (pre 4.3 build)
> 2. use cloudbr0 for management and cloudbr1 for guest by specifying the traffic labels in the physical networks.
> 3. deploy a few vms
> 4. upgrade to the felton GA build as per the Upgrade instructions.
> Actual result:
> The upgrade succeeds, but all the vnets that were attached to cloudbr1 before the upgrade are attached to cloudbr0. Due to this, network connectivity is lost.
> Expected result:
> Even after the upgrade, all the vnets should be attached to the same bridge as before the upgrade.
> ex:
> before the upgrade, this vm's (i-5-616-VM) nic was attached to cloudbr1; after the upgrade and a VM stop/start,
> the network rules are getting programmed in cloudbr0 - check the output below:
> ,984 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-2:null) Executing: /usr/share/cloudstack-common/scripts/vm/network/security_group.py default_network_rules --vmname i-5-616-VM --vmid 616 --vmip 10.x.x245 --vmmac 06:14:48:00:00:7f --vif vnet15 --brname cloudbr0 --nicsecips 0:
> dumpxml output for i-5-616-VM after upgrade (& after VM restart):
> virsh # dumpxml 38
> [libvirt domain XML not preserved: the element tags were stripped by the mail archiver. Surviving values: name i-5-616-VM, uuid 87557942-1393-49b3-a73e-ae24c40541d1, OS "Other CentOS (64-bit)", memory 2097152, 1 vCPU, lifecycle actions destroy/restart/destroy, emulator /usr/libexec/qemu-kvm, disk file /mnt/041e5d8e-d9c1-346d-aea9-cd9c7b80a211/75544e9d-a4c9-4a94-943e-b20827676a27]
> It's also applicable to new vm deployments. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Comment Edited] (CLOUDSTACK-6464) [KVM:basic zone- upgrade to 4.3],after any vm restart,all the nics are plugged to default bridge even though traffic labels are being used
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14016580#comment-14016580 ] Andrija Panic edited comment on CLOUDSTACK-6464 at 6/3/14 2:54 PM: --- Can you check your agent conf to see if there are changes to it? I also use an advanced zone and kvm traffic labels, and I don't have this bug for some reason... but I'm using vlan separation... was (Author: andrija): Can you check your agent conf if there are changes to it ? I also use advanced zone, kvm traffic labels, and I don't have this bug for some reason...
> [KVM:basic zone- upgrade to 4.3],after any vm restart,all the nics are plugged to default bridge even though traffic labels are being used
> --
>
> Key: CLOUDSTACK-6464
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6464
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public(Anyone can view this level - this is the default.)
> Components: Management Server
> Affects Versions: 4.3.0
> Reporter: sadhu suresh
> Priority: Critical
> Fix For: 4.4.0
>
> Steps:
> 1. create a KVM basic zone with 2 nics on the host (pre 4.3 build)
> 2. use cloudbr0 for management and cloudbr1 for guest by specifying the traffic labels in the physical networks.
> 3. deploy a few vms
> 4. upgrade to the felton GA build as per the Upgrade instructions.
> Actual result:
> The upgrade succeeds, but all the vnets that were attached to cloudbr1 before the upgrade are attached to cloudbr0. Due to this, network connectivity is lost.
> Expected result:
> Even after the upgrade, all the vnets should be attached to the same bridge as before the upgrade.
> ex:
> before the upgrade, this vm's (i-5-616-VM) nic was attached to cloudbr1; after the upgrade and a VM stop/start,
> the network rules are getting programmed in cloudbr0 - check the output below:
> ,984 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-2:null) Executing: /usr/share/cloudstack-common/scripts/vm/network/security_group.py default_network_rules --vmname i-5-616-VM --vmid 616 --vmip 10.x.x245 --vmmac 06:14:48:00:00:7f --vif vnet15 --brname cloudbr0 --nicsecips 0:
> dumpxml output for i-5-616-VM after upgrade (& after VM restart):
> virsh # dumpxml 38
> [libvirt domain XML not preserved: the element tags were stripped by the mail archiver. Surviving values: name i-5-616-VM, uuid 87557942-1393-49b3-a73e-ae24c40541d1, OS "Other CentOS (64-bit)", memory 2097152, 1 vCPU, lifecycle actions destroy/restart/destroy, emulator /usr/libexec/qemu-kvm, disk file /mnt/041e5d8e-d9c1-346d-aea9-cd9c7b80a211/75544e9d-a4c9-4a94-943e-b20827676a27]
> It's also applicable to new vm deployments. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CLOUDSTACK-6464) [KVM:basic zone- upgrade to 4.3],after any vm restart,all the nics are plugged to default bridge even though traffic labels are being used
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14016580#comment-14016580 ] Andrija Panic commented on CLOUDSTACK-6464: --- Can you check your agent conf to see if there are changes to it? I also use an advanced zone and kvm traffic labels, and I don't have this bug for some reason...
> [KVM:basic zone- upgrade to 4.3],after any vm restart,all the nics are plugged to default bridge even though traffic labels are being used
> --
>
> Key: CLOUDSTACK-6464
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6464
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public(Anyone can view this level - this is the default.)
> Components: Management Server
> Affects Versions: 4.3.0
> Reporter: sadhu suresh
> Priority: Critical
> Fix For: 4.4.0
>
> Steps:
> 1. create a KVM basic zone with 2 nics on the host (pre 4.3 build)
> 2. use cloudbr0 for management and cloudbr1 for guest by specifying the traffic labels in the physical networks.
> 3. deploy a few vms
> 4. upgrade to the felton GA build as per the Upgrade instructions.
> Actual result:
> The upgrade succeeds, but all the vnets that were attached to cloudbr1 before the upgrade are attached to cloudbr0. Due to this, network connectivity is lost.
> Expected result:
> Even after the upgrade, all the vnets should be attached to the same bridge as before the upgrade.
> ex:
> before the upgrade, this vm's (i-5-616-VM) nic was attached to cloudbr1; after the upgrade and a VM stop/start,
> the network rules are getting programmed in cloudbr0 - check the output below:
> ,984 DEBUG [kvm.resource.LibvirtComputingResource] (agentRequest-Handler-2:null) Executing: /usr/share/cloudstack-common/scripts/vm/network/security_group.py default_network_rules --vmname i-5-616-VM --vmid 616 --vmip 10.x.x245 --vmmac 06:14:48:00:00:7f --vif vnet15 --brname cloudbr0 --nicsecips 0:
> dumpxml output for i-5-616-VM after upgrade (& after VM restart):
> virsh # dumpxml 38
> [libvirt domain XML not preserved: the element tags were stripped by the mail archiver. Surviving values: name i-5-616-VM, uuid 87557942-1393-49b3-a73e-ae24c40541d1, OS "Other CentOS (64-bit)", memory 2097152, 1 vCPU, lifecycle actions destroy/restart/destroy, emulator /usr/libexec/qemu-kvm, disk file /mnt/041e5d8e-d9c1-346d-aea9-cd9c7b80a211/75544e9d-a4c9-4a94-943e-b20827676a27]
> It's also applicable to new vm deployments. -- This message was sent by Atlassian JIRA (v6.2#6252)
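For anyone comparing agent configs as suggested in the comments: on a KVM host the physical-network traffic labels map to bridge names in /etc/cloudstack/agent/agent.properties. An illustrative fragment for the two-bridge layout described in the steps - the values are assumptions to check against your own file, not a prescribed configuration:

```properties
# Illustrative values only - compare with your actual agent.properties.
# Management traffic on cloudbr0, guest traffic on cloudbr1, per the steps above.
private.network.device=cloudbr0
guest.network.device=cloudbr1
```

If these entries are missing or point at the wrong bridge after an upgrade, vnets will land on the default bridge regardless of the labels set in the zone's physical networks.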
[jira] [Commented] (CLOUDSTACK-6814) Detected overlapping subnets in different vlans
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14016340#comment-14016340 ] Andrija Panic commented on CLOUDSTACK-6814: --- I did try to add the additional IP addresses from the GUI - here is the management log capture. I tried adding the "Public" IP range 46.232.xxx.106-46.232.xxx.130, with the same gateway and netmask, untagged (empty field), as the existing Public range (first row in the original table submitted previously): management logs:
2014-06-03 11:10:11,099 DEBUG [c.c.a.ApiServlet] (catalina-exec-23:ctx-038e027a) ===START=== 89.216.xxx.189 -- GET command=createVlanIpRange&zoneId=3d1dcf11-d482-4f28-a2dd-6afcb51545d2&vlan=untagged&gateway=46.232.xxx.1&netmask=255.255.255.0&startip=46.232.xxx.106&endip=46.232.xxx.130&forVirtualNetwork=true&response=json&sessionkey=aqaaC3snjAp%2B86Dr17D1JWt4O4M%3D&_=1401786613222
2014-06-03 11:10:11,110 DEBUG [c.c.c.ConfigurationManagerImpl] (catalina-exec-23:ctx-038e027a ctx-08ddd172) Access granted to Acct[1272e979-0ccc-4644-b96b-735cb6f81821-admin] to zone:1 by AffinityGroupAccessChecker
2014-06-03 11:10:11,113 DEBUG [c.c.u.d.T.Transaction] (catalina-exec-23:ctx-038e027a ctx-08ddd172) Rolling back the transaction: Time = 3 Name = catalina-exec-23; called by -TransactionLegacy.rollback:896-TransactionLegacy.removeUpTo:839-TransactionLegacy.close:663-Transaction.execute:41-Transaction.execute:46-ConfigurationManagerImpl.commitVlan:2718-ConfigurationManagerImpl.createVlanAndPublicIpRange:2709-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:57-DelegatingMethodAccessorImpl.invoke:43-Method.invoke:622-AopUtils.invokeJoinpointUsingReflection:317
2014-06-03 11:10:11,119 INFO [c.c.a.ApiServer] (catalina-exec-23:ctx-038e027a ctx-08ddd172) The IP range with tag: 500 in zone DC-ZURICH-GLATTBRUGG has overlapped with the subnet. Please specify a different gateway/netmask.
2014-06-03 11:10:11,119 DEBUG [c.c.a.ApiServlet] (catalina-exec-23:ctx-038e027a ctx-08ddd172) ===END=== 89.216.xxx.189 -- GET command=createVlanIpRange&zoneId=3d1dcf11-d482-4f28-a2dd-6afcb51545d2&vlan=untagged&gateway=46.232.xxx.1&netmask=255.255.255.0&startip=46.232.xxx.106&endip=46.232.xxx.130&forVirtualNetwork=true&response=json&sessionkey=aqaaC3snjAp%2B86Dr17D1JWt4O4M%3D&_=1401786613222
> Detected overlapping subnets in different vlans
>
> Key: CLOUDSTACK-6814
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6814
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public(Anyone can view this level - this is the default.)
> Components: Management Server, Network Controller
> Affects Versions: 4.3.0
> Environment: not relevant
> Reporter: Andrija Panic
> Priority: Critical
> Labels: guestnetwork, network, overlap, publicip
>
> I have both a Public IP range (untagged) and a Guest IP range (vlan 500) on the same physical network device eth1 (infrastructure - zones - physical network - eth1, public tag...). Don't ask me how/why, but it works and has worked from CS 4.0.0 till now...
> In previous versions I was able to add a few additional IP addresses from the /24 subnet to the Guest IP range..
> In 4.3, there is an error message saying that the Guest IP range and the Public IP range have overlapping subnets - which IS true - but since those networks ARE on completely different vlans, I'm not sure why there is such a check at all (an overlapping-subnet check). Different vlans mean different broadcast domains; why check IP parameters across different vlans...
> Existing database records - the first row is the Public IP range; the rest are smaller ranges of IP addresses added a few times for the Guest IP range.
> mysql> select id,uuid,vlan_id,vlan_gateway,vlan_netmask,description from cloud.vlan;
> | id | uuid | vlan_id | vlan_gateway | vlan_netmask | description |
> | 1 | 10a1e453-7369-4645-9e0f-4936c18bfeac | vlan://untagged | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.240-46.232.xxx.248 |
> | 3 | 76c30667-e4c9-4bfe-84cc-3c8e5c608770 | 500 | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.220-46.232.xxx.238 |
> | 4 | e2b2b09b-81f2-4ec0-9323-b4c626fcd63b | 500 | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.210-46.232.xxx.219 |
> | 5 | f810fd59-ea8a-44fb-850e-58eb791191f0 | 500 | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.202-46.232.xxx.209 |
> | 8 | f0bec296-3ac8-483c-a23a-b36213fdf846 | 500 | 46.232.xxx.1 | 255.255.255.0 | 46
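The failing request in the ===START=== log line above can be reassembled into its API parameters, which is handy when retrying the call outside the GUI. A sketch - the values are copied from the log (masked octets left masked), and the session key is omitted since a scripted call would normally be signed with API keys instead:

```python
from urllib.parse import urlencode

# Parameters taken from the ===START=== log line; "xxx" octets stay masked
# exactly as they appear in the log.
params = {
    "command": "createVlanIpRange",
    "zoneId": "3d1dcf11-d482-4f28-a2dd-6afcb51545d2",
    "vlan": "untagged",
    "gateway": "46.232.xxx.1",
    "netmask": "255.255.255.0",
    "startip": "46.232.xxx.106",
    "endip": "46.232.xxx.130",
    "forVirtualNetwork": "true",
    "response": "json",
}
query = urlencode(params)  # the query string to append to the API endpoint
print(query)
```

Replaying the same parameters after a fix (or against a patched build) makes it easy to confirm whether the overlap check still rejects the range.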
[jira] [Commented] (CLOUDSTACK-6814) Detected overlapping subnets in different vlans
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14016303#comment-14016303 ] Andrija Panic commented on CLOUDSTACK-6814: --- Anybody? This is, in my opinion, a bug - comparing IP ranges across different broadcast domains doesn't make any sense to me. thanks
> Detected overlapping subnets in different vlans
>
> Key: CLOUDSTACK-6814
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6814
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public(Anyone can view this level - this is the default.)
> Components: Management Server, Network Controller
> Affects Versions: 4.3.0
> Environment: not relevant
> Reporter: Andrija Panic
> Priority: Critical
> Labels: guestnetwork, network, overlap, publicip
>
> I have both a Public IP range (untagged) and a Guest IP range (vlan 500) on the same physical network device eth1 (infrastructure - zones - physical network - eth1, public tag...). Don't ask me how/why, but it works and has worked from CS 4.0.0 till now...
> In previous versions I was able to add a few additional IP addresses from the /24 subnet to the Guest IP range..
> In 4.3, there is an error message saying that the Guest IP range and the Public IP range have overlapping subnets - which IS true - but since those networks ARE on completely different vlans, I'm not sure why there is such a check at all (an overlapping-subnet check). Different vlans mean different broadcast domains; why check IP parameters across different vlans...
> Existing database records - the first row is the Public IP range; the rest are smaller ranges of IP addresses added a few times for the Guest IP range.
> mysql> select id,uuid,vlan_id,vlan_gateway,vlan_netmask,description from cloud.vlan;
> | id | uuid | vlan_id | vlan_gateway | vlan_netmask | description |
> | 1 | 10a1e453-7369-4645-9e0f-4936c18bfeac | vlan://untagged | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.240-46.232.xxx.248 |
> | 3 | 76c30667-e4c9-4bfe-84cc-3c8e5c608770 | 500 | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.220-46.232.xxx.238 |
> | 4 | e2b2b09b-81f2-4ec0-9323-b4c626fcd63b | 500 | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.210-46.232.xxx.219 |
> | 5 | f810fd59-ea8a-44fb-850e-58eb791191f0 | 500 | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.202-46.232.xxx.209 |
> | 8 | f0bec296-3ac8-483c-a23a-b36213fdf846 | 500 | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.131-46.232.xxx.201 |
> Now when I want to add the new range 46.232.xxx.100-46.232.xxx.130 to either the Public or the Guest network, I can't, and I get the following error (tried adding it to the Public range here):
> "The IP range with tag: 500 in zone DC-ZURICH-GLATTBRUGG has overlapped with the subnet. Please specify a different gateway/netmask."
> This subnet check across different vlans should be removed; I'm stuck with over 90% of the IP addresses used and can't add more from the same /24 range that we got... -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CLOUDSTACK-6814) Detected overlapping subnets in different vlans
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrija Panic updated CLOUDSTACK-6814:
--
Description:
I have both the Public IP range (untagged) and the Guest IP range (vlan 500) on the same physical network device, eth1 (infrastructure - zones - physical network - eth1, public tag...). Don't ask me how/why, but it works and has worked from CS 4.0.0 until now...
In previous versions I was able to add a few additional IP addresses from the /24 subnet to the Guest IP range.
In 4.3, there is an error message saying that the Guest IP range and the Public IP range have overlapping subnets - which IS true - but since those networks ARE on completely different vlans, I'm not sure why there is such a check at all (the overlapping-subnet check). Different vlans mean different broadcast domains, so why check IP parameters across different vlans...
Existing database records - the first row is the Public IP range; the rest are smaller ranges of IP addresses added a few times for the Guest IP range:

mysql> select id,uuid,vlan_id,vlan_gateway,vlan_netmask,description from cloud.vlan;
+----+--------------------------------------+-----------------+--------------+---------------+-------------------------------+
| id | uuid                                 | vlan_id         | vlan_gateway | vlan_netmask  | description                   |
+----+--------------------------------------+-----------------+--------------+---------------+-------------------------------+
|  1 | 10a1e453-7369-4645-9e0f-4936c18bfeac | vlan://untagged | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.240-46.232.xxx.248 |
|  3 | 76c30667-e4c9-4bfe-84cc-3c8e5c608770 | 500             | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.220-46.232.xxx.238 |
|  4 | e2b2b09b-81f2-4ec0-9323-b4c626fcd63b | 500             | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.210-46.232.xxx.219 |
|  5 | f810fd59-ea8a-44fb-850e-58eb791191f0 | 500             | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.202-46.232.xxx.209 |
|  8 | f0bec296-3ac8-483c-a23a-b36213fdf846 | 500             | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.131-46.232.xxx.201 |
+----+--------------------------------------+-----------------+--------------+---------------+-------------------------------+

Now when I want to add the new range 46.232.xxx.100-46.232.xxx.130 to either the Public or the Guest network, I can't, and I get the following error (tried adding it to the Public range here):
"The IP range with tag: 500 in zone DC-ZURICH-GLATTBRUGG has overlapped with the subnet. Please specify a different gateway/netmask."
This subnet check across different vlans should be removed; I'm stuck with over 90% of IP addresses used, and can't add more from the same /24 range that we got...
[jira] [Created] (CLOUDSTACK-6814) Detected overlapping subnets in different vlans
Andrija Panic created CLOUDSTACK-6814:
--
Summary: Detected overlapping subnets in different vlans
Key: CLOUDSTACK-6814
URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6814
Project: CloudStack
Issue Type: Bug
Security Level: Public (Anyone can view this level - this is the default.)
Components: Management Server, Network Controller
Affects Versions: 4.3.0
Environment: not relevant
Reporter: Andrija Panic
Priority: Critical

I have both the Public IP range (untagged) and the Guest IP range (vlan 500) on the same physical network device, eth1 (infrastructure - zones - physical network - eth1, public tag...). Don't ask me how/why, but it works and has worked from CS 4.0.0 until now...
In previous versions I was able to add a few additional IP addresses from the /24 subnet to the Guest IP range.
In 4.3, there is an error message saying that the Guest IP range and the Public IP range have overlapping subnets - which IS true - but since those networks ARE on completely different vlans, I'm not sure why there is such a check at all (the overlapping-subnet check). Different vlans mean different broadcast domains, so why check IP parameters across different vlans...
Existing database records - the first row is the Public IP range; the rest are smaller ranges of IP addresses added a few times for the Guest IP range:

mysql> select id,uuid,vlan_id,vlan_gateway,vlan_netmask,description from cloud.vlan;
+----+--------------------------------------+-----------------+--------------+---------------+-------------------------------+
| id | uuid                                 | vlan_id         | vlan_gateway | vlan_netmask  | description                   |
+----+--------------------------------------+-----------------+--------------+---------------+-------------------------------+
|  1 | 10a1e453-7369-4645-9e0f-4936c18bfeac | vlan://untagged | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.240-46.232.xxx.248 |
|  3 | 76c30667-e4c9-4bfe-84cc-3c8e5c608770 | 500             | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.220-46.232.xxx.238 |
|  4 | e2b2b09b-81f2-4ec0-9323-b4c626fcd63b | 500             | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.210-46.232.xxx.219 |
|  5 | f810fd59-ea8a-44fb-850e-58eb791191f0 | 500             | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.202-46.232.xxx.209 |
|  8 | f0bec296-3ac8-483c-a23a-b36213fdf846 | 500             | 46.232.xxx.1 | 255.255.255.0 | 46.232.xxx.131-46.232.xxx.201 |
+----+--------------------------------------+-----------------+--------------+---------------+-------------------------------+

Now when I want to add the new range 46.232.xxx.100-46.232.xxx.130 to either the Public or the Guest network, I can't, and I get the following error (tried adding it to the Public range here):
"The IP range with tag: 500 in zone DC-ZURICH-GLATTBRUGG has overlapped with the subnet. Please specify a different gateway/netmask."
This subnet check across different vlans should be removed; I'm stuck with over 90% of IP addresses used, and can't add more from the same /24 range that we got...
--
This message was sent by Atlassian JIRA (v6.2#6252)
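The reporter's argument is that IP-range comparisons should be scoped to a single broadcast domain (vlan tag). As an illustrative sketch only (this is not CloudStack's actual validation code; the function names are invented and a placeholder third octet of 0 stands in for the masked "xxx"), the vlan-scoped check would look like:

```python
import ipaddress

def ip_int(ip: str) -> int:
    """IPv4 address as a comparable integer."""
    return int(ipaddress.ip_address(ip))

def ranges_overlap(a_start: str, a_end: str, b_start: str, b_end: str) -> bool:
    # Two inclusive IP ranges overlap when each one starts before the other ends.
    return ip_int(a_start) <= ip_int(b_end) and ip_int(b_start) <= ip_int(a_end)

def check_new_range(existing, vlan_id: str, start: str, end: str) -> None:
    """Reject the new range only if it overlaps a range on the SAME vlan."""
    for row_vlan, row_start, row_end in existing:
        if row_vlan != vlan_id:
            continue  # different vlan tag = different broadcast domain, skip
        if ranges_overlap(start, end, row_start, row_end):
            raise ValueError(
                f"range {start}-{end} overlaps {row_start}-{row_end} on vlan {vlan_id}")

# Rows modelled on the cloud.vlan dump above:
rows = [
    ("vlan://untagged", "46.232.0.240", "46.232.0.248"),
    ("500", "46.232.0.220", "46.232.0.238"),
    ("500", "46.232.0.131", "46.232.0.201"),
]
# The disputed case: .100-.130 touches no existing range on vlan 500,
# and the untagged public range is a different broadcast domain entirely.
check_new_range(rows, "500", "46.232.0.100", "46.232.0.130")  # no error raised
```

Under this model the reporter's new .100-.130 range would be accepted on either vlan, matching the pre-4.3 behaviour described above.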
[jira] [Commented] (CLOUDSTACK-6801) Public IP not assigned to eth1 on VR in VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14013979#comment-14013979 ] Andrija Panic commented on CLOUDSTACK-6801:
---
This bug is present while using an untagged vlan for the Public IP network - the issue is (temporarily?) resolved by replacing "untagged" with "vlan://untagged" in the vlan_id field of the vlan database table. Verified on the DEV mailing list together with Joris van Lieshout, Daan, Marcus, etc.
Marcus' explanation: "In 4.3, some changes were made to BroadcastDomainType, to standardize Broadcast URIs to prepend vlan://. The issue is that your IpAssocVpcCommand doesn't use this new format for the broadcastUri it passes, so it fails to map the plugged device into the broadcastUriToNicNum map, resulting in ethnull."

> Public IP not assigned to eth1 on VR in VPC
> --
>
> Key: CLOUDSTACK-6801
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6801
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public (Anyone can view this level - this is the default.)
> Components: Virtual Router
> Affects Versions: 4.3.0
> Environment: CentOS, KVM.
> Reporter: Andrija Panic
> Priority: Blocker
> Labels: publicip, virtualrouter, vpc
>
> Hi,
> after the upgrade from 4.2.1 to 4.3.0, the Public IP on eth1 is missing on the VR when creating new (and on existing) VPCs, although eth1 appears present per /proc/net/dev.
> Management logs are fine, eth1 is plugged into the correct bridge, etc.
> Manually adding the IP on eth1 and bringing eth1 up does work.
>
> From /var/log/messages inside the VR:
> May 28 18:27:36 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 0 seconds
> May 28 18:27:37 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 1 seconds
> May 28 18:27:38 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 2 seconds
> May 28 18:27:39 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 3 seconds
> May 28 18:27:40 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 4 seconds
> May 28 18:27:41 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 5 seconds
> May 28 18:27:42 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 6 seconds
> May 28 18:27:43 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 7 seconds
> May 28 18:27:44 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 8 seconds
> May 28 18:27:45 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 9 seconds
> May 28 18:27:46 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 10 seconds
> May 28 18:27:47 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 11 seconds
> May 28 18:27:48 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 12 seconds
> May 28 18:27:49 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 13 seconds
> May 28 18:27:50 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 14 seconds
> May 28 18:27:51 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 15 seconds
> May 28 18:27:52 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 16 seconds
> May 28 18:27:53 r-799-VM cloud: vpc_ipassoc.sh:interface ethnull never appeared
> May 28 18:27:53 r-799-VM cloud: vpc_ipassoc.sh:Adding ip 46.232.x.246 on interface ethnull
> May 28 18:27:53 r-799-VM cloud: vpc_ipassoc.sh:Add routing 46.232.x.246 on interface ethnull
> May 28 18:27:53 r-799-VM cloud: vpc_privateGateway.sh:Added SourceNAT 46.232.x.246 on interface ethnull
> May 28 18:27:53 r-799-VM cloud: vpc_snat.sh:Added SourceNAT 46.232.x.246 on interface eth1
> May 28 18:27:54 r-799-VM cloud: vpc_guestnw.sh: Create network on interface eth2, gateway 10.0.1.1, network 10.0.1.1/24
> May 28 18:27:59 r-799-VM cloud: Setting up apache web server for eth2
> May 28 18:27:59 r-799-VM cloud: Setting up password service for network 10.0.1.1/24, eth eth2
> May 28 18:27:59 r-799-VM cloud: vpc_guestnw.sh: Create network on interface eth3, gateway 10.0.3.1, network 10.0.3.1/24
> May 28 18:28:04 r-799-VM cloud: Setting up apache web server for eth3
> May 28 18:28:06 r-799-VM cloud: Setting up password service for network 10.0.3.1/24, eth eth3
> May 28 18:28:06 r-799-VM cloud: vpc_guestnw.sh: Create network on interface eth4, gateway 10.0.4.1, network 10.0.4.1/24
> May 28 18:28:11 r-799-VM cloud: Setting up apache web server for eth4
> May 28 18:28:12 r-799-VM cloud: Setting up password service for network 10.0.4.1/24, eth eth4
> May 28 18:28:13 r-799-VM cloud: vpc_guestnw.sh: Create network on interface
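Marcus' diagnosis above can be modelled in a few lines. This is only a sketch of the mechanism, not the actual BroadcastDomainType or VR plumbing code, and the map contents are hypothetical; Java's `"eth" + null` string concatenation is what literally produces the "ethnull" seen in the logs:

```python
# NICs keyed by their broadcast URI, as the broadcastUriToNicNum map would
# hold them after the devices are plugged (hypothetical contents).
broadcast_uri_to_nic_num = {
    "vlan://untagged": 1,  # public NIC
    "vlan://500": 2,       # a guest tier NIC
}

def device_for(broadcast_uri: str) -> str:
    # A missed lookup yields None; concatenating it mirrors Java's
    # "eth" + null, which is exactly the "ethnull" the scripts wait for.
    nic = broadcast_uri_to_nic_num.get(broadcast_uri)
    return "eth" + ("null" if nic is None else str(nic))

def normalize(uri: str) -> str:
    # The 4.3 standardization: broadcast URIs carry a vlan:// scheme.
    return uri if uri.startswith("vlan://") else "vlan://" + uri

print(device_for("untagged"))             # prints "ethnull" - the bug
print(device_for(normalize("untagged")))  # prints "eth1"
```

This also shows why the database workaround works: rewriting the stored value to "vlan://untagged" makes the old-format lookup key match the new-format map key.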
[jira] [Comment Edited] (CLOUDSTACK-6801) Public IP not assigned to eth1 on VR in VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14013979#comment-14013979 ] Andrija Panic edited comment on CLOUDSTACK-6801 at 5/30/14 5:36 PM:
---
This bug is present while using an untagged vlan for the Public IP network - the issue is (temporarily?) resolved by replacing "untagged" with "vlan://untagged" in the vlan_id field of the vlan database table. Verified on the DEV mailing list together with Joris van Lieshout, Daan, Marcus, etc.
Marcus' explanation: "In 4.3, some changes were made to BroadcastDomainType, to standardize Broadcast URIs to prepend vlan://. The issue is that your IpAssocVpcCommand doesn't use this new format for the broadcastUri it passes, so it fails to map the plugged device into the broadcastUriToNicNum map, resulting in ethnull."
[jira] [Commented] (CLOUDSTACK-6801) Public IP not assigned to eth1 on VR in VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14012178#comment-14012178 ] Andrija Panic commented on CLOUDSTACK-6801:
---
Hi Jayapal,
I will try this, but this was not a problem in the last few ACS versions I have used. Yes, the Public vlan is untagged; I will try this as a workaround - though I'm not sure it is possible to change an existing Public IP vlan from untagged to tagged?
Regards,
Andrija
--
This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CLOUDSTACK-6801) Public IP not assigned to eth1 on VR in VPC
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14011493#comment-14011493 ] Andrija Panic commented on CLOUDSTACK-6801:
---
Tried to add a new IP and a Port Forwarding rule: effectively, this failed inside the VR, but was reported as "successful" in the CS GUI.
/var/log/messages:
May 28 19:27:52 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 0 seconds
May 28 19:27:53 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 1 seconds
May 28 19:27:54 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 2 seconds
May 28 19:27:55 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 3 seconds
May 28 19:27:56 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 4 seconds
May 28 19:27:57 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 5 seconds
May 28 19:27:58 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 6 seconds
May 28 19:27:59 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 7 seconds
May 28 19:28:00 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 8 seconds
May 28 19:28:01 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 9 seconds
May 28 19:28:02 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 10 seconds
May 28 19:28:03 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 11 seconds
May 28 19:28:04 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 12 seconds
May 28 19:28:05 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 13 seconds
May 28 19:28:06 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 14 seconds
May 28 19:28:07 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 15 seconds
May 28 19:28:08 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 16 seconds
May 28 19:28:09 r-799-VM cloud: vpc_ipassoc.sh:interface ethnull never appeared
May 28 19:28:09 r-799-VM cloud: vpc_ipassoc.sh:Adding ip 46.232.x.248 on interface ethnull
May 28 19:28:09 r-799-VM cloud: vpc_ipassoc.sh:Add routing 46.232.x.248 on interface ethnull
May 28 19:28:09 r-799-VM cloud: vpc_portforwarding.sh: creating port fwd entry for PAT: public ip=46.232.x.248 instance ip=10.0.6.112 proto=tcp port= dport= op=-A
May 28 19:28:09 r-799-VM cloud: vpc_portforwarding.sh: creating port fwd entry for PAT: public ip=46.232.x.248 instance ip=10.0.6.112 proto=tcp port= dport= op=-D
May 28 19:28:09 r-799-VM cloud: vpc_portforwarding.sh: done port fwd entry for PAT: public ip=46.232.x.248 op=-D result=1
May 28 19:28:09 r-799-VM cloud: vpc_portforwarding.sh: done port fwd entry for PAT: public ip=46.232.x.248 op=-A result=0
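For context, a PAT entry like the ones logged above ultimately becomes an iptables DNAT rule on the VR. The helper below is a simplified, hypothetical reconstruction (not the real vpc_portforwarding.sh, which is a shell script), with made-up port numbers since the quoted log shows `port=` and `dport=` blank; `op` is "-A" to append the rule or "-D" to delete it, matching the log:

```python
def port_fwd_rule(op: str, public_ip: str, instance_ip: str,
                  proto: str, port: int, dport: int) -> str:
    """Build a DNAT rule string of the kind a port-forwarding script would
    pass to iptables: traffic hitting public_ip:port is rewritten to
    instance_ip:dport."""
    return (f"iptables -t nat {op} PREROUTING -d {public_ip}/32 -p {proto} "
            f"--dport {port} -j DNAT --to-destination {instance_ip}:{dport}")

# Illustrative values only; the real public IP octet and ports are masked above.
print(port_fwd_rule("-A", "46.232.0.248", "10.0.6.112", "tcp", 80, 8080))
```

The `result=1` for the delete in the log is consistent with iptables failing to remove a rule that was never successfully installed, while the NAT rule itself (`op=-A result=0`) can succeed even though the public IP was never configured on any interface.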
[jira] [Created] (CLOUDSTACK-6801) Public IP not assigned to eth1 on VR in VPC
Andrija Panic created CLOUDSTACK-6801:
--
Summary: Public IP not assigned to eth1 on VR in VPC
Key: CLOUDSTACK-6801
URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6801
Project: CloudStack
Issue Type: Bug
Security Level: Public (Anyone can view this level - this is the default.)
Components: Virtual Router
Affects Versions: 4.3.0
Environment: CentOS, KVM.
Reporter: Andrija Panic
Priority: Blocker

Hi,
after the upgrade from 4.2.1 to 4.3.0, the Public IP on eth1 is missing on the VR when creating new (and on existing) VPCs, although eth1 appears present per /proc/net/dev.
Management logs are fine, eth1 is plugged into the correct bridge, etc.
Manually adding the IP on eth1 and bringing eth1 up does work.

From /var/log/messages inside the VR:
May 28 18:27:36 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 0 seconds
May 28 18:27:37 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 1 seconds
May 28 18:27:38 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 2 seconds
May 28 18:27:39 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 3 seconds
May 28 18:27:40 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 4 seconds
May 28 18:27:41 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 5 seconds
May 28 18:27:42 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 6 seconds
May 28 18:27:43 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 7 seconds
May 28 18:27:44 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 8 seconds
May 28 18:27:45 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 9 seconds
May 28 18:27:46 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 10 seconds
May 28 18:27:47 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 11 seconds
May 28 18:27:48 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 12 seconds
May 28 18:27:49 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 13 seconds
May 28 18:27:50 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 14 seconds
May 28 18:27:51 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 15 seconds
May 28 18:27:52 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 16 seconds
May 28 18:27:53 r-799-VM cloud: vpc_ipassoc.sh:interface ethnull never appeared
May 28 18:27:53 r-799-VM cloud: vpc_ipassoc.sh:Adding ip 46.232.x.246 on interface ethnull
May 28 18:27:53 r-799-VM cloud: vpc_ipassoc.sh:Add routing 46.232.x.246 on interface ethnull
May 28 18:27:53 r-799-VM cloud: vpc_privateGateway.sh:Added SourceNAT 46.232.x.246 on interface ethnull
May 28 18:27:53 r-799-VM cloud: vpc_snat.sh:Added SourceNAT 46.232.x.246 on interface eth1
May 28 18:27:54 r-799-VM cloud: vpc_guestnw.sh: Create network on interface eth2, gateway 10.0.1.1, network 10.0.1.1/24
May 28 18:27:59 r-799-VM cloud: Setting up apache web server for eth2
May 28 18:27:59 r-799-VM cloud: Setting up password service for network 10.0.1.1/24, eth eth2
May 28 18:27:59 r-799-VM cloud: vpc_guestnw.sh: Create network on interface eth3, gateway 10.0.3.1, network 10.0.3.1/24
May 28 18:28:04 r-799-VM cloud: Setting up apache web server for eth3
May 28 18:28:06 r-799-VM cloud: Setting up password service for network 10.0.3.1/24, eth eth3
May 28 18:28:06 r-799-VM cloud: vpc_guestnw.sh: Create network on interface eth4, gateway 10.0.4.1, network 10.0.4.1/24
May 28 18:28:11 r-799-VM cloud: Setting up apache web server for eth4
May 28 18:28:12 r-799-VM cloud: Setting up password service for network 10.0.4.1/24, eth eth4
May 28 18:28:13 r-799-VM cloud: vpc_guestnw.sh: Create network on interface eth5, gateway 10.0.6.1, network 10.0.6.1/24
May 28 18:28:18 r-799-VM cloud: Setting up apache web server for eth5
May 28 18:28:19 r-799-VM cloud: Setting up password service for network 10.0.6.1/24, eth eth5
Nothing else useful in other logs...
--
This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (CLOUDSTACK-6800) Public IP not assigned to VR (vpc)
Andrija Panic created CLOUDSTACK-6800:
--------------------------------------

             Summary: Public IP not assigned to VR (vpc)
                 Key: CLOUDSTACK-6800
                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6800
             Project: CloudStack
          Issue Type: Bug
      Security Level: Public (Anyone can view this level - this is the default.)
          Components: Network Devices, Virtual Router
    Affects Versions: 4.3.0
         Environment: CentOS 6.5, Libvirt 1.2.3, ACS 4.3.1
            Reporter: Andrija Panic
            Priority: Blocker

Hi, after the upgrade from 4.2.1 to 4.3.0, the Public IP on eth1 is missing, although eth1 is present per /proc/net/dev. Management logs are fine, eth1 is plugged into the correct bridge, etc. Manually adding the IP on eth1 and bringing eth1 up does work.

From /var/log/messages inside the VR:

May 28 18:27:36 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 0 seconds
May 28 18:27:37 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 1 seconds
May 28 18:27:38 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 2 seconds
May 28 18:27:39 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 3 seconds
May 28 18:27:40 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 4 seconds
May 28 18:27:41 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 5 seconds
May 28 18:27:42 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 6 seconds
May 28 18:27:43 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 7 seconds
May 28 18:27:44 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 8 seconds
May 28 18:27:45 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 9 seconds
May 28 18:27:46 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 10 seconds
May 28 18:27:47 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 11 seconds
May 28 18:27:48 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 12 seconds
May 28 18:27:49 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 13 seconds
May 28 18:27:50 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 14 seconds
May 28 18:27:51 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 15 seconds
May 28 18:27:52 r-799-VM cloud: vpc_ipassoc.sh:Waiting for interface ethnull to appear, 16 seconds
May 28 18:27:53 r-799-VM cloud: vpc_ipassoc.sh:interface ethnull never appeared
May 28 18:27:53 r-799-VM cloud: vpc_ipassoc.sh:Adding ip 46.232.x.246 on interface ethnull
May 28 18:27:53 r-799-VM cloud: vpc_ipassoc.sh:Add routing 46.232.x.246 on interface ethnull
May 28 18:27:53 r-799-VM cloud: vpc_privateGateway.sh:Added SourceNAT 46.232.x.246 on interface ethnull
May 28 18:27:53 r-799-VM cloud: vpc_snat.sh:Added SourceNAT 46.232.x.246 on interface eth1
May 28 18:27:54 r-799-VM cloud: vpc_guestnw.sh: Create network on interface eth2, gateway 10.0.1.1, network 10.0.1.1/24
May 28 18:27:59 r-799-VM cloud: Setting up apache web server for eth2
May 28 18:27:59 r-799-VM cloud: Setting up password service for network 10.0.1.1/24, eth eth2
May 28 18:27:59 r-799-VM cloud: vpc_guestnw.sh: Create network on interface eth3, gateway 10.0.3.1, network 10.0.3.1/24
May 28 18:28:04 r-799-VM cloud: Setting up apache web server for eth3
May 28 18:28:06 r-799-VM cloud: Setting up password service for network 10.0.3.1/24, eth eth3
May 28 18:28:06 r-799-VM cloud: vpc_guestnw.sh: Create network on interface eth4, gateway 10.0.4.1, network 10.0.4.1/24
May 28 18:28:11 r-799-VM cloud: Setting up apache web server for eth4
May 28 18:28:12 r-799-VM cloud: Setting up password service for network 10.0.4.1/24, eth eth4
May 28 18:28:13 r-799-VM cloud: vpc_guestnw.sh: Create network on interface eth5, gateway 10.0.6.1, network 10.0.6.1/24
May 28 18:28:18 r-799-VM cloud: Setting up apache web server for eth5
May 28 18:28:19 r-799-VM cloud: Setting up password service for network 10.0.6.1/24, eth eth5

Nothing else useful in the other logs...

-- This message was sent by Atlassian JIRA (v6.2#6252)
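The bogus name "ethnull" in the log suggests the device index handed to vpc_ipassoc.sh was the literal string "null", so the script waits for an interface that can never exist. A minimal sketch of that failure mode (this is NOT the actual CloudStack script; the variable names and the 3-second cap are hypothetical, and the real script waits about 17 seconds as the log shows):

```shell
# Hedged sketch, not the real vpc_ipassoc.sh: shows how an unresolved
# NIC device index turns into the literal interface name "ethnull".
device="null"          # hypothetical: management server failed to pass a NIC index
ethDev="eth${device}"  # yields "ethnull" instead of e.g. "eth1"

# Wait-loop shape matching the repeated one-per-second log messages
# (capped at 3 iterations here to keep the sketch short):
timer=0
while [ "$timer" -lt 3 ] && [ ! -d "/sys/class/net/$ethDev" ]; do
  echo "Waiting for interface $ethDev to appear, $timer seconds"
  sleep 1
  timer=$((timer + 1))
done
```

Since "ethnull" never appears under /sys/class/net, the loop runs to its cap and the script then applies the IP, route, and SourceNAT rules against the nonexistent interface, which matches the log lines above.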
[jira] [Created] (CLOUDSTACK-5543) Can't delete Secondary Storage NFS when ISOs exist on it
Andrija Panic created CLOUDSTACK-5543:
--------------------------------------

             Summary: Can't delete Secondary Storage NFS when ISOs exist on it
                 Key: CLOUDSTACK-5543
                 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5543
             Project: CloudStack
          Issue Type: Bug
      Security Level: Public (Anyone can view this level - this is the default.)
          Components: Management Server
    Affects Versions: 4.2.0
         Environment: CentOS 6.5, NFS used for Secondary Storage
            Reporter: Andrija Panic

It is not possible to remove a Secondary Storage NFS store because ISO files still exist on it.

Details: after moving all public VM templates from this "old" NFS store to the new NFS secondary storage (by rsync, plus updating the template_store_ref table to point to the new NFS store - public templates are NOT replicated the same way ISOs are, although they should be, to my knowledge), it is still not possible to delete/remove the NFS secondary storage from the GUI, because ISOs still exist on this "old" NFS storage. There is no option to remove the ISO file references for one particular NFS store. So either add a GUI option to remove ISOs from a store prior to its removal, or completely ignore the fact that ISOs exist on it (since ISOs are replicated to the available secondary storages in the zone).

I managed to resolve my problem with:

DELETE FROM cloud.template_store_ref
 WHERE store_id = ID_OF_NFS_TO_BE_DELETE
   AND template_id IN (SELECT id FROM vm_template
                       WHERE type = 'user' AND format = 'iso');

-- This message was sent by Atlassian JIRA (v6.1.4#6159)
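Since the workaround above edits the management server's database directly, it is safer to review the affected rows before deleting them. A minimal sketch that emits a SELECT followed by the same DELETE (the store id 42 is a placeholder for ID_OF_NFS_TO_BE_DELETE; the output is meant to be piped into the mysql client against the cloud database, which this sketch only prints, not runs):

```shell
#!/bin/sh
# Hedged sketch: parameterised form of the manual cleanup query from the
# report, with a review SELECT using the same WHERE condition first.
STORE_ID="${STORE_ID:-42}"   # placeholder id of the NFS store being retired

sql="SELECT id, template_id FROM cloud.template_store_ref
 WHERE store_id = ${STORE_ID}
   AND template_id IN (SELECT id FROM vm_template
                       WHERE type = 'user' AND format = 'iso');
DELETE FROM cloud.template_store_ref
 WHERE store_id = ${STORE_ID}
   AND template_id IN (SELECT id FROM vm_template
                       WHERE type = 'user' AND format = 'iso');"

# Print the statements; run the SELECT alone first, then the DELETE,
# e.g.: printf '%s\n' "$sql" | mysql cloud
printf '%s\n' "$sql"
```

Keeping the WHERE clause identical in both statements guarantees the DELETE removes exactly the rows the SELECT showed.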