[jira] [Commented] (CLOUDSTACK-3367) When one primary storage fails, all XenServer hosts get rebooted, killing all VMs, even those not on this primary storage.

2016-02-09 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138908#comment-15138908
 ] 

France commented on CLOUDSTACK-3367:


:-/ Three years later, the same issue persists. And it is not just this one.
This is why we have given up on CS and are slowly migrating to Proxmox VE.

> When one primary storage fails, all XenServer hosts get rebooted, killing all 
> VMs, even those not on this primary storage.
> --
>
> Key: CLOUDSTACK-3367
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3367
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, XenServer
>Affects Versions: 4.1.0, 4.2.0, 4.5.0, 4.3.1
> Environment: CentOS 6.3, XenServer 6.0.2 + all hotfixes, CloudStack 
> 4.1.0
>Reporter: France
>Assignee: Abhinandan Prateek
> Fix For: Future
>
>
> As the title says: if only one of the primary storages fails, all XenServer 
> hosts get rebooted one by one. Because I have many primary storages, which 
> are/were running fine with other VMs, rebooting the XenServer hypervisor is 
> overkill. Please disable this, or implement only stopping/killing the VMs 
> running on that storage and re-attaching that storage.
> The problem was reported on the mailing list, together with a workaround for 
> XenServer, so I'm not the only one hit by this "bug/feature". The workaround 
> for now is as follows:
> 1. Modify /opt/xensource/bin/xenheartbeat.sh on all your hosts, commenting 
> out the two entries which have "reboot -f"
> 2. Identify the PID of the script: pidof -x xenheartbeat.sh
> 3. Restart the script: kill the PID found above
> 4. Force-reconnect the host from the UI; the script will then relaunch on 
> reconnect
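The workaround steps quoted above can be sketched as a script. The path and script name come from the comment itself (XenServer 6.x dom0); verify them on your own hosts and keep a backup before editing anything:

```shell
# Sketch of the quoted workaround; guarded so it is a no-op off a XenServer host.
HB=${HB:-/opt/xensource/bin/xenheartbeat.sh}
if [ -f "$HB" ]; then
  # 1. Comment out the lines containing "reboot -f" (backup kept as .orig)
  sed -i.orig 's/^\([[:space:]]*\)reboot -f/\1# reboot -f/' "$HB"
  # 2. Identify the PID of the running heartbeat script
  PID=$(pidof -x xenheartbeat.sh)
  # 3. Kill it; CloudStack relaunches the script when the host reconnects
  [ -n "$PID" ] && kill "$PID"
fi
# 4. Force-reconnect the host from the CloudStack UI
```

This only disables the fencing reboot; the heartbeat script itself keeps running after the host reconnects.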



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-8014) Failed to create a volume from snapshot - Discrepency in the resource count ?

2014-12-16 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14248231#comment-14248231
 ] 

France commented on CLOUDSTACK-8014:


I have just tested this, and it works as stated by Rohit.
I updated the "removed" field to NULL; then I was able to create the volume.
This bug can be closed. The solution is confirmed.

Later today I will restore the old value of the "removed" field for that 
snapshot, to keep the DB consistent.
We should have 4.3.2, which contains this fix, soon anyway.

Thank you, Rohit, for your efforts and for squashing that bug. It is much appreciated.
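The database fix described above can be sketched as follows. It assumes the snapshot row lives in a `snapshots` table of the `cloud` database with `uuid` and `removed` columns (an assumption about the schema), and uses the snapshot UUID from the logs below; back up the database before running anything like this:

```shell
# Sketch only: generate the SQL to record the old "removed" value, then clear it
# so the snapshot becomes usable again. Table/column names are assumptions.
SNAP_UUID='49e4bba9-844d-4b7b-aca2-95a318f17dc4'
SQL=$(cat <<EOF
-- note the old value first, so it can be restored to keep the DB consistent
SELECT id, removed FROM snapshots WHERE uuid = '${SNAP_UUID}';
UPDATE snapshots SET removed = NULL WHERE uuid = '${SNAP_UUID}';
EOF
)
printf '%s\n' "$SQL"
# when ready, feed it to the management server DB:
#   printf '%s\n' "$SQL" | mysql -u cloud -p cloud
```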


> Failed to create a volume from snapshot - Discrepency in the resource count ?
> -
>
> Key: CLOUDSTACK-8014
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8014
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.1
>Reporter: France
>Assignee: Rohit Yadav
>
> As sent to mailing list:
> Hi all.
> We are on XS 6.0.2+Hotfixes and CS 4.3.1.
> (All the errors we are getting nowadays have come up only after the upgrade 
> from 4.1.1; 4.1.1 worked perfectly.)
> After successfully creating a snapshot, I want to create a volume from it so 
> it can be downloaded offsite.
> After clicking “Create volume” I get the error “Failed to create a volume”.
> If I go to the list of volumes, there is a volume with the name as defined, 
> but its Status is empty, and only buttons for attach and destroy exist.
> I have taken a look at catalina.out and management-server.log. Here is the 
> log detailing a failure.
> Can you see the problem? How should I fix it? Please advise me.
> Should I create a bug report?
> I have also manually checked space on all storages and it seems there is LOTS 
> of free space. Resources based on CS Web GUI are:
> Primary Storage Allocated 15%
> Secondary Storage 5%
> 2014-12-03 12:20:42,493 DEBUG [c.c.c.ConsoleProxyManagerImpl] 
> (consoleproxy-1:ctx-2d51baa2) Zone 1 is ready to launch console proxy
> 2014-12-03 12:20:42,580 DEBUG [c.c.s.s.SecondaryStorageManagerImpl] 
> (secstorage-1:ctx-53354d18) Zone 1 is ready to launch secondary storage VM
> 2014-12-03 12:20:42,894 DEBUG [c.c.a.ApiServlet] 
> (http-6443-exec-108:ctx-a228ce49) ===START===  111.client.ip.111 -- GET 
> command=listZones&available=true&response=json&sessionkey=CENSORED%3D&_=1417605814964
> 2014-12-03 12:20:42,909 DEBUG [c.c.a.ApiServlet] 
> (http-6443-exec-108:ctx-a228ce49 ctx-74e696ca) ===END=== 111.client.ip.111 -- 
> GET 
> command=listZones&available=true&response=json&sessionkey=CENSORED%3D&_=1417605814964
> 2014-12-03 12:20:46,482 DEBUG [c.c.a.ApiServlet] 
> (http-6443-exec-107:ctx-4a28e57c) ===START===  111.client.ip.111 -- GET 
> command=createVolume&response=json&sessionkey=CENSORED%3D&snapshotid=49e4bba9-844d-4b7b-aca2-95a318f17dc4&name=testbrisi567&_=1417605818552
> 2014-12-03 12:20:46,490 DEBUG [c.c.u.AccountManagerImpl] 
> (http-6443-exec-107:ctx-4a28e57c ctx-d7a11540) Access to 
> Acct[7d6da3bc-0d70-44eb-8a70-422e4eea184e-userName] granted to 
> Acct[7d6da3bc-0d70-44eb-8a70-422e4eea184e-userName] by DomainChecker
> 2014-12-03 12:20:46,494 DEBUG [c.c.u.AccountManagerImpl] 
> (http-6443-exec-107:ctx-4a28e57c ctx-d7a11540) Access to 
> Acct[7d6da3bc-0d70-44eb-8a70-422e4eea184e-userName] granted to 
> Acct[7d6da3bc-0d70-44eb-8a70-422e4eea184e-userName] by DomainChecker
> 2014-12-03 12:20:46,507 DEBUG [c.c.u.AccountManagerImpl] 
> (http-6443-exec-107:ctx-4a28e57c ctx-d7a11540) Access to 
> com.cloud.storage.SnapshotVO$$EnhancerByCGLIB$$3490cea0@60c95391 granted to 
> Acct[7d6da3bc-0d70-44eb-8a70-422e4eea184e-userName] by DomainChecker
> 2014-12-03 12:20:46,571 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
> (Job-Executor-166:ctx-d915e890) Add job-2804 into job monitoring
> 2014-12-03 12:20:46,571 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (Job-Executor-166:ctx-d915e890) Executing AsyncJobVO {id:2804, userId: 59, 
> accountId: 60, instanceType: Volume, instanceId: 617, cmd: 
> org.apache.cloudstack.api.command.user.volume.CreateVolumeCmd, cmdInfo: 
> {"id":"617","response":"json","sessionkey":"CENSORED\u003d","cmdEventType":"VOLUME.CREATE","ctxUserId":"59","snapshotid":"49e4bba9-844d-4b7b-aca2-95a318f17dc4","name":"testbrisi567","httpmethod":"GET","_":"1417605818552","ctxAccountId":"60","ctxStartEventId":"62488"},
>  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
> null, initMsid: 95545481387, completeMsid: null, lastUpdated: null, 
> lastPolled: null, created: null}
> 2014-12-03 12:20:46,577 DEBUG [c.c.u.AccountManagerImpl] 
> (Job-Executor-166:ctx-d915e890 ctx-d7a11540) Access to 
> Acct[7d6da3bc-0d70-44eb-8a70-422e4eea184e-userName] granted to 
> Acct[7d6

[jira] [Commented] (CLOUDSTACK-8014) Failed to create a volume from snapshot - Discrepency in the resource count ?

2014-12-12 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14243978#comment-14243978
 ] 

France commented on CLOUDSTACK-8014:


Hi Rohit,
thank you again for your time. It is much appreciated.

Before I "undelete" that snapshot in the DB, please answer this:
if that were the case, should creating a template from this snapshot not also 
fail (due to the deleted template)?
In my case it does not; only creating a volume fails.


[jira] [Commented] (CLOUDSTACK-8014) Failed to create a volume from snapshot - Discrepency in the resource count ?

2014-12-11 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14242404#comment-14242404
 ] 

France commented on CLOUDSTACK-8014:


The template from which this VM was created has actually been deleted, judging 
by the name of the template (deleteD :-)
At first the account was not able to create a VM, because it had its template 
limit set to 0.
I have since increased this maximum template value to 2 and was able to create 
a template from that snapshot _successfully_.
I then proceeded to create a volume from that snapshot, which _failed again_ 
with the error below.
I also tried to download the template, which failed because it was not 
extractable.

I would think this bug is still not solved and, unless you are convinced 
otherwise, would recommend re-opening it. I can re-open it, if you wish.
_If additional info is required, including monitored access to this ACS setup, 
it can be arranged. Just let me know._

I believe the key to this issue lies within these two lines:
//
2014-12-11 11:46:43,667 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
(Job-Executor-56:ctx-3adba05f ctx-f361c79d) Removing pool Pool[208|IscsiLUN] 
from avoid set, must have been inserted when searching for another disk's tag
2014-12-11 11:46:43,667 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
(Job-Executor-56:ctx-3adba05f ctx-f361c79d) Removing pool Pool[209|IscsiLUN] 
from avoid set, must have been inserted when searching for another disk's tag
//
I guess it has something to do with storage tags. It looks like the allocator 
cannot find a suitable deployment.

2014-12-11 11:46:43,659 DEBUG [o.a.c.s.a.LocalStoragePoolAllocator] 
(Job-Executor-56:ctx-3adba05f ctx-f361c79d) LocalStoragePoolAllocator trying to 
find storage pool to fit the vm
2014-12-11 11:46:43,660 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
(Job-Executor-56:ctx-3adba05f ctx-f361c79d) ClusterScopeStoragePoolAllocator 
looking for storage pool
2014-12-11 11:46:43,660 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
(Job-Executor-56:ctx-3adba05f ctx-f361c79d) Looking for pools in dc: 1  pod:1  
cluster:null having tags:[HAiscsi2]
2014-12-11 11:46:43,662 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
(Job-Executor-56:ctx-3adba05f ctx-f361c79d) Found pools matching tags: 
[Pool[208|IscsiLUN], Pool[209|IscsiLUN]]
2014-12-11 11:46:43,667 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
(Job-Executor-56:ctx-3adba05f ctx-f361c79d) Adding pool Pool[206|IscsiLUN] to 
avoid set since it did not match tags
2014-12-11 11:46:43,667 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
(Job-Executor-56:ctx-3adba05f ctx-f361c79d) Adding pool Pool[207|IscsiLUN] to 
avoid set since it did not match tags
2014-12-11 11:46:43,667 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
(Job-Executor-56:ctx-3adba05f ctx-f361c79d) Adding pool 
Pool[210|NetworkFilesystem] to avoid set since it did not match tags
2014-12-11 11:46:43,667 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
(Job-Executor-56:ctx-3adba05f ctx-f361c79d) Removing pool Pool[208|IscsiLUN] 
from avoid set, must have been inserted when searching for another disk's tag
2014-12-11 11:46:43,667 DEBUG [o.a.c.s.a.ClusterScopeStoragePoolAllocator] 
(Job-Executor-56:ctx-3adba05f ctx-f361c79d) Removing pool Pool[209|IscsiLUN] 
from avoid set, must have been inserted when searching for another disk's tag
2014-12-11 11:46:43,668 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator] 
(Job-Executor-56:ctx-3adba05f ctx-f361c79d) Checking if storage pool is 
suitable, name: null ,poolId: 208
2014-12-11 11:46:43,673 DEBUG [c.c.s.StorageManagerImpl] 
(Job-Executor-56:ctx-3adba05f ctx-f361c79d) Checking pool 208 for storage, 
totalSize: 1520242262016, usedBytes: 887716052992, usedPct: 0.5839306505101337, 
disable threshold: 0.95
2014-12-11 11:46:43,678 DEBUG [c.c.s.VolumeApiServiceImpl] 
(Job-Executor-56:ctx-3adba05f ctx-f361c79d) Failed to create volume: 621
java.lang.NullPointerException
at 
com.cloud.storage.StorageManagerImpl.storagePoolHasEnoughSpace(StorageManagerImpl.java:1570)
at 
org.apache.cloudstack.storage.allocator.AbstractStoragePoolAllocator.filter(AbstractStoragePoolAllocator.java:199)
at 
org.apache.cloudstack.storage.allocator.ClusterScopeStoragePoolAllocator.select(ClusterScopeStoragePoolAllocator.java:110)
at 
org.apache.cloudstack.storage.allocator.AbstractStoragePoolAllocator.allocateToPool(AbstractStoragePoolAllocator.java:109)
at 
org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.findStoragePool(VolumeOrchestrator.java:256)
at 
org.apache.cloudstack.engine.orchestration.VolumeOrchestrator.createVolumeFromSnapshot(VolumeOrchestrator.java:339)
at 
com.cloud.storage.VolumeApiServiceImpl.createVolumeFromSnapshot(VolumeApiServiceImpl.java:785)
at 
com.cloud.storage.VolumeApiServiceImpl.createVolume(VolumeApiServiceImpl.java:735)
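One hedged way to check the storage-tag hypothesis above is to list each primary storage pool together with its tags, to see which pools actually carry the HAiscsi2 tag the allocator was searching for. The table names (storage_pool, storage_pool_details) are assumptions about the CloudStack 4.3 schema:

```shell
# Sketch: SQL to list pools with their tags. Schema names are assumptions;
# verify against your own cloud database before relying on the result.
SQL='SELECT p.id, p.name, p.status, d.name AS tag
FROM storage_pool p
LEFT JOIN storage_pool_details d ON d.pool_id = p.id
ORDER BY p.id;'
printf '%s\n' "$SQL"
# run against the management server DB when ready:
#   printf '%s\n' "$SQL" | mysql -u cloud -p cloud
```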

[jira] [Issue Comment Deleted] (CLOUDSTACK-8014) Failed to create a volume from snapshot - Discrepency in the resource count ?

2014-12-11 Thread France (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

France updated CLOUDSTACK-8014:
---
Comment: was deleted

(was: Tnx Rohit for your efforts.
I will read your responses with great care and try to do what you ask of me, 
then report back on the JIRA.
This is just a private mail to you, so you know I am working on it; the problem 
is that I have to deal with customers concurrently in a completely different 
field. :-/

Regards,
F.



)


[jira] [Commented] (CLOUDSTACK-8014) Failed to create a volume from snapshot - Discrepency in the resource count ?

2014-12-10 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14241094#comment-14241094
 ] 

France commented on CLOUDSTACK-8014:


Tnx Rohit for your efforts.
I will read your responses with great care and try to do what you ask of me, 
then report back on the JIRA.
This is just a private mail to you, so you know I am working on it; the problem 
is that I have to deal with customers concurrently in a completely different 
field. :-/

Regards,
F.






[jira] [Commented] (CLOUDSTACK-8014) Failed to create a volume from snapshot - Discrepency in the resource count ?

2014-12-09 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239395#comment-14239395
 ] 

France commented on CLOUDSTACK-8014:


I am able to create it for _most_ virtual instances as well.
However, that is not the case with the one above.

Until the upgrade to 4.3.1 I had never observed the resource-discrepancy 
message. Do you believe it is not working because of that error (message)?
I can create a volume from another snapshot and check whether the same message 
is logged when it works.
Please let me know if you want that, or if you would like to see any other 
information about my setup.

Thank you for your time.


[jira] [Created] (CLOUDSTACK-8044) Failed to create snapshot due to an internal error creating snapshot for volume 372 -> Failure from sparse_dd: Fatal error: exception Invalid_argument("index out of

2014-12-08 Thread France (JIRA)
France created CLOUDSTACK-8044:
--

 Summary: Failed to create snapshot due to an internal error 
creating snapshot for volume 372 -> Failure from sparse_dd: Fatal error: 
exception Invalid_argument("index out of bounds")  
 Key: CLOUDSTACK-8044
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8044
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.3.1
 Environment: XS 6.0.2+Hotfixes, ACS 4.3.1


Reporter: France


After upgrading from CS 4.1.1 to CS 4.3.1, one of the instances cannot get a 
snapshot. There were/are some errors for all instances, as described here:
https://issues.apache.org/jira/browse/CLOUDSTACK-8013
but this is currently the only instance where snapshots are actually not 
working. After the upgrade, this instance had its root disk size extended and 
the cloud.volumes.size field updated manually in the DB accordingly.
Suresh Babu has reportedly tried the same procedure, but did not get the 
error we did.

Here is the management server log:
2014-12-08 08:46:35,858 DEBUG [c.c.a.ApiServlet] 
(http-6443-exec-121:ctx-5b80bdcb) ===START===  XX.XX.XX.XX -- GET  
command=createSnapshot&volumeid=e1cf5716-4af1-47cf-8f8d-84979ca55183&quiescevm=false&response=json&sessionkey=CENSOREDo%3D&_=1418024978323
2014-12-08 08:46:35,869 DEBUG [c.c.u.AccountManagerImpl] 
(http-6443-exec-121:ctx-5b80bdcb ctx-992184e6) Access to 
Acct[2f00e8d9-77b7-41eb-9aa4-2bf884268a3d-leoL] granted to 
Acct[2f00e8d9-77b7-41eb-9aa4-2bf884268a3d-leoL] by DomainChecker
2014-12-08 08:46:35,882 DEBUG [c.c.u.AccountManagerImpl] 
(http-6443-exec-121:ctx-5b80bdcb ctx-992184e6) Access to 
org.apache.cloudstack.storage.volume.VolumeObject@6329ffac granted to 
Acct[2f00e8d9-77b7-41eb-9aa4-2bf884268a3d-leoL] by DomainChecker
2014-12-08 08:46:35,951 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
(Job-Executor-20:ctx-f5bed97c) Add job-2834 into job monitoring
2014-12-08 08:46:35,951 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
(Job-Executor-20:ctx-f5bed97c) Executing AsyncJobVO {id:2834, userId: 43, 
accountId: 45, instanceType: Snapshot, instanceId: 1224, cmd: 
org.apache.cloudstack.api.command.user.snapshot.CreateSnapshotCmd, cmdInfo: 
{"id":"1224","response":"json","sessionkey":"CENSOREDo\u003d","cmdEventType":"SNAPSHOT.CREATE","ctxUserId":"43","httpmethod":"GET","volumeid":"e1cf5716-4af1-47cf-8f8d-84979ca55183","_":"1418024978323","quiescevm":"false","ctxAccountId":"45","ctxStartEventId":"62631"},
 cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
null, initMsid: 95545481387, completeMsid: null, lastUpdated: null, lastPolled: 
null, created: null}
2014-12-08 08:46:35,951 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
(http-6443-exec-121:ctx-5b80bdcb ctx-992184e6) submit async job-2834, details: 
AsyncJobVO {id:2834, userId: 43, accountId: 45, instanceType: Snapshot, 
instanceId: 1224, cmd: 
org.apache.cloudstack.api.command.user.snapshot.CreateSnapshotCmd, cmdInfo: 
{"id":"1224","response":"json","sessionkey":"CENSOREDo\u003d","cmdEventType":"SNAPSHOT.CREATE","ctxUserId":"43","httpmethod":"GET","volumeid":"e1cf5716-4af1-47cf-8f8d-84979ca55183","_":"1418024978323","quiescevm":"false","ctxAccountId":"45","ctxStartEventId":"62631"},
 cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
null, initMsid: 95545481387, completeMsid: null, lastUpdated: null, lastPolled: 
null, created: null}
2014-12-08 08:46:35,953 DEBUG [c.c.a.ApiServlet] 
(http-6443-exec-121:ctx-5b80bdcb ctx-992184e6) ===END===  XX.XX.XX.XX -- GET  
command=createSnapshot&volumeid=e1cf5716-4af1-47cf-8f8d-84979ca55183&quiescevm=false&response=json&sessionkey=CENSOREDo%3D&_=1418024978323
2014-12-08 08:46:35,959 DEBUG [c.c.u.AccountManagerImpl] 
(Job-Executor-20:ctx-f5bed97c ctx-992184e6) Access to 
Acct[2f00e8d9-77b7-41eb-9aa4-2bf884268a3d-leoL] granted to 
Acct[2f00e8d9-77b7-41eb-9aa4-2bf884268a3d-leoL] by DomainChecker
2014-12-08 08:46:35,983 INFO  [o.a.c.a.c.u.s.CreateSnapshotCmd] 
(Job-Executor-20:ctx-f5bed97c ctx-992184e6) VOLSS: createSnapshotCmd 
starts:1418024795983
2014-12-08 08:46:36,073 DEBUG [c.c.a.t.Request] (Job-Executor-20:ctx-f5bed97c 
ctx-992184e6) Seq 4-2104434319: Sending  { Cmd , MgmtId: 95545481387, via: 
4(x4.c.some.domain), Ver: v1, Flags: 100011, 
[{"org.apache.cloudstack.storage.command.CreateObjectCommand":{"data":{"org.apache.cloudstack.storage.to.SnapshotObjectTO":{"volume":{"uuid":"e1cf5716-4af1-47cf-8f8d-84979ca55183","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"90c57ad9-fd30-39a9-930b-2163adc8798d","id":209,"poolType":"IscsiLUN","host":"some.storage.fqdn.1","path":"/iqn.2010-03.c.some.domain:storage.c.some.domain.s3/1","port":3260,"url":"IscsiLUN://some.storage.fqdn.1//iqn.2010-03.c.some.domain:storage.c.some

[jira] [Commented] (CLOUDSTACK-8014) Failed to create a volume from snapshot - Discrepancy in the resource count ?

2014-12-05 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14235534#comment-14235534
 ] 

France commented on CLOUDSTACK-8014:


The compute offering that was used to create this root volume has storage tag 
HAiscsi2.
The root disk currently resides on s2iscsi, which has storage tag HAiscsi2.
The snapshot currently resides on one of the secondary NFS storages, which 
have no tags.
The related SQL query (SELECT id,tag FROM cloud.storage_pool_view;) for storage 
IDs and tags returns:
'206','HAiscsi'
'207','HAiscsi'
'208','HAiscsi2'
'209','HAiscsi2'
'210','nonHAnfs'
Those IDs are probably the ones used in the log.


> Failed to create a volume from snapshot - Discrepancy in the resource count ?
> -
>
> Key: CLOUDSTACK-8014
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8014
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.3.1
>Reporter: France
>
> As sent to mailing list:
> Hi all.
> We are on XS 6.0.2+Hotfixes and CS 4.3.1.
> (All errors we are getting nowadays have come up only after the upgrade from 
> 4.1.1; 4.1.1 worked perfectly.)
> After successfully creating a snapshot, I want to create a volume from it, so 
> it can be downloaded offsite.
> After clicking “Create volume” I get the error Failed to create a volume.
> If I go to the list of volumes, there is a volume with the name as defined, 
> but its Status is empty, and only buttons for attach and destroy exist.
> I have taken a look at catalina.out and management-server.log. Here is the 
> log detailing a failure.
> Can you see the problem? How should I fix it? Please advise me.
> Should I create a bug report?
> I have also manually checked space on all storages and it seems there is LOTS 
> of free space. Resources based on CS Web GUI are:
> Primary Storage Allocated 15%
> Secondary Storage 5%
> 2014-12-03 12:20:42,493 DEBUG [c.c.c.ConsoleProxyManagerImpl] 
> (consoleproxy-1:ctx-2d51baa2) Zone 1 is ready to launch console proxy
> 2014-12-03 12:20:42,580 DEBUG [c.c.s.s.SecondaryStorageManagerImpl] 
> (secstorage-1:ctx-53354d18) Zone 1 is ready to launch secondary storage VM
> 2014-12-03 12:20:42,894 DEBUG [c.c.a.ApiServlet] 
> (http-6443-exec-108:ctx-a228ce49) ===START===  111.client.ip.111 -- GET 
> command=listZones&available=true&response=json&sessionkey=CENSORED%3D&_=1417605814964
> 2014-12-03 12:20:42,909 DEBUG [c.c.a.ApiServlet] 
> (http-6443-exec-108:ctx-a228ce49 ctx-74e696ca) ===END=== 111.client.ip.111 -- 
> GET 
> command=listZones&available=true&response=json&sessionkey=CENSORED%3D&_=1417605814964
> 2014-12-03 12:20:46,482 DEBUG [c.c.a.ApiServlet] 
> (http-6443-exec-107:ctx-4a28e57c) ===START===  111.client.ip.111 -- GET 
> command=createVolume&response=json&sessionkey=CENSORED%3D&snapshotid=49e4bba9-844d-4b7b-aca2-95a318f17dc4&name=testbrisi567&_=1417605818552
> 2014-12-03 12:20:46,490 DEBUG [c.c.u.AccountManagerImpl] 
> (http-6443-exec-107:ctx-4a28e57c ctx-d7a11540) Access to 
> Acct[7d6da3bc-0d70-44eb-8a70-422e4eea184e-userName] granted to 
> Acct[7d6da3bc-0d70-44eb-8a70-422e4eea184e-userName] by DomainChecker
> 2014-12-03 12:20:46,494 DEBUG [c.c.u.AccountManagerImpl] 
> (http-6443-exec-107:ctx-4a28e57c ctx-d7a11540) Access to 
> Acct[7d6da3bc-0d70-44eb-8a70-422e4eea184e-userName] granted to 
> Acct[7d6da3bc-0d70-44eb-8a70-422e4eea184e-userName] by DomainChecker
> 2014-12-03 12:20:46,507 DEBUG [c.c.u.AccountManagerImpl] 
> (http-6443-exec-107:ctx-4a28e57c ctx-d7a11540) Access to 
> com.cloud.storage.SnapshotVO$$EnhancerByCGLIB$$3490cea0@60c95391 granted to 
> Acct[7d6da3bc-0d70-44eb-8a70-422e4eea184e-userName] by DomainChecker
> 2014-12-03 12:20:46,571 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
> (Job-Executor-166:ctx-d915e890) Add job-2804 into job monitoring
> 2014-12-03 12:20:46,571 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
> (Job-Executor-166:ctx-d915e890) Executing AsyncJobVO {id:2804, userId: 59, 
> accountId: 60, instanceType: Volume, instanceId: 617, cmd: 
> org.apache.cloudstack.api.command.user.volume.CreateVolumeCmd, cmdInfo: 
> {"id":"617","response":"json","sessionkey":"CENSORED\u003d","cmdEventType":"VOLUME.CREATE","ctxUserId":"59","snapshotid":"49e4bba9-844d-4b7b-aca2-95a318f17dc4","name":"testbrisi567","httpmethod":"GET","_":"1417605818552","ctxAccountId":"60","ctxStartEventId":"62488"},
>  cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
> null, initMsid: 95545481387, completeMsid: null, lastUpdated: null, 
> lastPolled: null, created: null}
> 2014-12-03 12:20:46,577 DEBUG [c.c.u.AccountManagerImpl] 
> (Job-Executor-166:ctx-d915e890 ctx-d7a11540) Access to 
> Acct[7d6da3bc-0d70-44eb-8a70-422e4eea184e-userN

[jira] [Commented] (CLOUDSTACK-5890) Error while collecting disk stats from vm

2014-12-03 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-5890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14233048#comment-14233048
 ] 

France commented on CLOUDSTACK-5890:


Can someone please tell me whether this fix made it into the 4.3 branch? I 
think it did, but I am not sure.

> Error while collecting disk stats from vm
> -
>
> Key: CLOUDSTACK-5890
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5890
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, XenServer
>Affects Versions: 4.2.1, 4.3.0
>Reporter: Daan Hoogland
>
> "Error while collecting disk stats from" is thrown a lot of times.
> 2014-01-16 00:00:56,006 WARN  [xen.resource.CitrixResourceBase] 
> (DirectAgent-137:null) Error while collecting disk stats from :
> You gave an invalid object reference.  The object may have recently been 
> deleted.  The class parameter gives the type of reference given, and the 
> handle parameter echoes the bad value given.
> at com.xensource.xenapi.Types.checkResponse(Types.java:209)
> at com.xensource.xenapi.Connection.dispatch(Connection.java:368)
> at 
> com.cloud.hypervisor.xen.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:909)
> at com.xensource.xenapi.VBDMetrics.getIoReadKbs(VBDMetrics.java:210)
> at 
> com.cloud.hypervisor.xen.resource.CitrixResourceBase.getVmStats(CitrixResourceBase.java:2791)
> at 
> com.cloud.hypervisor.xen.resource.CitrixResourceBase.execute(CitrixResourceBase.java:2691)
> at 
> com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:497)
> at 
> com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:59)
> at 
> com.cloud.agent.manager.DirectAgentAttache$Task.run(DirectAgentAttache.java:186)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> at java.lang.Thread.run(Thread.java:679)
> xapi returns HANDLE_INVALID, after which the whole stats collection process 
> stops for this host, as the exception is caught outside the loop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-8014) Failed to create a volume from snapshot - Discrepancy in the resource count ?

2014-12-03 Thread France (JIRA)
France created CLOUDSTACK-8014:
--

 Summary: Failed to create a volume from snapshot - Discrepancy in 
the resource count ?
 Key: CLOUDSTACK-8014
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8014
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.3.1
Reporter: France


As sent to mailing list:

Hi all.

We are on XS 6.0.2+Hotfixes and CS 4.3.1.
(All errors we are getting nowadays have come up only after the upgrade from 
4.1.1; 4.1.1 worked perfectly.)

After successfully creating a snapshot, I want to create a volume from it, so 
it can be downloaded offsite.
After clicking “Create volume” I get the error Failed to create a volume.
If I go to the list of volumes, there is a volume with the name as defined, 
but its Status is empty, and only buttons for attach and destroy exist.

I have taken a look at catalina.out and management-server.log. Here is the log 
detailing a failure.
Can you see the problem? How should I fix it? Please advise me.
Should I create a bug report?

I have also manually checked space on all storages and it seems there is LOTS 
of free space. Resources based on CS Web GUI are:
Primary Storage Allocated 15%
Secondary Storage 5%

2014-12-03 12:20:42,493 DEBUG [c.c.c.ConsoleProxyManagerImpl] 
(consoleproxy-1:ctx-2d51baa2) Zone 1 is ready to launch console proxy
2014-12-03 12:20:42,580 DEBUG [c.c.s.s.SecondaryStorageManagerImpl] 
(secstorage-1:ctx-53354d18) Zone 1 is ready to launch secondary storage VM
2014-12-03 12:20:42,894 DEBUG [c.c.a.ApiServlet] 
(http-6443-exec-108:ctx-a228ce49) ===START===  111.client.ip.111 -- GET 
command=listZones&available=true&response=json&sessionkey=CENSORED%3D&_=1417605814964
2014-12-03 12:20:42,909 DEBUG [c.c.a.ApiServlet] 
(http-6443-exec-108:ctx-a228ce49 ctx-74e696ca) ===END=== 111.client.ip.111 -- 
GET 
command=listZones&available=true&response=json&sessionkey=CENSORED%3D&_=1417605814964
2014-12-03 12:20:46,482 DEBUG [c.c.a.ApiServlet] 
(http-6443-exec-107:ctx-4a28e57c) ===START===  111.client.ip.111 -- GET 
command=createVolume&response=json&sessionkey=CENSORED%3D&snapshotid=49e4bba9-844d-4b7b-aca2-95a318f17dc4&name=testbrisi567&_=1417605818552
2014-12-03 12:20:46,490 DEBUG [c.c.u.AccountManagerImpl] 
(http-6443-exec-107:ctx-4a28e57c ctx-d7a11540) Access to 
Acct[7d6da3bc-0d70-44eb-8a70-422e4eea184e-userName] granted to 
Acct[7d6da3bc-0d70-44eb-8a70-422e4eea184e-userName] by DomainChecker
2014-12-03 12:20:46,494 DEBUG [c.c.u.AccountManagerImpl] 
(http-6443-exec-107:ctx-4a28e57c ctx-d7a11540) Access to 
Acct[7d6da3bc-0d70-44eb-8a70-422e4eea184e-userName] granted to 
Acct[7d6da3bc-0d70-44eb-8a70-422e4eea184e-userName] by DomainChecker
2014-12-03 12:20:46,507 DEBUG [c.c.u.AccountManagerImpl] 
(http-6443-exec-107:ctx-4a28e57c ctx-d7a11540) Access to 
com.cloud.storage.SnapshotVO$$EnhancerByCGLIB$$3490cea0@60c95391 granted to 
Acct[7d6da3bc-0d70-44eb-8a70-422e4eea184e-userName] by DomainChecker
2014-12-03 12:20:46,571 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
(Job-Executor-166:ctx-d915e890) Add job-2804 into job monitoring
2014-12-03 12:20:46,571 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
(Job-Executor-166:ctx-d915e890) Executing AsyncJobVO {id:2804, userId: 59, 
accountId: 60, instanceType: Volume, instanceId: 617, cmd: 
org.apache.cloudstack.api.command.user.volume.CreateVolumeCmd, cmdInfo: 
{"id":"617","response":"json","sessionkey":"CENSORED\u003d","cmdEventType":"VOLUME.CREATE","ctxUserId":"59","snapshotid":"49e4bba9-844d-4b7b-aca2-95a318f17dc4","name":"testbrisi567","httpmethod":"GET","_":"1417605818552","ctxAccountId":"60","ctxStartEventId":"62488"},
 cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
null, initMsid: 95545481387, completeMsid: null, lastUpdated: null, lastPolled: 
null, created: null}
2014-12-03 12:20:46,577 DEBUG [c.c.u.AccountManagerImpl] 
(Job-Executor-166:ctx-d915e890 ctx-d7a11540) Access to 
Acct[7d6da3bc-0d70-44eb-8a70-422e4eea184e-userName] granted to 
Acct[7d6da3bc-0d70-44eb-8a70-422e4eea184e-userName] by DomainChecker
2014-12-03 12:20:46,578 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
(http-6443-exec-107:ctx-4a28e57c ctx-d7a11540) submit async job-2804, details: 
AsyncJobVO {id:2804, userId: 59, accountId: 60, instanceType: Volume, 
instanceId: 617, cmd: 
org.apache.cloudstack.api.command.user.volume.CreateVolumeCmd, cmdInfo: 
{"id":"617","response":"json","sessionkey":"CENSORED\u003d","cmdEventType":"VOLUME.CREATE","ctxUserId":"59","snapshotid":"49e4bba9-844d-4b7b-aca2-95a318f17dc4","name":"testbrisi567","httpmethod":"GET","_":"1417605818552","ctxAccountId":"60","ctxStartEventId":"62488"},
 cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
null, initMsid: 95545481387, completeMsid: null, lastUpdated: null, lastPolled: 
null, created: null}
2014-12

[jira] [Created] (CLOUDSTACK-8013) Snapshot errors after upgrade from 4.1.1 to 4.3.1

2014-12-03 Thread France (JIRA)
France created CLOUDSTACK-8013:
--

 Summary: Snapshot errors after upgrade from 4.1.1 to 4.3.1
 Key: CLOUDSTACK-8013
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-8013
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.3.1
 Environment: XS 6.0.2+Hotfixes
Reporter: France


This was sent to the mailing list a few months back, but received no reply.
Snapshots were also not working for VMs that had snapshots earlier. (New 
VMs and VMs without previous snapshots had working snapshots after the 
upgrade.)
In the usage database these snapshots had a size of 0.
To get them working, I manually created snapshots a few times (5+) and then 
deleted the failed snapshots. For one VM, this is still not working. I will 
post another bug for that and link it in the comments.



Hi guys,

after upgrading ACS from 4.1.1 to 4.3.1, upgrading XS 6.0.2 to the latest 
hotfixes, and manually replacing NFSSR.py with the one from ACS 4.3.1, I see 
this error amongst others.
The snapshot is created successfully, and I have created templates from it and 
used them to create new VMs, but there is an error in the log. (Snapshots are 
also not working on volumes which have had snapshots before the upgrade, but I 
will deal with that issue later. The two might even be related.)

What should I check to provide you with more info about the issue, so we can 
address it?
A Google search came up empty for this issue. :-( Please help me sort this out.
To me, it looks like Xen removed 
/dev/VG_XenStorage-197c3580-fc82-35e5-356d-1f0909a9cfd9/VHD-3d732e11-8697-4a45-9518-44d4154ee6d6
 after the copy to secondary storage but before CloudStack ordered the removal.
Should I open a bug for it?


This is where I believe the snapshot was ordered:
2014-10-07 16:51:16,919 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
(http-6443-exec-42:ctx-6a64da65 ctx-e324e472) submit async job-2171, details: 
AsyncJobVO {id:2171, userId: 28, accountId: 30, instanceType: Snapshot, 
instanceId: 977, cmd: 
org.apache.cloudstack.api.command.user.snapshot.CreateSnapshotCmd, cmdInfo: 
{"id":"977","response":"json","sessionkey”:"X","cmdEventType":"SNAPSHOT.CREATE","ctxUserId":"28","httpmethod":"GET","_":"1412693528063","volumeid":"72d5b956-fbaf-44c2-8aeb-df49354ddbb3","quiescevm":"false","ctxAccountId":"30","ctxStartEventId":"60842"},
 cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
null, initMsid: 95545481387, completeMsid: null, lastUpdated: null, lastPolled: 
null, created: null}
2014-10-07 16:51:16,920 INFO  [o.a.c.f.j.i.AsyncJobMonitor] 
(Job-Executor-84:ctx-649dbc0e) Add job-2171 into job monitoring
2014-10-07 16:51:16,920 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl] 
(Job-Executor-84:ctx-649dbc0e) Executing AsyncJobVO {id:2171, userId: 28, 
accountId: 30, instanceType: Snapshot, instanceId: 977, cmd: 
org.apache.cloudstack.api.command.user.snapshot.CreateSnapshotCmd, cmdInfo: 
{"id":"977","response":"json","sessionkey”:"X","cmdEventType":"SNAPSHOT.CREATE","ctxUserId":"28","httpmethod":"GET","_":"1412693528063","volumeid":"72d5b956-fbaf-44c2-8aeb-df49354ddbb3","quiescevm":"false","ctxAccountId":"30","ctxStartEventId":"60842"},
 cmdVersion: 0, status: IN_PROGRESS, processStatus: 0, resultCode: 0, result: 
null, initMsid: 95545481387, completeMsid: null, lastUpdated: null, lastPolled: 
null, created: null}
2014-10-07 16:51:16,920 DEBUG [c.c.a.ApiServlet] 
(http-6443-exec-42:ctx-6a64da65 ctx-e324e472) ===END=== XX.XX.XX.XX -- GET  
command=createSnapshot&volumeid=72d5b956-fbaf-44c2-8aeb-df49354ddbb3&quiescevm=false&response=json&sessionkey=XX%3D&_=1412693528063
2014-10-07 16:51:16,928 DEBUG [c.c.u.AccountManagerImpl] 
(Job-Executor-84:ctx-649dbc0e ctx-e324e472) Access to 
Acct[7ff6cd6b-7400-4d44-980b-9dc3115264eb-XX] granted to 
Acct[7ff6cd6b-7400-4d44-980b-9dc3115264eb-] by DomainChecker
2014-10-07 16:51:16,953 INFO  [o.a.c.a.c.u.s.CreateSnapshotCmd] 
(Job-Executor-84:ctx-649dbc0e ctx-e324e472) VOLSS: createSnapshotCmd 
starts:1412693476953
2014-10-07 16:51:17,290 DEBUG [c.c.a.t.Request] (Job-Executor-84:ctx-649dbc0e 
ctx-e324e472) Seq 2-1824254062: Sending  { Cmd , MgmtId: 95545481387, via: 
2(x2.XXX), Ver: v1, Flags: 100011, 
[{"org.apache.cloudstack.storage.command.CreateObjectCommand":{"data":{"org.apache.cloudstack.storage.to.SnapshotObjectTO":{"volume":{"uuid":"72d5b956-fbaf-44c2-8aeb-df49354ddbb3","volumeType":"ROOT","dataStore":{"org.apache.cloudstack.storage.to.PrimaryDataStoreTO":{"uuid":"1c8dbf77-2b23-3244-91ee-5037cb2a55a8","id":208,"poolType":"IscsiLUN","host":"s2.v.XXX","path":"/iqn.2010-02.XXX:storage.XXX.s2/1","port":3260,"url":"IscsiLUN://s2.v.XXX//iqn.2010-02.XXX:storage.XXX.s2/1/?ROLE=Primary&STOREUUID=1c8dbf77-2b23-3244-91ee-5037cb2a

[jira] [Commented] (CLOUDSTACK-3367) When one primary storage fails, all XenServer hosts get rebooted, killing all VMs, even those not on this primary storage.

2014-12-03 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14232986#comment-14232986
 ] 

France commented on CLOUDSTACK-3367:


This critical bug will soon enter its third year since being reported...
...and still no one cares that a failure of one of the primary storages, which 
happens to be non-redundant, hard-reboots the WHOLE cloud.
Or is this no longer the case in new releases because it has been fixed?

> When one primary storage fails, all XenServer hosts get rebooted, killing all 
> VMs, even those not on this primary storage.
> --
>
> Key: CLOUDSTACK-3367
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3367
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, XenServer
>Affects Versions: 4.1.0, 4.2.0, 4.5.0, 4.3.1
> Environment: CentOS 6.3, XenServer 6.0.2 + all hotfixes, CloudStack 
> 4.1.0
>Reporter: France
> Fix For: Future
>
>
> As the title says: if only one of the primary storages fails, all XenServer 
> hosts get rebooted one by one. Because I have many primary storages, which 
> are/were running fine with other VMs, rebooting the XenServer Hypervisor is 
> overkill. Please disable this, or implement just stopping/killing the VMs 
> running on that storage and re-attaching only that storage.
> The problem was reported on the mailing list, as well as a workaround for 
> XenServer, so I'm not the only one hit by this "bug/feature". The workaround 
> for now is as follows:
> 1. Modify /opt/xensource/bin/xenheartbeat.sh on all your Hosts, commenting 
> out the two entries which have "reboot -f"
> 2. Identify the PID of the script  - pidof -x xenheartbeat.sh
> 3. Restart the Script  - kill 
> 4. Force reconnect Host from the UI,  the script will then re-launch on 
> reconnect
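Step 1 of the workaround quoted above can be sketched as follows. It is 
demonstrated on a scratch copy so it is safe to try anywhere; on a real 
XenServer host the target would be /opt/xensource/bin/xenheartbeat.sh, and you 
should inspect that file yourself before changing it.

```shell
# Hedged sketch of workaround step 1, run against a scratch copy; on a
# real host the target file is /opt/xensource/bin/xenheartbeat.sh.
target=./xenheartbeat.demo.sh
printf '%s\n' '#!/bin/sh' 'echo heartbeat lost' 'reboot -f' > "$target"

# Comment out every line containing "reboot -f", keeping a .bak backup
sed -i.bak 's/^\(.*reboot -f.*\)$/#\1/' "$target"

# Steps 2-4 on a real host (shown here as comments, not executed):
#   pidof -x xenheartbeat.sh   # identify the running script's PID
#   kill <PID>                 # stop it; it is relaunched on reconnect
#   # then force-reconnect the host from the CloudStack UI
```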



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7635) CS 4.3.1 GUI Import certificate fails in Firefox 32.0.2.

2014-12-03 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14232980#comment-14232980
 ] 

France commented on CLOUDSTACK-7635:


Here is the follow-up from the mailing list (rajaniATapache.org):

This might be due to preflighted requests [1]. Changing the content type to
text/plain might fix it. Looking at the access log will show whether an OPTIONS
request is being sent by Firefox.

[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS

Suresh Sadhu should have all the logs and screenshots.
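Checking for that preflight could look like the sketch below. It runs against 
sample lines, since the access log's path and format vary by install; on a 
real management server, point LOG at the Tomcat access log instead.

```shell
# Hedged sketch: look for a CORS preflight (OPTIONS) ahead of the failing
# API call. Demonstrated on sample lines; on a real setup, set LOG to the
# management server's Tomcat access log (path varies by install).
LOG=./access_log.demo
printf '%s\n' \
  '10.0.0.5 - - [03/Dec/2014:12:20:46] "OPTIONS /client/api HTTP/1.1" 200' \
  '10.0.0.5 - - [03/Dec/2014:12:20:47] "POST /client/api HTTP/1.1" 531' \
  > "$LOG"

# One or more matches means the browser is preflighting the request,
# which supports the content-type explanation above.
grep -c '"OPTIONS ' "$LOG"
```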


> CS 4.3.1 GUI Import certificate fails in Firefox 32.0.2.
> 
>
> Key: CLOUDSTACK-7635
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7635
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, UI
>Affects Versions: 4.3.1
> Environment: CentOS 6.5
>Reporter: France
>Assignee: Stephen Turner
>
> CS 4.3 GUI Import certificate fails in Firefox 32.0.2 on OSX with error 
> message:  "Failed to update SSL Certificate." 
> There is nothing in the management or catalina log when this happens.
> However, it works normally with the latest Chrome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7635) CS 4.3.1 GUI Import certificate fails in Firefox 32.0.2.

2014-11-28 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14228229#comment-14228229
 ] 

France commented on CLOUDSTACK-7635:


A while ago this was reproduced by Suresh B.; I wonder why he did not update 
the bug status.

> CS 4.3.1 GUI Import certificate fails in Firefox 32.0.2.
> 
>
> Key: CLOUDSTACK-7635
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7635
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, UI
>Affects Versions: 4.3.1
> Environment: CentOS 6.5
>Reporter: France
>Assignee: Stephen Turner
>
> CS 4.3 GUI Import certificate fails in Firefox 32.0.2 on OSX with error 
> message:  "Failed to update SSL Certificate." 
> There is nothing in the management or catalina log when this happens.
> However, it works normally with the latest Chrome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7635) CS 4.3.1 GUI Import certificate fails in Firefox 32.0.2.

2014-09-30 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14152906#comment-14152906
 ] 

France commented on CLOUDSTACK-7635:


Send me a private mail; I will give you access to my desktop and we can do a 
test together on my own install.

> CS 4.3.1 GUI Import certificate fails in Firefox 32.0.2.
> 
>
> Key: CLOUDSTACK-7635
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7635
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, UI
>Affects Versions: 4.3.1
> Environment: CentOS 6.5
>Reporter: France
>Assignee: Stephen Turner
>
> CS 4.3 GUI Import certificate fails in Firefox 32.0.2 on OSX with error 
> message:  "Failed to update SSL Certificate." 
> There is nothing in the management or catalina log when this happens.
> However, it works normally with the latest Chrome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CLOUDSTACK-7635) CS 4.3.1 GUI Import certificate fails in Firefox 32.0.2.

2014-09-29 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14151610#comment-14151610
 ] 

France commented on CLOUDSTACK-7635:


I would not like to do it in a production environment again unless absolutely 
necessary.
I can, however, try to add it to your test environment, if you give me access.
We can even do it together using RDP, VNC, TeamViewer or something. Contact me 
at my private mail.

> CS 4.3.1 GUI Import certificate fails in Firefox 32.0.2.
> 
>
> Key: CLOUDSTACK-7635
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7635
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, UI
>Affects Versions: 4.3.1
> Environment: CentOS 6.5
>Reporter: France
>Assignee: Stephen Turner
>
> CS 4.3 GUI Import certificate fails in Firefox 32.0.2 on OSX with error 
> message:  "Failed to update SSL Certificate." 
> There is nothing in the management or catalina log when this happens.
> However, it works normally with the latest Chrome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (CLOUDSTACK-7635) CS 4.3.1 GUI Import certificate fails in Firefox 32.0.2.

2014-09-26 Thread France (JIRA)
France created CLOUDSTACK-7635:
--

 Summary: CS 4.3.1 GUI Import certificate fails in Firefox 32.0.2.
 Key: CLOUDSTACK-7635
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7635
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.3.1
 Environment: CentOS 6.5
Reporter: France
Priority: Minor


CS 4.3 GUI Import certificate fails in Firefox 32.0.2 on OSX with error 
message:  "Failed to update SSL Certificate." 

There is nothing in the management or catalina log when this happens.

However, it works normally with the latest Chrome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (CLOUDSTACK-3367) When one primary storage fails, all XenServer hosts get rebooted, killing all VMs, even those not on this primary storage.

2014-09-24 Thread France (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

France updated CLOUDSTACK-3367:
---
Affects Version/s: 4.3.1
   4.5.0

> When one primary storage fails, all XenServer hosts get rebooted, killing all 
> VMs, even those not on this primary storage.
> --
>
> Key: CLOUDSTACK-3367
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3367
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, XenServer
>Affects Versions: 4.1.0, 4.2.0, 4.5.0, 4.3.1
> Environment: CentOS 6.3, XenServer 6.0.2 + all hotfixes, CloudStack 
> 4.1.0
>Reporter: France
> Fix For: Future
>
>
> As the title says: if only one of the primary storages fails, all XenServer 
> hosts get rebooted one by one. Because I have many primary storages, which 
> are/were running fine with other VMs, rebooting the XenServer Hypervisor is 
> overkill. Please disable this, or implement just stopping/killing the VMs 
> running on that storage and re-attaching only that storage.
> The problem was reported on the mailing list, as well as a workaround for 
> XenServer, so I'm not the only one hit by this "bug/feature". The workaround 
> for now is as follows:
> 1. Modify /opt/xensource/bin/xenheartbeat.sh on all your Hosts, commenting 
> out the two entries which have "reboot -f"
> 2. Identify the PID of the script  - pidof -x xenheartbeat.sh
> 3. Restart the Script  - kill 
> 4. Force reconnect Host from the UI,  the script will then re-launch on 
> reconnect





[jira] [Commented] (CLOUDSTACK-3367) When one primary storage fails, all XenServer hosts get rebooted, killing all VMs, even those not on this primary storage.

2014-09-24 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146276#comment-14146276
 ] 

France commented on CLOUDSTACK-3367:


Anyone willing to pick this up?
It has been well over a year by now. :-(

> When one primary storage fails, all XenServer hosts get rebooted, killing all 
> VMs, even those not on this primary storage.
> --
>
> Key: CLOUDSTACK-3367
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3367
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, XenServer
>Affects Versions: 4.1.0, 4.2.0, 4.5.0, 4.3.1
> Environment: CentOS 6.3, XenServer 6.0.2 + all hotfixes, CloudStack 
> 4.1.0
>Reporter: France
> Fix For: Future
>
>
> As the title says: if only one of the primary storages fails, all XenServer 
> hosts get rebooted one by one. Because I have many primary storages, which 
> are/were running fine with other VMs, rebooting the XenServer hypervisor is 
> overkill. Please disable this, or implement just stopping/killing the VMs 
> running on that storage and trying to re-attach only that storage.
> The problem was reported on the mailing list, along with a workaround for 
> XenServer, so I'm not the only one hit by this "bug/feature". The workaround 
> for now is as follows:
> 1. Modify /opt/xensource/bin/xenheartbeat.sh on all your hosts, commenting 
> out the two entries which have "reboot -f"
> 2. Identify the PID of the script - pidof -x xenheartbeat.sh
> 3. Restart the script - kill <PID>
> 4. Force reconnect the host from the UI; the script will then re-launch on 
> reconnect





[jira] [Commented] (CLOUDSTACK-7517) FTP modules are not loaded in VR

2014-09-23 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-7517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14144657#comment-14144657
 ] 

France commented on CLOUDSTACK-7517:


What can we do on 4.3.1 so we can get a fixed version of VRs?

> FTP modules are not loaded in VR
> 
>
> Key: CLOUDSTACK-7517
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-7517
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Network Controller
>Affects Versions: 4.2.0
>Reporter: Jayapal Reddy
>Assignee: Jayapal Reddy
> Fix For: 4.5.0
>
>
> To have active FTP working in isolated networks, VRs need to have the 
> following modules loaded:
> modprobe nf_nat_ftp
> root@r-7-QA:~# lsmod | grep ftp
> root@r-7-QA:~# modprobe nt_nat_ftp
> FATAL: Module nt_nat_ftp not found.
> root@r-7-QA:~# modprobe nf_nat_ftp
> root@r-7-QA:~# lsmod | grep ftp
> nf_nat_ftp 12420  0 
> nf_conntrack_ftp   12533  1 nf_nat_ftp
> nf_nat 17924  2 nf_nat_ftp,iptable_nat
> nf_conntrack   43121  7 
> nf_conntrack_ftp,nf_nat_ftp,nf_conntrack_ipv4,nf_nat,iptable_nat,xt_state,xt_connmark
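
Until a fixed system VM template is available for 4.3.1, the module load from the quoted transcript could be applied inside the VR and persisted with a sketch like the one below. The /etc/modules target is an assumption (Debian-style VR); the demo writes to a temporary stand-in file so it can run anywhere.

```shell
# Stand-in for /etc/modules; on the actual VR you would target the real file
# and also run `modprobe nf_nat_ftp` (as root) to load the module immediately.
MODULES_FILE=$(mktemp)
for m in nf_nat_ftp nf_conntrack_ftp; do
    # modprobe "$m"                      # immediate load, on the VR only
    grep -qx "$m" "$MODULES_FILE" || echo "$m" >> "$MODULES_FILE"   # persist across reboots
done
cat "$MODULES_FILE"
```

Note that CloudStack may recreate the VR and discard this change, so it is only a stopgap until the template itself is fixed.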





[jira] [Commented] (CLOUDSTACK-6060) Excessive use of LVM snapshots on XenServer, that leads to snapshot failure and unnecessary disk usage.

2014-05-22 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14005741#comment-14005741
 ] 

France commented on CLOUDSTACK-6060:


Hopefully some developer will focus on this bug, but as with the others I 
submitted, they seem to just be ignored, so I highly doubt that will happen. 
:-( If this is a big problem for you, try asking on the development mailing 
list for someone to take a look at it.

I guess there is a coalesce solution you can try manually, but I don't dare to 
do it myself. Google it:
http://www.google.si/search?q=xenserver+coalesce

In case you do fix it, please write up your manual solution here, so everyone 
else can benefit.

> Excessive use of LVM snapshots on XenServer, that leads to snapshot failure 
> and unnecessary disk usage.
> ---
>
> Key: CLOUDSTACK-6060
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6060
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, XenServer
>Affects Versions: 4.1.1
> Environment: CS 4.1.1, XS S602E027
>Reporter: France
>
> When a user created multiple snapshots in the CS GUI (in my case 3 daily, 
> 2 weekly and 2 monthly), snapshot creation soon failed because the maximum 
> number of LVM snapshots on XenServer was reached.
> From SMlog on XenServer:
> [9294] 2014-02-07 15:16:58.326838 * vdi_snapshot: EXCEPTION 
> SR.SROSError, The snapshot chain is too long
>   File "/opt/xensource/sm/SRCommand.py", line 94, in run
> return self._run_locked(sr)
>   File "/opt/xensource/sm/SRCommand.py", line 131, in _run_locked
> return self._run(sr, target)
>   File "/opt/xensource/sm/SRCommand.py", line 170, in _run
> return target.snapshot(self.params['sr_uuid'], self.vdi_uuid)
>   File "/opt/xensource/sm/LVHDSR.py", line 1440, in snapshot
> return self._snapshot(snapType)
>   File "/opt/xensource/sm/LVHDSR.py", line 1509, in _snapshot
> raise xs_errors.XenError('SnapshotChainTooLong')
>   File "/opt/xensource/sm/xs_errors.py", line 49, in __init__
> raise SR.SROSError(errorcode, errormessage)
> From CS:
> WARN  [xen.resource.CitrixResourceBase] (DirectAgent-150:) 
> ManageSnapshotCommand operation: create Failed for snapshotId: 489, reason: 
> SR_BACKEND_FAILURE_109The snapshot chain is too long
> SR_BACKEND_FAILURE_109The snapshot chain is too long
>   at com.xensource.xenapi.Types.checkResponse(Types.java:1936)
>   at com.xensource.xenapi.Connection.dispatch(Connection.java:368)
>   at 
> com.cloud.hypervisor.xen.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:909)
>   at com.xensource.xenapi.VDI.miamiSnapshot(VDI.java:1217)
>   at com.xensource.xenapi.VDI.snapshot(VDI.java:1192)
>   at 
> com.cloud.hypervisor.xen.resource.CitrixResourceBase.execute(CitrixResourceBase.java:6293)
>   at 
> com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:487)
>   at 
> com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:73)
>   at 
> com.cloud.agent.manager.DirectAgentAttache$Task.run(DirectAgentAttache.java:186)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:701)
> Here is the snapshot list for the VM:
> [root@x1 ~]# xe vdi-list is-a-snapshot=true  | grep XZY
>   name-label ( RW): XZY_ROOT-385_20140125020342
>   name-label ( RW): XZY_ROOT-385_20140121020342
>   name-label ( RW): XZY_ROOT-385_20140121020342
>   name-label ( RW): XZY_ROOT-385_20140124020342
>   name-label ( RW): XZY_ROOT-385_20140122020342
>   name-label ( RW): XZY_ROOT-385_20140125020342
>   name-label ( RW): XZY_ROOT-385_20140123020342
>   name-label ( RW): XZY_ROOT-385_20140122020342
>   name-label ( RW): XZY_ROOT-385_20140125020342
>   name-label ( RW): XZY_ROOT-385_20140124020342
>   name-label ( RW): XZY_ROOT-385_20140120020341
>   name-label ( RW): XZY_ROOT-3

[jira] [Commented] (CLOUDSTACK-3367) When one primary storage fails, all XenServer hosts get rebooted, killing all VMs, even those not on this primary storage.

2014-03-05 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13920762#comment-13920762
 ] 

France commented on CLOUDSTACK-3367:


Just an idea for whoever picks this issue up (if anyone at all :( ).
Before killing the whole hypervisor host, maybe live-migrate the instances 
whose storage is still functioning to another hypervisor.

> When one primary storage fails, all XenServer hosts get rebooted, killing all 
> VMs, even those not on this primary storage.
> --
>
> Key: CLOUDSTACK-3367
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3367
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, XenServer
>Affects Versions: 4.1.0, 4.2.0
> Environment: CentOS 6.3, XenServer 6.0.2 + all hotfixes, CloudStack 
> 4.1.0
>Reporter: France
> Fix For: Future
>
>
> As the title says: if only one of the primary storages fails, all XenServer 
> hosts get rebooted one by one. Because I have many primary storages, which 
> are/were running fine with other VMs, rebooting the XenServer hypervisor is 
> overkill. Please disable this, or implement just stopping/killing the VMs 
> running on that storage and trying to re-attach only that storage.
> The problem was reported on the mailing list, along with a workaround for 
> XenServer, so I'm not the only one hit by this "bug/feature". The workaround 
> for now is as follows:
> 1. Modify /opt/xensource/bin/xenheartbeat.sh on all your hosts, commenting 
> out the two entries which have "reboot -f"
> 2. Identify the PID of the script - pidof -x xenheartbeat.sh
> 3. Restart the script - kill <PID>
> 4. Force reconnect the host from the UI; the script will then re-launch on 
> reconnect





[jira] [Commented] (CLOUDSTACK-3367) When one primary storage fails, all XenServer hosts get rebooted, killing all VMs, even those not on this primary storage.

2014-03-03 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13917844#comment-13917844
 ] 

France commented on CLOUDSTACK-3367:


LOL, we're rolling backwards on this issue: we just lost the assignee. :-)

> When one primary storage fails, all XenServer hosts get rebooted, killing all 
> VMs, even those not on this primary storage.
> --
>
> Key: CLOUDSTACK-3367
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3367
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, XenServer
>Affects Versions: 4.1.0, 4.2.0
> Environment: CentOS 6.3, XenServer 6.0.2 + all hotfixes, CloudStack 
> 4.1.0
>Reporter: France
> Fix For: Future
>
>
> As the title says: if only one of the primary storages fails, all XenServer 
> hosts get rebooted one by one. Because I have many primary storages, which 
> are/were running fine with other VMs, rebooting the XenServer hypervisor is 
> overkill. Please disable this, or implement just stopping/killing the VMs 
> running on that storage and trying to re-attach only that storage.
> The problem was reported on the mailing list, along with a workaround for 
> XenServer, so I'm not the only one hit by this "bug/feature". The workaround 
> for now is as follows:
> 1. Modify /opt/xensource/bin/xenheartbeat.sh on all your hosts, commenting 
> out the two entries which have "reboot -f"
> 2. Identify the PID of the script - pidof -x xenheartbeat.sh
> 3. Restart the script - kill <PID>
> 4. Force reconnect the host from the UI; the script will then re-launch on 
> reconnect





[jira] [Created] (CLOUDSTACK-6060) Excessive use of LVM snapshots on XenServer, that leads to snapshot failure and unnecessary disk usage.

2014-02-07 Thread France (JIRA)
France created CLOUDSTACK-6060:
--

 Summary: Excessive use of LVM snapshots on XenServer, that leads 
to snapshot failure and unnecessary disk usage.
 Key: CLOUDSTACK-6060
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6060
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server, XenServer
Affects Versions: 4.1.1
 Environment: CS 4.1.1, XS S602E027
Reporter: France


When a user created multiple snapshots in the CS GUI (in my case 3 daily, 
2 weekly and 2 monthly), snapshot creation soon failed because the maximum 
number of LVM snapshots on XenServer was reached.

From SMlog on XenServer:
[9294] 2014-02-07 15:16:58.326838   * vdi_snapshot: EXCEPTION 
SR.SROSError, The snapshot chain is too long
  File "/opt/xensource/sm/SRCommand.py", line 94, in run
return self._run_locked(sr)
  File "/opt/xensource/sm/SRCommand.py", line 131, in _run_locked
return self._run(sr, target)
  File "/opt/xensource/sm/SRCommand.py", line 170, in _run
return target.snapshot(self.params['sr_uuid'], self.vdi_uuid)
  File "/opt/xensource/sm/LVHDSR.py", line 1440, in snapshot
return self._snapshot(snapType)
  File "/opt/xensource/sm/LVHDSR.py", line 1509, in _snapshot
raise xs_errors.XenError('SnapshotChainTooLong')
  File "/opt/xensource/sm/xs_errors.py", line 49, in __init__
raise SR.SROSError(errorcode, errormessage)

From CS:
WARN  [xen.resource.CitrixResourceBase] (DirectAgent-150:) 
ManageSnapshotCommand operation: create Failed for snapshotId: 489, reason: 
SR_BACKEND_FAILURE_109The snapshot chain is too long
SR_BACKEND_FAILURE_109The snapshot chain is too long
at com.xensource.xenapi.Types.checkResponse(Types.java:1936)
at com.xensource.xenapi.Connection.dispatch(Connection.java:368)
at 
com.cloud.hypervisor.xen.resource.XenServerConnectionPool$XenServerConnection.dispatch(XenServerConnectionPool.java:909)
at com.xensource.xenapi.VDI.miamiSnapshot(VDI.java:1217)
at com.xensource.xenapi.VDI.snapshot(VDI.java:1192)
at 
com.cloud.hypervisor.xen.resource.CitrixResourceBase.execute(CitrixResourceBase.java:6293)
at 
com.cloud.hypervisor.xen.resource.CitrixResourceBase.executeRequest(CitrixResourceBase.java:487)
at 
com.cloud.hypervisor.xen.resource.XenServer56Resource.executeRequest(XenServer56Resource.java:73)
at 
com.cloud.agent.manager.DirectAgentAttache$Task.run(DirectAgentAttache.java:186)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:701)

Here is the snapshot list for the VM:
[root@x1 ~]# xe vdi-list is-a-snapshot=true  | grep XZY
  name-label ( RW): XZY_ROOT-385_20140125020342
  name-label ( RW): XZY_ROOT-385_20140121020342
  name-label ( RW): XZY_ROOT-385_20140121020342
  name-label ( RW): XZY_ROOT-385_20140124020342
  name-label ( RW): XZY_ROOT-385_20140122020342
  name-label ( RW): XZY_ROOT-385_20140125020342
  name-label ( RW): XZY_ROOT-385_20140123020342
  name-label ( RW): XZY_ROOT-385_20140122020342
  name-label ( RW): XZY_ROOT-385_20140125020342
  name-label ( RW): XZY_ROOT-385_20140124020342
  name-label ( RW): XZY_ROOT-385_20140120020341
  name-label ( RW): XZY_ROOT-385_20140123020342
  name-label ( RW): XZY_ROOT-385_20140124020342
  name-label ( RW): XZY_ROOT-385_20140121020342
  name-label ( RW): XZY_ROOT-385_20140122020342
  name-label ( RW): XZY_ROOT-385_20140120020341
  name-label ( RW): XZY_ROOT-385_20140122020342
  name-label ( RW): XZY_ROOT-385_20140120020341
  name-label ( RW): XZY_ROOT-385_20140123020342
  name-label ( RW): XZY_ROOT-385_20140123020342
  name-label ( RW): XZY_ROOT-385_20140122020342
  name-label ( RW): XZY_ROOT-385_20140120020341
  name-label ( RW): XZY_ROOT-385_20140123020342
  name-label ( RW): XZY_ROOT-385_20140124020342
  name-label ( RW): XZY_ROOT-385_20140121020342
  name-label ( RW): XZY_ROOT-385_20140124020342
  name-label ( RW): XZY_ROOT-385_20140121020342
  name-label ( RW): XZY
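
The duplicated name-labels in the listing above suggest snapshots that were never cleaned up. A quick per-label count can be sketched as below; it is demonstrated against a captured sample, but on a host you would pipe `xe vdi-list is-a-snapshot=true` into the same awk/sort/uniq chain.

```shell
# Count snapshot VDIs per name-label; counts above 1 indicate duplicates.
# Sample lines mimic the `xe vdi-list` output shown above.
sample='  name-label ( RW): XZY_ROOT-385_20140125020342
  name-label ( RW): XZY_ROOT-385_20140121020342
  name-label ( RW): XZY_ROOT-385_20140125020342
  name-label ( RW): XZY_ROOT-385_20140124020342'
printf '%s\n' "$sample" | awk '{print $NF}' | sort | uniq -c | sort -rn
```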

[jira] [Commented] (CLOUDSTACK-4233) Upload NFSSR.py and others to XenServer hypervisor automatically

2013-09-25 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-4233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13777477#comment-13777477
 ] 

France commented on CLOUDSTACK-4233:


I'm actually looking at file:
/usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/xenserver60/patch
on CS 4.1.1.

To me it seems this functionality is already in place.
Maybe we just need to call this each time a host is added?

> Upload NFSSR.py and others to XenServer hypervisor automatically
> 
>
> Key: CLOUDSTACK-4233
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4233
> Project: CloudStack
>  Issue Type: New Feature
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: pre-4.0.0, 4.0.0, 4.0.1, 4.0.2, 4.1.0, 4.1.1
> Environment: CentOS 6.4+
>Reporter: France
>Priority: Minor
>
> Installing XenServer 6.0.2 hotfixes can overwrite NFSSR.py and other 
> manually changed files.
> I see no reason why we should not, when a host is (re)connected, check the 
> md5sums of
> /opt/xensource/sm/NFSSR.py
> /opt/xensource/bin/setupxenserver.sh
> /opt/xensource/bin/make_migratable.sh
> /opt/xensource/bin/cloud-clean-vlan.sh
> and, if one differs from the copy on the management server, upload the fixed 
> version to the hypervisor. :-)
> I learned the hard way, quite some time ago, to update these by hand, so I 
> now regularly check after applying hotfixes. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (CLOUDSTACK-4675) Virtual Router only with DHCP should not have DNS service

2013-09-14 Thread France (JIRA)
France created CLOUDSTACK-4675:
--

 Summary: Virtual Router only with DHCP should not have DNS service
 Key: CLOUDSTACK-4675
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4675
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Virtual Router
Affects Versions: 4.1.1
Reporter: France


When one creates a virtual router with only the DHCP service, one also gets a 
DNS service, because dnsmasq has its DNS service enabled in dnsmasq.conf. It 
can be disabled by setting port=0, but it is not.

The assumption that no open recursive DNS service is present can lead a user 
to expose an open recursive DNS server to untrusted hosts, which can then 
abuse it for DNS amplification attacks.

Please actually disable the DNS service if it is not selected when creating 
the network offering.

As a workaround I've added the commands below to rc.local; fixing dnsmasq.conf 
directly gets reverted by some cloud init scripts.
iptables -I INPUT -p udp --dport 53 -j DROP
iptables -I INPUT -p tcp --dport 53 -j DROP
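
The dnsmasq side of the fix (the port=0 option mentioned above) could be applied idempotently as sketched below. The demo targets a temporary stand-in file, since on the VR the real file is /etc/dnsmasq.conf and, as noted, direct edits may be reverted by the init scripts; the dhcp-range line is hypothetical sample content.

```shell
# Idempotently disable dnsmasq's DNS listener (DHCP keeps working) by
# ensuring "port=0" is present; a stand-in file replaces /etc/dnsmasq.conf.
CONF=$(mktemp)
echo "dhcp-range=10.1.1.10,10.1.1.100" > "$CONF"        # hypothetical existing content
grep -qx 'port=0' "$CONF" || echo 'port=0' >> "$CONF"
grep -qx 'port=0' "$CONF" || echo 'port=0' >> "$CONF"   # second run adds nothing
cat "$CONF"
```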




[jira] [Created] (CLOUDSTACK-4233) Upload NFSSR.py and others to XenServer hypervisor automatically

2013-08-10 Thread France (JIRA)
France created CLOUDSTACK-4233:
--

 Summary: Upload NFSSR.py and others to XenServer hypervisor 
automatically
 Key: CLOUDSTACK-4233
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4233
 Project: CloudStack
  Issue Type: New Feature
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.1.0, 4.0.2, 4.0.1, 4.0.0, pre-4.0.0, 4.1.1
 Environment: CentOS 6.4+
Reporter: France
Priority: Minor


Installing XenServer 6.0.2 hotfixes can overwrite NFSSR.py and other manually 
changed files.
I see no reason why we should not, when a host is (re)connected, check the 
md5sums of
/opt/xensource/sm/NFSSR.py
/opt/xensource/bin/setupxenserver.sh
/opt/xensource/bin/make_migratable.sh
/opt/xensource/bin/cloud-clean-vlan.sh
and, if one differs from the copy on the management server, upload the fixed 
version to the hypervisor. :-)

I learned the hard way, quite some time ago, to update these by hand, so I now 
regularly check after applying hotfixes. 
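
The proposed check could be sketched like this; the scp re-upload shown in the comment is illustrative only, and the demo compares two local stand-in files (not the real /opt/xensource paths) so it can run anywhere.

```shell
# Compare a reference copy (management server side) against the host's copy
# and re-upload on mismatch; stand-in files replace /opt/xensource/sm/NFSSR.py.
ref=$(mktemp); cur=$(mktemp)
echo "patched NFSSR.py" > "$ref"
echo "stock NFSSR.py"   > "$cur"

ref_sum=$(md5sum "$ref" | awk '{print $1}')
cur_sum=$(md5sum "$cur" | awk '{print $1}')
if [ "$ref_sum" != "$cur_sum" ]; then
    echo "md5 mismatch: would re-upload"
    # On the real system this would be something like (hypothetical):
    #   scp "$ref" root@host:/opt/xensource/sm/NFSSR.py
    cp "$ref" "$cur"
fi
cmp -s "$ref" "$cur" && echo "in sync"
```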



[jira] [Created] (CLOUDSTACK-4232) Mail subject in Thunderbird displays =?ANSI_X3.4-1968?Q?

2013-08-10 Thread France (JIRA)
France created CLOUDSTACK-4232:
--

 Summary: Mail subject in Thunderbird displays =?ANSI_X3.4-1968?Q?
 Key: CLOUDSTACK-4232
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-4232
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server
Affects Versions: 4.1.0, 4.0.2, 4.0.1, 4.0.0, pre-4.0.0, 4.1.1
 Environment: CentOS 6.4+
Reporter: France
Priority: Minor


When the management server sends an email about an event, the subject shown in 
the latest Thunderbird is mangled. Here is an example:
=?ANSI_X3.4-1968?Q?Unable_to_restart_v-358-VM_which?= 
=?ANSI_X3.4-1968?Q?_was_running_on_host_name:_x1.c.i?= 
=?ANSI_X3.4-1968?Q?CENSURED(id:3),_availability_zone:_I?= 
=?ANSI_X3.4-1968?Q?CENSURED=3Fka_CENSURED,_pod:_CENSURED_KV2_#1?=

Local characters like šđžćč also don't display correctly.



[jira] [Updated] (CLOUDSTACK-3367) When one primary storage fails, all XenServer hosts get rebooted, killing all VMs, even those not on this primary storage.

2013-07-26 Thread France (JIRA)

 [ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

France updated CLOUDSTACK-3367:
---

Issue Type: Bug  (was: Improvement)

> When one primary storage fails, all XenServer hosts get rebooted, killing all 
> VMs, even those not on this primary storage.
> --
>
> Key: CLOUDSTACK-3367
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3367
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, XenServer
>Affects Versions: 4.1.0, 4.2.0
> Environment: CentOS 6.3, XenServer 6.0.2 + all hotfixes, CloudStack 
> 4.1.0
>Reporter: France
> Fix For: Future
>
>
> As the title says: if only one of the primary storages fails, all XenServer 
> hosts get rebooted one by one. Because I have many primary storages, which 
> are/were running fine with other VMs, rebooting the XenServer hypervisor is 
> overkill. Please disable this, or implement just stopping/killing the VMs 
> running on that storage and trying to re-attach only that storage.
> The problem was reported on the mailing list, along with a workaround for 
> XenServer, so I'm not the only one hit by this "bug/feature". The workaround 
> for now is as follows:
> 1. Modify /opt/xensource/bin/xenheartbeat.sh on all your hosts, commenting 
> out the two entries which have "reboot -f"
> 2. Identify the PID of the script - pidof -x xenheartbeat.sh
> 3. Restart the script - kill <PID>
> 4. Force reconnect the host from the UI; the script will then re-launch on 
> reconnect



[jira] [Commented] (CLOUDSTACK-3367) When one primary storage fails, all XenServer hosts get rebooted, killing all VMs, even those not on this primary storage.

2013-07-26 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13720748#comment-13720748
 ] 

France commented on CLOUDSTACK-3367:


I agree with your path to a fix, but I disagree that not killing VMs which 
have no issues is merely an improvement or a new feature.
If you kill/destroy/stop something that is working normally when you should 
not have, it's definitely a bug. A major bug. :-)

> When one primary storage fails, all XenServer hosts get rebooted, killing all 
> VMs, even those not on this primary storage.
> --
>
> Key: CLOUDSTACK-3367
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3367
> Project: CloudStack
>  Issue Type: Improvement
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server, XenServer
>Affects Versions: 4.1.0, 4.2.0
> Environment: CentOS 6.3, XenServer 6.0.2 + all hotfixes, CloudStack 
> 4.1.0
>Reporter: France
> Fix For: Future
>
>
> As the title says: if only one of the primary storages fails, all XenServer 
> hosts get rebooted one by one. Because I have many primary storages, which 
> are/were running fine with other VMs, rebooting the XenServer hypervisor is 
> overkill. Please disable this, or implement just stopping/killing the VMs 
> running on that storage and trying to re-attach only that storage.
> The problem was reported on the mailing list, along with a workaround for 
> XenServer, so I'm not the only one hit by this "bug/feature". The workaround 
> for now is as follows:
> 1. Modify /opt/xensource/bin/xenheartbeat.sh on all your hosts, commenting 
> out the two entries which have "reboot -f"
> 2. Identify the PID of the script - pidof -x xenheartbeat.sh
> 3. Restart the script - kill <PID>
> 4. Force reconnect the host from the UI; the script will then re-launch on 
> reconnect



[jira] [Created] (CLOUDSTACK-3367) When one primary storage fails, all XenServer hosts get rebooted, killing all VMs, even those not on this primary storage.

2013-07-04 Thread France (JIRA)
France created CLOUDSTACK-3367:
--

 Summary: When one primary storage fails, all XenServer hosts get 
rebooted, killing all VMs, even those not on this primary storage.
 Key: CLOUDSTACK-3367
 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3367
 Project: CloudStack
  Issue Type: Bug
  Security Level: Public (Anyone can view this level - this is the default.)
  Components: Management Server, XenServer
Affects Versions: 4.1.0, 4.2.0
 Environment: CentOS 6.3, XenServer 6.0.2 + all hotfixes, CloudStack 
4.1.0
Reporter: France


As the title says: if only one of the primary storages fails, all XenServer 
hosts get rebooted one by one. Because I have many primary storages, which 
are/were running fine with other VMs, rebooting the XenServer hypervisor is 
overkill. Please disable this, or implement just stopping/killing the VMs 
running on that storage and trying to re-attach only that storage.

The problem was reported on the mailing list, along with a workaround for 
XenServer, so I'm not the only one hit by this "bug/feature". The workaround 
for now is as follows:

1. Modify /opt/xensource/bin/xenheartbeat.sh on all your hosts, commenting out 
the two entries which have "reboot -f"
2. Identify the PID of the script - pidof -x xenheartbeat.sh
3. Restart the script - kill <PID>
4. Force reconnect the host from the UI; the script will then re-launch on 
reconnect
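
The scripted part of the workaround (steps 1-3) can be sketched as below. It is demonstrated against a stand-in copy with hypothetical content, since editing the real /opt/xensource/bin/xenheartbeat.sh has to happen on each XenServer host.

```shell
# Step 1, demonstrated on a stand-in file; on a real host the target is
# /opt/xensource/bin/xenheartbeat.sh, on every host in the pool.
HB=$(mktemp)
cat > "$HB" <<'EOF'
echo "heartbeat lost"
    reboot -f
reboot -f
EOF

# Comment out every line that invokes "reboot -f", preserving indentation.
sed -i 's/^\([[:space:]]*\)reboot -f/\1# reboot -f/' "$HB"
grep -c '^[[:space:]]*# reboot -f' "$HB"   # both lines are now commented out

# Steps 2-3, on the real host (not runnable here): find and kill the running
# script so the edited copy takes effect when it relaunches.
#   PID=$(pidof -x xenheartbeat.sh) && kill "$PID"
# Step 4 is manual: force-reconnect the host from the CloudStack UI.
```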




[jira] [Commented] (CLOUDSTACK-3138) Flaws in upgrade documentation from 3.0.2 -> 4.1.0

2013-06-27 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-3138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13694562#comment-13694562
 ] 

France commented on CLOUDSTACK-3138:


I was upgrading to 4.1 and used the instructions from 
http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.1.0/html/Release_Notes/index.html
so the title is correct.
Upgrading to 4.0.* isn't possible anyway, because there is no DB upgrade 
schema; the fix was only included in 4.1.

> Flaws in upgrade documentation from 3.0.2 -> 4.1.0
> --
>
> Key: CLOUDSTACK-3138
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-3138
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Doc
>Affects Versions: 4.1.0
>Reporter: Joe Brockmeier
>Priority: Critical
>  Labels: documentation
> Fix For: 4.1.1
>
>
> Reported on the mailing list (http://markmail.org/message/ussthbb6sx6kjm2j):
> there are many errors in the release notes for the upgrade from CS 3.0.2 to 
> 4.0.1. Here are just a few, off the top of my head. I suggest you correct them.
> 1. Location of config files is not at /etc/cloud/ but rather at 
> /etc/cloudstack now.
> 2. components.xml is nowhere to be found in /etc/cloudstack
> 3. server.xml generation failed, because i had enabled ssl in it. It 
> required me to generate them from scratch.
> 4. There were no instructions for enabling https, anywhere. I had to fix 
> server.xml and tomcat6.xml to use my certificate.
> 5. cloud-sysvmadm is nonexistent; I think there is cloudstack-sys.. The 
> switches are also wrong.
> 6. Python and bash scripts are now located 
> /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/xenserver60/ 
> instead 
> of /usr/lib64/cloud/common/ scripts/vm/hypervisor/ 
> xenserver/xenserver60/ as documentation would tell you.
> 7. for pbd in `xe pbd-list currently-attached=false| grep ^uuid | awk 
> '{print $NF}'`; do xe pbd-plug uuid=$pbd ; doesn't work:
> [root@x1 ~]# for pbd in `xe pbd-list currently-attached=false| grep 
> ^uuid | awk '{print $NF}'`; do xe pbd-plug uuid=$pbd
>  >
> ...
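
The command quoted in item 7 fails because it is missing its closing `done`; a corrected form is sketched below. Since `xe` only exists on a XenServer host, the demo substitutes a stub list of UUIDs and prints the command instead of running it.

```shell
# Corrected loop shape for item 7 (the quoted version lacks `done`).
# On a real host, replace the stub list with:
#   xe pbd-list currently-attached=false | grep ^uuid | awk '{print $NF}'
for pbd in uuid-aaa uuid-bbb; do
    echo "xe pbd-plug uuid=$pbd"   # stub: print instead of plugging the PBD
done
```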



[jira] [Commented] (CLOUDSTACK-2214) DB upgrade from 3.0.2 -> 4.0.2 is broken

2013-05-11 Thread France (JIRA)

[ 
https://issues.apache.org/jira/browse/CLOUDSTACK-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13655268#comment-13655268
 ] 

France commented on CLOUDSTACK-2214:


I have the same problem.
Now I have to roll back to 3.0.2.
I'm using XenServer, but it doesn't matter, because the problem lies in the DB 
schema, which is missing!
-->
[root@mc1 management]# find / -name schema*
/usr/share/cloud/setup/db/schema-level.sql
/usr/share/cloud/setup/db/schema-2214to30-cleanup.sql
/usr/share/cloud/setup/db/schema-222to224-premium.sql
/usr/share/cloud/setup/db/schema-227to228.sql
/usr/share/cloud/setup/db/schema-217to218.sql
/usr/share/cloud/setup/db/schema-2210to2211.sql
/usr/share/cloud/setup/db/schema-2212to2213.sql
/usr/share/cloud/setup/db/schema-snapshot-217to224.sql
/usr/share/cloud/setup/db/schema-302to40.sql
/usr/share/cloud/setup/db/schema-2213to2214.sql
/usr/share/cloud/setup/db/schema-225to226.sql
/usr/share/cloud/setup/db/schema-30to301.sql
/usr/share/cloud/setup/db/schema-301to302-cleanup.sql
/usr/share/cloud/setup/db/schema-222to224-cleanup.sql
/usr/share/cloud/setup/db/schema-2211to2212-premium.sql
/usr/share/cloud/setup/db/schema-22beta3to22beta4.sql
/usr/share/cloud/setup/db/schema-20to21.sql
/usr/share/cloud/setup/db/schema-21to22-premium.sql
/usr/share/cloud/setup/db/schema-22beta1to22beta2.sql
/usr/share/cloud/setup/db/schema-222to224.sql
/usr/share/cloud/setup/db/schema-301to302.sql
/usr/share/cloud/setup/db/schema-2214to30.sql
/usr/share/cloud/setup/db/schema-224to225-cleanup.sql
/usr/share/cloud/setup/db/schema-221to222.sql
/usr/share/cloud/setup/db/schema-229to2210.sql
/usr/share/cloud/setup/db/schema-221to222-cleanup.sql
/usr/share/cloud/setup/db/schema-21to22.sql
/usr/share/cloud/setup/db/schema-228to229.sql
/usr/share/cloud/setup/db/schema-21to22-cleanup.sql
/usr/share/cloud/setup/db/schema-2211to2212.sql
/usr/share/cloud/setup/db/schema-221to222-premium.sql
/usr/share/cloud/setup/db/schema-224to225.sql
/usr/share/cloud/setup/db/schema-227to228-premium.sql
/usr/share/cloud/setup/db/schema-302to40-cleanup.sql
/usr/share/cloud/setup/db/schema-snapshot-223to224.sql
--

I suppose this should be added to the release notes, so people don't waste 
their time.
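Before attempting the upgrade, it is worth checking that the packaged DB directory actually ships an upgrade script for your source/target pair. A minimal sketch, assuming the /usr/share/cloud/setup/db layout and the schema-<from>to<to>.sql naming visible in the listing above (check_upgrade_script is a hypothetical helper, not part of CloudStack):

```shell
# check_upgrade_script DIR FROM TO
# Prints "present" if the schema upgrade script for FROM->TO exists in DIR,
# "missing" otherwise. Naming follows the schema-<from>to<to>.sql convention
# seen in the find output above.
check_upgrade_script() {
    if [ -f "$1/schema-$2to$3.sql" ]; then
        echo present
    else
        echo missing
    fi
}

# On the box above, the listing shows schema-302to40.sql, so 302->40 would
# report "present" there; there is no script covering 40->402.
check_upgrade_script /usr/share/cloud/setup/db 302 40
```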

> DB upgrade from 3.0.2 -> 4.0.2 is broken
> 
>
> Key: CLOUDSTACK-2214
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2214
> Project: CloudStack
>  Issue Type: Bug
>  Security Level: Public(Anyone can view this level - this is the 
> default.) 
>  Components: Management Server
>Affects Versions: 4.0.2
> Environment: CentOS 6 + vSphere
>Reporter: Tamas Monos
>Priority: Blocker
>
> Hi,
> I have tried an upgrade according to the upgrade instructions set out here:
> http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.0.2/pdf/Release_Notes/Apache_CloudStack-4.0.2-Release_Notes-en-US.pdf
> Packages upgraded:
> Installed:
>   cloud-scripts.x86_64 0:4.0.2-1.el6
> Dependency Installed:
>   apache-tomcat-apis.noarch 0:0.1-1.el6
>   cloud-aws-api.x86_64 0:4.0.2-1.el6
>   geronimo-specs.noarch 0:1.0-3.5.M2.el6
>   geronimo-specs-compat.noarch 0:1.0-3.5.M2.el6
>   jakarta-commons-daemon-jsvc.x86_64 1:1.0.1-8.9.el6
>   jakarta-commons-lang.noarch 0:2.4-1.1.el6
>   mysql-connector-java.noarch 1:5.1.17-6.el6
>   slf4j.noarch 0:1.5.8-8.el6
> Updated:
>   cloud-client.x86_64 0:4.0.2-1.el6
>   cloud-client-ui.x86_64 0:4.0.2-1.el6
>   cloud-core.x86_64 0:4.0.2-1.el6
>   cloud-deps.x86_64 0:4.0.2-1.el6
>   cloud-python.x86_64 0:4.0.2-1.el6
>   cloud-server.x86_64 0:4.0.2-1.el6
>   cloud-setup.x86_64 0:4.0.2-1.el6
>   cloud-usage.x86_64 0:4.0.2-1.el6
>   cloud-utils.x86_64 0:4.0.2-1.el6
> Repo URL: http://cloudstack.apt-get.eu/rhel/4.0/
> When I start the management server I get a FATAL error:
> 2013-04-26 16:57:56,759 DEBUG [upgrade.dao.VersionDaoImpl] (main:null) 
> Checking to see if the database is at a version before it was the version 
> table is created
> 2013-04-26 16:57:56,765 INFO  [cloud.upgrade.DatabaseUpgradeChecker] 
> (main:null) DB version = 3.0.2.20120506223416 Code Version = 
> 4.0.2.20130420145617
> 2013-04-26 16:57:56,765 INFO  [cloud.upgrade.DatabaseUpgradeChecker] 
> (main:null) Database upgrade must be performed from 3.0.2.20120506223416 to 
> 4.0.2.20130420145617
> 2013-04-26 16:57:56,765 ERROR [cloud.upgrade.DatabaseUpgradeChecker] 
> (main:null) The end upgrade version