[MERGE] Opendaylight plugin

2014-01-06 Thread Hugo Trippaers
Hey all,

I would like to merge the opendaylight branch into master. This branch 
contains a plugin that provides an interface to an OpenDaylight controller. 

The current functionality is limited to creating layer 2 isolated networks 
using overlay networking as supported by OpenDaylight. We use the OVSDB 
and OpenFlow modules in OpenDaylight to build overlay networks using GRE 
tunneling. OpenDaylight does not have a release yet, so the plugin is more of 
a technology preview, but it has been tested with several KVM hypervisors 
and the latest builds of the OpenDaylight controller and the OVSDB 
subproject. The functionality is sufficient to serve as an equivalent of the 
existing GRE tunnel implementation for KVM hypervisors.

Ideally, this plugin provides the basic groundwork for supporting more 
functions and features of OpenDaylight as they become available. It 
allows interested parties to work with CloudStack and OpenDaylight without 
having to maintain a CloudStack fork and keep rebasing a separate branch.


Cheers,

Hugo

RE: Hyper-V agent

2014-01-06 Thread Devdeep Singh
The error seems to be in starting the service. Can you check under Services 
(services.msc) whether a service named CloudStack Hyper-V Agent is present? 
To debug the service start issue, can you open up port 8250 (or try disabling 
the firewall) and check whether the service starts up?
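To check the port side of this, a plain TCP probe is enough to tell whether anything is listening on the agent port (8250, per the advice above). This is a generic sketch, independent of CloudStack itself; the hostname in the example is a placeholder:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

# Probe the Hyper-V host (placeholder name) on the agent port:
# port_is_open("hyperv-host.example.com", 8250)
```

If the probe fails while the service claims to be running, a firewall rule blocking 8250 is the likely suspect.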

Regards,
Devdeep

-Original Message-
From: Paul Angus [mailto:paul.an...@shapeblue.com] 
Sent: Friday, January 3, 2014 11:19 PM
To: dev@cloudstack.apache.org; Donal Lafferty; Anshul Gangwar
Subject: RE: Hyper-V agent

So... after updating to .NET 4.5.1 and a reboot, CloudAgent builds (with 19 warnings): 
http://pastebin.com/ahz5yJw2

I copy it to my Hyper-V box and it bombs out immediately: 
http://imgur.com/NMan0S2

Install log says:

Installing assembly 'B:\Microsoft\AgentShell\AgentShell.exe'.
Affected parameters are:
   assemblypath = B:\Microsoft\AgentShell\AgentShell.exe
   logfile = B:\Microsoft\AgentShell\AgentShell.InstallLog
Installing service CloudStack Hyper-V Agent...
Service CloudStack Hyper-V Agent has been successfully installed.
Creating EventLog source CloudStack Hyper-V Agent in log Application...
See the contents of the log file for the B:\Microsoft\AgentShell\AgentShell.exe 
assembly's progress.
The file is located at B:\Microsoft\AgentShell\AgentShell.InstallLog.
Committing assembly 'B:\Microsoft\AgentShell\AgentShell.exe'.
Affected parameters are:
   logtoconsole =
   assemblypath = B:\Microsoft\AgentShell\AgentShell.exe
   logfile = B:\Microsoft\AgentShell\AgentShell.InstallLog

agent log says:

2014-01-03 17:33:59,755 [1] DEBUG CloudStack.Plugin.AgentShell.Program [(null)] 
- CloudStack Hyper-V Agent arg is
2014-01-03 17:33:59,823 [1] INFO  CloudStack.Plugin.AgentShell.Program [(null)] 
- Installing and running CloudStack Hyper-V Agent
2014-01-03 17:34:02,185 [1] ERROR CloudStack.Plugin.AgentShell.Program [(null)] 
-  Error occured in starting service Cannot start service CloudStack Hyper-V 
Agent on computer '.'.


Regards,

Paul Angus
Cloud Architect
S: +44 20 3603 0540 | M: +447711418784 | T: @CloudyAngus 
paul.an...@shapeblue.com

From: Paul Angus [mailto:paul.an...@shapeblue.com]
Sent: 03 January 2014 16:35
To: Donal Lafferty; dev@cloudstack.apache.org; Anshul Gangwar
Subject: RE: Hyper-V agent

When (trying) to build the hyper-v agent I get these errors:

Build FAILED.

Warnings:

C:\cygwin64\usr\local\cloudstack\plugins\hypervisors\hyperv\DotNet\ServerResource\ServerResource.sln
 (default targets) - (Build target) - 
C:\cygwin64\usr\local\cloudstack\plugins\hypervisors\hyperv\DotNet\ServerResource\WmiWrappers\WmiWrappers.csproj
 (default targets) - 
C:\PROGRA~2\MONO-3~1.3\lib\mono\4.0\Microsoft.Common.targets 
(ResolveAssemblyReferences target) -

C:\PROGRA~2\MONO-3~1.3\lib\mono\4.0\Microsoft.Common.targets:  warning 
: Reference 'AWSSDK' not resolved
C:\PROGRA~2\MONO-3~1.3\lib\mono\4.0\Microsoft.Common.targets:  warning 
: Reference 'Ionic.Zip' not resolved
C:\PROGRA~2\MONO-3~1.3\lib\mono\4.0\Microsoft.Common.targets:  warning 
: Reference 'log4net' not resolved
C:\PROGRA~2\MONO-3~1.3\lib\mono\4.0\Microsoft.Common.targets:  warning 
: Reference 'Newtonsoft.Json' not resolved
C:\PROGRA~2\MONO-3~1.3\lib\mono\4.0\Microsoft.Common.targets:  warning 
: Reference 'NSubstitute' not resolved
C:\PROGRA~2\MONO-3~1.3\lib\mono\4.0\Microsoft.Common.targets:  warning 
: Reference 'xunit' not resolved

Errors:

C:\cygwin64\usr\local\cloudstack\plugins\hypervisors\hyperv\DotNet\ServerResource\ServerResource.sln
 (default targets) - (Build target) - 
C:\cygwin64\usr\local\cloudstack\plugins\hypervisors\hyperv\DotNet\ServerResource\HypervResource\HypervResource.csproj
 (default targets) - 
C:\cygwin64\usr\local\cloudstack\plugins\hypervisors\hyperv\DotNet\ServerResource\\.nuget\NuGet.targets
 (RestorePackages target) -


C:\cygwin64\usr\local\cloudstack\plugins\hypervisors\hyperv\DotNet\ServerResource\\.nuget\NuGet.targets:
 error : Command 'mono --runtime=v4.0.30319 
C:\cygwin64\usr\local\cloudstack\plugins\hypervisors\hyperv\DotNet\ServerResource\.nuget\NuGet.exe
 install 
C:\cygwin64\usr\local\cloudstack\plugins\hypervisors\hyperv\DotNet\ServerResource\HypervResource\packages.config
 -source-RequireConsent -solutionDir 
C:\cygwin64\usr\local\cloudstack\plugins\hypervisors\hyperv\DotNet\ServerResource\\
 ' exited with code: 1.

 6 Warning(s)
 1 Error(s)

Time Elapsed 00:00:02.6896758



Any ideas?




Regards,

Paul Angus
Cloud Architect
S: +44 20 3603 0540 | M: +447711418784 | T: @CloudyAngus 
paul.an...@shapeblue.com

From: Donal Lafferty [mailto:donal.laffe...@citrix.com]
Sent: 02 January 2014 16:43
To: Paul Angus; dev@cloudstack.apache.org; Anshul Gangwar
Subject: RE: Hyper-V agent

I agree that we need a distro for the agent.

Based on what KVM does, what is the pattern for distributing non-Java agents?

DL


From: Paul Angus [mailto:paul.an...@shapeblue.com]
Sent: 02 

Re: [VOTE] 3rd round of voting for ASF 4.2.1 RC

2014-01-06 Thread Wei ZHOU
Hi Abhi,

I have two problems:
(1) 3a999e7 broke OVS on 4.2, so I fixed it with
commit 79f609ca19fc44aab8de8294f234537936bc3613.
(2) DevCloud does not work after commit
7f9463bb54f19e7676f8c6049d1ebc02330a730f, so I am wondering whether XCP
still works after that.

-Wei



2013/12/17 Abhinandan Prateek abhinandan.prat...@citrix.com

 The 4.2.1 is re-spun mainly because the commit that was used to generate
 the previous RC did not get pushed to repo.

 Following are the particulars to vote for this time around:


 https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=refs/heads/4.2
 commit: 1b2b58fe352a19aee1721bd79b9d023d36e80ec5

 List of changes are available in Release Notes, a summary can be accessed
 here:

 https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob_plain;f=CHANGES;hb=4.2

 Source release revision 3911 (checksums and signatures are available at
 the same location):
 https://dist.apache.org/repos/dist/dev/cloudstack/4.2.1/

 PGP release keys (signed using RSA Key ID = 42443AA1):
 https://dist.apache.org/repos/dist/release/cloudstack/KEYS

 Vote will be open for 72 hours (until 20 Dec 2013 End of day PST).

 For sanity in tallying the vote, can PMC members please be sure to
 indicate (binding) with their vote?

 [ ] +1  approve
 [ ] +0  no opinion
 [ ] -1  disapprove (and reason why)




Build failed in Jenkins: build-master #345

2014-01-06 Thread jenkins
See http://jenkins.buildacloud.org/job/build-master/345/changes

Changes:

[muralimmreddy] CLOUDSTACK-5787:  support in-memroy eventbus

--
[...truncated 71 lines...]
[INFO] Apache CloudStack Plugin - Open vSwitch
[INFO] Apache CloudStack Plugin - Hypervisor Xen
[INFO] Apache CloudStack Plugin - Hypervisor KVM
[INFO] Apache CloudStack Plugin - RabbitMQ Event Bus
[INFO] Apache CloudStack Plugin - In Memory Event Bus
[INFO] Apache CloudStack Plugin - Hypervisor Baremetal
[INFO] Apache CloudStack Plugin - Hypervisor UCS
[INFO] Apache CloudStack Plugin - Hypervisor Hyper-V
[INFO] Apache CloudStack Plugin - Network Elastic Load Balancer
[INFO] Apache CloudStack Plugin - Network Internal Load Balancer
[INFO] Apache CloudStack Plugin - Network Juniper Contrail
[INFO] Apache CloudStack Plugin - Palo Alto
[INFO] Apache CloudStack Plugin - Network Nicira NVP
[INFO] Apache CloudStack Plugin - BigSwitch Virtual Network Segment
[INFO] Apache CloudStack Plugin - Midokura Midonet
[INFO] Apache Cloudstack Plugin - Stratosphere SSP
[INFO] Apache CloudStack Plugin - Storage Allocator Random
[INFO] Apache CloudStack Plugin - User Authenticator LDAP
[INFO] Apache CloudStack Plugin - User Authenticator MD5
[INFO] Apache CloudStack Plugin - User Authenticator Plain Text
[INFO] Apache CloudStack Plugin - User Authenticator SHA256 Salted
[INFO] Apache CloudStack Plugin - Dns Notifier Example
[INFO] Apache CloudStack Plugin - Storage Image S3
[INFO] Apache CloudStack Plugin - Storage Image Swift provider
[INFO] Apache CloudStack Plugin - Storage Image default provider
[INFO] Apache CloudStack Plugin - Storage Image sample provider
[INFO] Apache CloudStack Plugin - Storage Volume SolidFire Provider
[INFO] Apache CloudStack Plugin - Storage Volume default provider
[INFO] Apache CloudStack Plugin - Storage Volume sample provider
[INFO] Apache CloudStack Plugin - SNMP Alerts
[INFO] Apache CloudStack Plugin - Syslog Alerts
[INFO] Apache CloudStack Plugin - Network VXLAN
[INFO] Apache CloudStack Framework - Spring Life Cycle
[INFO] cloud-framework-spring-module
[INFO] Apache CloudStack Test
[INFO] Apache CloudStack Console Proxy
[INFO] Apache CloudStack Console Proxy - Server
[INFO] Apache CloudStack System VM
[INFO] Apache CloudStack Client UI
[INFO] Apache CloudStack Console Proxy - RDP Client
[INFO] Apache CloudStack Framework - QuickCloud
[INFO] 
[INFO] 
[INFO] Building Apache CloudStack 4.4.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ cloudstack ---
[INFO] Deleting http://jenkins.buildacloud.org/job/build-master/ws/target 
(includes = [**/*], excludes = [])
[INFO] Deleting http://jenkins.buildacloud.org/job/build-master/ws/ (includes 
= [target, dist], excludes = [])
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.11:check (default) @ cloudstack ---
[INFO] Starting audit...
http://jenkins.buildacloud.org/job/build-master/ws/plugins/event-bus/inmemory/src/org/apache/cloudstack/mom/inmemory/InMemoryEventBus.java:22:
 Using the '.*' form of import should be avoided - java.util.*.
http://jenkins.buildacloud.org/job/build-master/ws/plugins/event-bus/inmemory/src/org/apache/cloudstack/mom/inmemory/InMemoryEventBus.java:29:8:
 Unused import - com.cloud.utils.Ternary.
Audit done.

[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache CloudStack . FAILURE [48.445s]
[INFO] Apache CloudStack Maven Conventions Parent  SKIPPED
[INFO] Apache CloudStack Framework - Managed Context . SKIPPED
[INFO] Apache CloudStack Utils ... SKIPPED
[INFO] Apache CloudStack Framework ... SKIPPED
[INFO] Apache CloudStack Framework - Event Notification .. SKIPPED
[INFO] Apache CloudStack Framework - Configuration ... SKIPPED
[INFO] Apache CloudStack API . SKIPPED
[INFO] Apache CloudStack Framework - REST  SKIPPED
[INFO] Apache CloudStack Framework - IPC . SKIPPED
[INFO] Apache CloudStack Cloud Engine  SKIPPED
[INFO] Apache CloudStack Cloud Engine API  SKIPPED
[INFO] Apache CloudStack Core  SKIPPED
[INFO] Apache CloudStack Agents .. SKIPPED
[INFO] Apache CloudStack Framework - Clustering .. SKIPPED
[INFO] Apache CloudStack Framework - Jobs  SKIPPED
[INFO] Apache CloudStack Cloud Engine Schema Component ... SKIPPED
[INFO] Apache CloudStack Framework - Event Notification .. SKIPPED
[INFO] Apache CloudStack Cloud Engine Internal Components API  SKIPPED
[INFO] Apache CloudStack Server .. SKIPPED
[INFO] Apache CloudStack Usage 

Build failed in Jenkins: build-master » Apache CloudStack #345

2014-01-06 Thread jenkins
See 
http://jenkins.buildacloud.org/job/build-master/org.apache.cloudstack$cloudstack/345/

--
maven31-agent.jar already up to date
maven31-interceptor.jar already up to date
maven3-interceptor-commons.jar already up to date
===[JENKINS REMOTING CAPACITY]===   channel started
log4j:WARN No appenders could be found for logger 
(org.apache.commons.beanutils.converters.BooleanConverter).
log4j:WARN Please initialize the log4j system properly.
Executing Maven:  -B -f 
http://jenkins.buildacloud.org/job/build-master/org.apache.cloudstack$cloudstack/ws/pom.xml
 -Psystemvm clean test
[INFO] Scanning for projects...
[INFO] 
[INFO] Reactor Build Order:
[INFO] 
[INFO] Apache CloudStack
[INFO] Apache CloudStack Maven Conventions Parent
[INFO] Apache CloudStack Framework - Managed Context
[INFO] Apache CloudStack Utils
[INFO] Apache CloudStack Framework
[INFO] Apache CloudStack Framework - Event Notification
[INFO] Apache CloudStack Framework - Configuration
[INFO] Apache CloudStack API
[INFO] Apache CloudStack Framework - REST
[INFO] Apache CloudStack Framework - IPC
[INFO] Apache CloudStack Cloud Engine
[INFO] Apache CloudStack Cloud Engine API
[INFO] Apache CloudStack Core
[INFO] Apache CloudStack Agents
[INFO] Apache CloudStack Framework - Clustering
[INFO] Apache CloudStack Framework - Jobs
[INFO] Apache CloudStack Cloud Engine Schema Component
[INFO] Apache CloudStack Framework - Event Notification
[INFO] Apache CloudStack Cloud Engine Internal Components API
[INFO] Apache CloudStack Server
[INFO] Apache CloudStack Usage Server
[INFO] Apache XenSource XAPI
[INFO] Apache CloudStack Cloud Engine Orchestration Component
[INFO] Apache CloudStack Cloud Services
[INFO] Apache CloudStack Secondary Storage Service
[INFO] Apache CloudStack Engine Storage Component
[INFO] Apache CloudStack Engine Storage Volume Component
[INFO] Apache CloudStack Engine Storage Image Component
[INFO] Apache CloudStack Engine Storage Data Motion Component
[INFO] Apache CloudStack Engine Storage Cache Component
[INFO] Apache CloudStack Engine Storage Snapshot Component
[INFO] Apache CloudStack Cloud Engine API
[INFO] Apache CloudStack Cloud Engine Service
[INFO] Apache CloudStack Plugin POM
[INFO] Apache CloudStack Plugin - API Rate Limit
[INFO] Apache CloudStack Plugin - API Discovery
[INFO] Apache CloudStack Plugin - ACL Static Role Based
[INFO] Apache CloudStack Plugin - Host Anti-Affinity Processor
[INFO] Apache CloudStack Plugin - Explicit Dedication Processor
[INFO] Apache CloudStack Plugin - User Concentrated Pod Deployment Planner
[INFO] Apache CloudStack Plugin - User Dispersing Deployment Planner
[INFO] Apache CloudStack Plugin - Implicit Dedication Planner
[INFO] Apache CloudStack Plugin - Skip Heurestics Planner
[INFO] Apache CloudStack Plugin - Host Allocator Random
[INFO] Apache CloudStack Plugin - Dedicated Resources
[INFO] Apache CloudStack Plugin - Hypervisor OracleVM
[INFO] Apache CloudStack Plugin - Open vSwitch
[INFO] Apache CloudStack Plugin - Hypervisor Xen
[INFO] Apache CloudStack Plugin - Hypervisor KVM
[INFO] Apache CloudStack Plugin - RabbitMQ Event Bus
[INFO] Apache CloudStack Plugin - In Memory Event Bus
[INFO] Apache CloudStack Plugin - Hypervisor Baremetal
[INFO] Apache CloudStack Plugin - Hypervisor UCS
[INFO] Apache CloudStack Plugin - Hypervisor Hyper-V
[INFO] Apache CloudStack Plugin - Network Elastic Load Balancer
[INFO] Apache CloudStack Plugin - Network Internal Load Balancer
[INFO] Apache CloudStack Plugin - Network Juniper Contrail
[INFO] Apache CloudStack Plugin - Palo Alto
[INFO] Apache CloudStack Plugin - Network Nicira NVP
[INFO] Apache CloudStack Plugin - BigSwitch Virtual Network Segment
[INFO] Apache CloudStack Plugin - Midokura Midonet
[INFO] Apache Cloudstack Plugin - Stratosphere SSP
[INFO] Apache CloudStack Plugin - Storage Allocator Random
[INFO] Apache CloudStack Plugin - User Authenticator LDAP
[INFO] Apache CloudStack Plugin - User Authenticator MD5
[INFO] Apache CloudStack Plugin - User Authenticator Plain Text
[INFO] Apache CloudStack Plugin - User Authenticator SHA256 Salted
[INFO] Apache CloudStack Plugin - Dns Notifier Example
[INFO] Apache CloudStack Plugin - Storage Image S3
[INFO] Apache CloudStack Plugin - Storage Image Swift provider
[INFO] Apache CloudStack Plugin - Storage Image default provider
[INFO] Apache CloudStack Plugin - Storage Image sample provider
[INFO] Apache CloudStack Plugin - Storage Volume SolidFire Provider
[INFO] Apache CloudStack Plugin - Storage Volume default provider
[INFO] Apache CloudStack Plugin - Storage Volume sample provider
[INFO] Apache CloudStack Plugin - SNMP Alerts
[INFO] Apache CloudStack Plugin - Syslog Alerts
[INFO] Apache CloudStack Plugin - Network VXLAN
[INFO] Apache CloudStack Framework - Spring Life Cycle
[INFO] cloud-framework-spring-module
[INFO] Apache CloudStack Test
[INFO] Apache CloudStack Console Proxy
[INFO] Apache 

Re: Review Request 16605: fixed special characters not working in console view for hyperv

2014-01-06 Thread Devdeep Singh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16605/#review31226
---

Ship it!


Committed in ef51def9fff3610b13f50be39c7610262d4a1c04. Kindly close the review 
request.

- Devdeep Singh


On Jan. 3, 2014, 11:42 a.m., Anshul Gangwar wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/16605/
 ---
 
 (Updated Jan. 3, 2014, 11:42 a.m.)
 
 
 Review request for cloudstack, Devdeep Singh and rajeshbabu chintaguntla.
 
 
 Repository: cloudstack-git
 
 
 Description
 ---
 
 This patch fixes special characters not working in the console view for 
 Hyper-V. It adds a keymap for the special keys and modifies the modifier 
 key handling.
 
 
 Diffs
 -
 
   
 services/console-proxy-rdp/rdpconsole/src/main/java/rdpclient/adapter/AwtRdpKeyboardAdapter.java
  36da0a3 
   
 services/console-proxy/server/src/com/cloud/consoleproxy/ConsoleProxyRdpClient.java
  6b317ff 
   
 services/console-proxy/server/src/com/cloud/consoleproxy/rdp/KeysymToKeycode.java
  10282ad 
 
 Diff: https://reviews.apache.org/r/16605/diff/
 
 
 Testing
 ---
 
 Verified by typing special keys in the console view of a VM on Hyper-V.
 
 
 Thanks,
 
 Anshul Gangwar
 




RE: Hyper-V agent

2014-01-06 Thread Alex Hitchins
Just noticed in the error log that it's trying to start on computer '.'. While 
I know '.' works in SQL Server etc., could it be that it's not working here? Or 
have you set a custom hosts file?

--
Error occured in starting service Cannot start service CloudStack Hyper-V Agent 
on computer '.'.



Alex Hitchins
+44 7788 423 969


Jenkins build is back to normal : build-master #346

2014-01-06 Thread jenkins
See http://jenkins.buildacloud.org/job/build-master/346/changes



Jenkins build is back to normal : build-master » Apache CloudStack #346

2014-01-06 Thread jenkins
See 
http://jenkins.buildacloud.org/job/build-master/org.apache.cloudstack$cloudstack/346/



Re: [VOTE] 3rd round of voting for ASF 4.2.1 RC

2014-01-06 Thread Abhinandan Prateek
Wei,

   I think KVM support for OVS was not a supported feature in 4.2. It can
go as a supported feature in 4.3.

DevCloud is not a blocker.

We will go ahead with the release process as of now.

-abhi





Re: httpd server in vms

2014-01-06 Thread Nux!

On 06.01.2014 06:48, Girish Shilamkar wrote:

Hello,

Is it safe to assume that whenever a VM instance is created in
CloudStack with the default CentOS template, the httpd server will be
running once the VM has booted?
On Xen I see that the Apache HTTP server is not installed, and therefore
some of the regression tests fail, as they use the HTTP server.

Regards,
Girish


Hello,

The default template CentOS 5.5 (64-bit) no GUI has httpd installed, 
but it does not start at boot by default; you have to start it manually.


--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro


Database Deployment is failing

2014-01-06 Thread Prashant Kumar Mishra
Hi,

Deploying the database is failing; can someone help me?

[root@localhost cloudstack]# mvn -P developer -pl developer -Ddeploydb

Error msg:
-
 Processing upgrade: com.cloud.upgrade.DatabaseUpgradeChecker
[WARNING]
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.codehaus.mojo.exec.ExecJavaMojo$1.run(ExecJavaMojo.java:297)
at java.lang.Thread.run(Thread.java:679)
Caused by: com.cloud.utils.exception.CloudRuntimeException: Unable to upgrade 
the database
at 
com.cloud.upgrade.DatabaseUpgradeChecker.upgrade(DatabaseUpgradeChecker.java:337)
at 
com.cloud.upgrade.DatabaseUpgradeChecker.check(DatabaseUpgradeChecker.java:432)
at com.cloud.upgrade.DatabaseCreator.main(DatabaseCreator.java:222)
... 6 more
Caused by: com.cloud.utils.exception.CloudRuntimeException: Unable to execute 
upgrade script: /root/cloudstack/developer/target/db/db/schema-421to430.sql
at 
com.cloud.upgrade.DatabaseUpgradeChecker.runScript(DatabaseUpgradeChecker.java:252)
at 
com.cloud.upgrade.DatabaseUpgradeChecker.upgrade(DatabaseUpgradeChecker.java:306)
... 8 more
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Duplicate 
column name 'is_exclusive_gslb_provider'
at com.cloud.utils.db.ScriptRunner.runScript(ScriptRunner.java:193)
at com.cloud.utils.db.ScriptRunner.runScript(ScriptRunner.java:87)
at 
com.cloud.upgrade.DatabaseUpgradeChecker.runScript(DatabaseUpgradeChecker.java:243)
... 9 more
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 35.980s
[INFO] Finished at: Mon Jan 06 10:29:36 EST 2014
[INFO] Final Memory: 37M/90M
[INFO] 
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.2.1:java 
(create-schema) on project cloud-developer: An exception occured while 
executing the Java class. null: InvocationTargetException: Unable to upgrade 
the database: Unable to execute upgrade script: 
/root/cloudstack/developer/target/db/db/schema-421to430.sql: Duplicate column 
name 'is_exclusive_gslb_provider' - [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException




Setup Detail
-
[root@localhost cloudstack]# git branch
* master


root@localhost cloudstack]# git log --name-status HEAD^..HEAD
commit c2b5addaedc47b3715b408a6d5a2aa356e7fcd1b
Author: SrikanteswaraRao Talluri tall...@apache.org
Date:   Mon Jan 6 14:50:46 2014 +0530

CLOUDSTACK-5625: removed unnecessary global setting 
'ldap.realname.attribute'

M   test/integration/component/test_ldap.py


Thanks
prashant


Re: [VOTE] 3rd round of voting for ASF 4.2.1 RC

2014-01-06 Thread Wei ZHOU
Abhi, Thanks!

-Wei


Review Request 16647: Install fails through duplicate column add

2014-01-06 Thread Ian Southam

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16647/
---

Review request for cloudstack and Hugo Trippaers.


Repository: cloudstack-git


Description
---

The ADD COLUMN `is_exclusive_gslb_provider` statement is duplicated in the 
schema upgrade script schema-421to430.sql.

This causes installations and upgrades to fail.

The patch removes the duplicate line.
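Duplicates like this can also be caught mechanically before an upgrade script ships, by counting repeated table/column pairs in its ADD COLUMN statements. An illustrative sketch (not part of the CloudStack build; the table name in the sample is made up, since the affected table is not named here):

```python
import re
from collections import Counter

# Matches "ALTER TABLE <t> ADD COLUMN <c>", with or without backticks.
ADD_COLUMN_RE = re.compile(
    r"ALTER\s+TABLE\s+`?(\w+)`?\s+ADD\s+COLUMN\s+`?(\w+)`?",
    re.IGNORECASE,
)

def duplicate_add_columns(sql_text):
    """Return (table, column) pairs whose ADD COLUMN appears more than once."""
    pairs = ADD_COLUMN_RE.findall(sql_text)
    return [pair for pair, count in Counter(pairs).items() if count > 1]

# Hypothetical excerpt in the shape of schema-421to430.sql:
sample = """
ALTER TABLE `some_table` ADD COLUMN `other_column` int unsigned;
ALTER TABLE `some_table` ADD COLUMN `is_exclusive_gslb_provider` int(1) unsigned NOT NULL DEFAULT 0;
ALTER TABLE `some_table` ADD COLUMN `is_exclusive_gslb_provider` int(1) unsigned NOT NULL DEFAULT 0;
"""
print(duplicate_add_columns(sample))  # prints [('some_table', 'is_exclusive_gslb_provider')]
```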


Diffs
-

  setup/db/db/schema-421to430.sql b7b1b2b 

Diff: https://reviews.apache.org/r/16647/diff/


Testing
---

Script no longer fails


Thanks,

Ian Southam



Re: Review Request 16603: CLOUDSTACK-5750 Make default value of execute.in.sequence.hypervisor.commands false.

2014-01-06 Thread ASF Subversion and Git Services

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16603/#review31231
---


Commit 3a2cf48d92af2566271359b1b8bfa3b335cf54ce in branch refs/heads/4.3 from 
Bharat Kumar
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=3a2cf48 ]

CLOUDSTACK-5750 Make default value of execute.in.sequence.hypervisor.commands 
false.


- ASF Subversion and Git Services


On Jan. 3, 2014, 10:17 a.m., bharat kumar wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/16603/
 ---
 
 (Updated Jan. 3, 2014, 10:17 a.m.)
 
 
 Review request for cloudstack, Kishan Kavala and Koushik Das.
 
 
 Bugs: CLOUDSTACK-5750
 https://issues.apache.org/jira/browse/CLOUDSTACK-5750
 
 
 Repository: cloudstack-git
 
 
 Description
 ---
 
 CLOUDSTACK-5750 Make default value of execute.in.sequence.hypervisor.commands 
 false.
 
 
 Diffs
 -
 
   engine/api/src/com/cloud/vm/VirtualMachineManager.java d182126 
 
 Diff: https://reviews.apache.org/r/16603/diff/
 
 
 Testing
 ---
 
 Tested on 4.3
 
 
 Thanks,
 
 bharat kumar
 




Re: [VOTE] 3rd round of voting for ASF 4.2.1 RC

2014-01-06 Thread Abhinandan Prateek
Wei,
  The concerns you have raised are valid. I guess we should have more
people testing things like DevCloud and XCP.
In the past, too, I have seen that issues with DevCloud are not resolved as a
priority. From my end, if possible, I will try to push for testing these
much earlier in the release cycle.

-abhi

On 06/01/14 4:29 pm, Wei ZHOU ustcweiz...@gmail.com wrote:

Abhi, Thanks!

-Wei



Re: Review Request 16603: CLOUDSTACK-5750 Make default value of execute.in.sequence.hypervisor.commands false.

2014-01-06 Thread Kishan Kavala

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16603/#review31233
---

Ship it!


commit 17023c0d60e99fe78f641531b899aebf2b5e2d50

- Kishan Kavala


On Jan. 3, 2014, 3:47 p.m., bharat kumar wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/16603/
 ---
 
 (Updated Jan. 3, 2014, 3:47 p.m.)
 
 
 Review request for cloudstack, Kishan Kavala and Koushik Das.
 
 
 Bugs: CLOUDSTACK-5750
 https://issues.apache.org/jira/browse/CLOUDSTACK-5750
 
 
 Repository: cloudstack-git
 
 
 Description
 ---
 
 CLOUDSTACK-5750 Make default value of execute.in.sequence.hypervisor.commands 
 false.
 
 
 Diffs
 -
 
   engine/api/src/com/cloud/vm/VirtualMachineManager.java d182126 
 
 Diff: https://reviews.apache.org/r/16603/diff/
 
 
 Testing
 ---
 
 Tested on 4.3
 
 
 Thanks,
 
 bharat kumar
 




Re: Review Request 16603: CLOUDSTACK-5750 Make default value of execute.in.sequence.hypervisor.commands false.

2014-01-06 Thread ASF Subversion and Git Services

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16603/#review31232
---


Commit 17023c0d60e99fe78f641531b899aebf2b5e2d50 in branch refs/heads/master 
from Bharat Kumar
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=17023c0 ]

CLOUDSTACK-5750 Make default value of execute.in.sequence.hypervisor.commands 
false.

Conflicts:
engine/api/src/com/cloud/vm/VirtualMachineManager.java


- ASF Subversion and Git Services


On Jan. 3, 2014, 10:17 a.m., bharat kumar wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/16603/
 ---
 
 (Updated Jan. 3, 2014, 10:17 a.m.)
 
 
 Review request for cloudstack, Kishan Kavala and Koushik Das.
 
 
 Bugs: CLOUDSTACK-5750
 https://issues.apache.org/jira/browse/CLOUDSTACK-5750
 
 
 Repository: cloudstack-git
 
 
 Description
 ---
 
 CLOUDSTACK-5750 Make default value of execute.in.sequence.hypervisor.commands 
 false.
 
 
 Diffs
 -
 
   engine/api/src/com/cloud/vm/VirtualMachineManager.java d182126 
 
 Diff: https://reviews.apache.org/r/16603/diff/
 
 
 Testing
 ---
 
 Tested on 4.3
 
 
 Thanks,
 
 bharat kumar
 




Re: Review Request 16465: fixed the listvirtualmachines API to show cpu, memory and cpucores when using custom compute offering

2014-01-06 Thread Kishan Kavala

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16465/#review31234
---

Ship it!


commit 91181a3b216357f09d69a50409ff0a505513239b

- Kishan Kavala


On Jan. 2, 2014, 11:31 a.m., bharat kumar wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/16465/
 ---
 
 (Updated Jan. 2, 2014, 11:31 a.m.)
 
 
 Review request for cloudstack and Kishan Kavala.
 
 
 Bugs: CLOUDSTACK-5472
 https://issues.apache.org/jira/browse/CLOUDSTACK-5472
 
 
 Repository: cloudstack-git
 
 
 Description
 ---
 
 CLOUDSTACK-5472 fixed the listvirtualmachines API to show cpu, memory and 
 cpucores when using custom compute offering
 https://issues.apache.org/jira/browse/CLOUDSTACK-5472
 
 
 Diffs
 -
 
   setup/db/db/schema-420to421.sql c09a1bb 
 
 Diff: https://reviews.apache.org/r/16465/diff/
 
 
 Testing
 ---
 
 Tested on 4.3
 
 
 Thanks,
 
 bharat kumar
 




Re: Review Request 16647: Install fails through duplicate column add

2014-01-06 Thread daan Hoogland

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16647/#review31235
---

Ship it!


Ship It!

- daan Hoogland


On Jan. 6, 2014, 1:08 p.m., Ian Southam wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/16647/
 ---
 
 (Updated Jan. 6, 2014, 1:08 p.m.)
 
 
 Review request for cloudstack and Hugo Trippaers.
 
 
 Repository: cloudstack-git
 
 
 Description
 ---
 
 ADD COLUMN `is_exclusive_gslb_provider` is duplicated in the schema upgrade 
 script schema-421to430.sql.
 
 This causes installation and upgrades to fail
 
 The patch removes the duplicate line
 
 
 Diffs
 -
 
   setup/db/db/schema-421to430.sql b7b1b2b 
 
 Diff: https://reviews.apache.org/r/16647/diff/
 
 
 Testing
 ---
 
 Script no longer fails
 
 
 Thanks,
 
 Ian Southam
 




Re: Review Request 16647: Install fails through duplicate column add

2014-01-06 Thread daan Hoogland

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16647/#review31236
---


b1eb8665b7be7f27c52a9ee04498a3b475ef9b62

- daan Hoogland


On Jan. 6, 2014, 1:08 p.m., Ian Southam wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/16647/
 ---
 
 (Updated Jan. 6, 2014, 1:08 p.m.)
 
 
 Review request for cloudstack and Hugo Trippaers.
 
 
 Repository: cloudstack-git
 
 
 Description
 ---
 
 ADD COLUMN `is_exclusive_gslb_provider` is duplicated in the schema upgrade 
 script schema-421to430.sql.
 
 This causes installation and upgrades to fail
 
 The patch removes the duplicate line
 
 
 Diffs
 -
 
   setup/db/db/schema-421to430.sql b7b1b2b 
 
 Diff: https://reviews.apache.org/r/16647/diff/
 
 
 Testing
 ---
 
 Script no longer fails
 
 
 Thanks,
 
 Ian Southam
 




Re: Review Request 16647: Install fails through duplicate column add

2014-01-06 Thread Ian Southam

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/16647/#review31237
---

Ship it!


Ship It!

- Ian Southam


On Jan. 6, 2014, 1:08 p.m., Ian Southam wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/16647/
 ---
 
 (Updated Jan. 6, 2014, 1:08 p.m.)
 
 
 Review request for cloudstack and Hugo Trippaers.
 
 
 Repository: cloudstack-git
 
 
 Description
 ---
 
 ADD COLUMN `is_exclusive_gslb_provider` is duplicated in the schema upgrade 
 script schema-421to430.sql.
 
 This causes installation and upgrades to fail
 
 The patch removes the duplicate line
 
 
 Diffs
 -
 
   setup/db/db/schema-421to430.sql b7b1b2b 
 
 Diff: https://reviews.apache.org/r/16647/diff/
 
 
 Testing
 ---
 
 Script no longer fails
 
 
 Thanks,
 
 Ian Southam
 




Re: [MERGE] Opendaylight plugin

2014-01-06 Thread sebgoa

On Jan 6, 2014, at 9:03 AM, Hugo Trippaers h...@trippaers.nl wrote:

 Hey all,
 
 I would like to merge the branch open daylight into master. This branch 
 contains a plugin with an interface to an OpenDaylight controller. 
 
 The current functionality is limited to creating layer 2 isolated networks 
 using overlay networking as supported by opendaylight. We are using the OVSDB 
 and OpenFlow modules in OpenDaylight to build overlay networks using gre 
 tunneling. Opendaylight does not have a release yet, so the state of the 
 plugin is more of a technology preview, but has been tested with several KVM 
 hypervisors and the latest build of the OpenDaylight controller and the ovsdb 
 subproject. The functionality is enough to work as an equivalent to the 
 existing GRE tunnel implementation for KVM hypervisors.
 
 Ideally this plugin should be the basic groundwork to start supporting more 
 functions/features available in OpenDaylight when they become available. It 
 allows interested parties to work with CloudStack and OpenDaylight without 
 having to work with a CS fork and keep rebasing a separate branch.
 
 
 Cheers,
 
 Hugo

I have not tested it, but saw the patches and I am +1



Re: ACS 4.2: list networks returns empty, if a VN is created without net mask

2014-01-06 Thread sebgoa

On Jan 3, 2014, at 8:55 PM, Vinod Nair vinodn...@juniper.net wrote:

 Thanks Saksham 
 
 There is all-ready one open CLOUDSTACK-5681
 Can it be assigned to someone
 

Just a quick note here, as a community we don't assign bugs. Someone has to 
step up and assign it to him/herself

 Thanks
 Vinod
 
 
 -Original Message-
 From: Saksham Srivastava [mailto:saksham.srivast...@citrix.com] 
 Sent: Friday, January 03, 2014 8:18 AM
 To: dev@cloudstack.apache.org
 Subject: RE: ACS 4.2: list networks returns empty, if a VN is created without 
 net mask 
 
 This could be a bug, go ahead a file an issue.
 
 Thanks,
 Saksham
 
 -Original Message-
 From: Vinod Nair [mailto:vinodn...@juniper.net] 
 Sent: Friday, January 03, 2014 5:53 AM
 To: dev@cloudstack.apache.org
 Subject: ACS 4.2: list networks returns empty, if a VN is created without net 
 mask 
 
 Hi Saksham 
 
  The issue here is that if we specify the gateway without specifying the 
  netmask, the networks table gets updated with the VN name, but in the db 
  both gateway and cidr are empty. list networks bails out because of this.
 
 
 list zones
 count = 1
 zone:
 name = default
 id = 9b5dd877-1fb1-4499-8fec-2baea16ce973
 allocationstate = Enabled
 dhcpprovider = VirtualRouter
 dns1 = 10.84.5.100
 dns2 =
 domain = ROOT
 guestcidraddress = 10.1.0.0/24
 internaldns1 = 10.84.5.100
 internaldns2 =
 ip6dns1 =
 ip6dns2 =
 localstorageenabled = False
 networktype = Advanced
 securitygroupsenabled = False
 zonetoken = 63b953cc-1dbf-3a03-8aea-ce96319173cc
 
 
  mysql> select id,name,cidr,gateway from networks;
  +-----+------+----------------+-------------+
  | id  | name | cidr           | gateway     |
  +-----+------+----------------+-------------+
  | 200 | NULL | NULL           | NULL        |
  | 201 | NULL | NULL           | NULL        |
  | 202 | NULL | 169.254.0.0/16 | 169.254.0.1 |
  | 203 | NULL | NULL           | NULL        |
  | 204 | VN1  | 10.1.1.0/24    | 10.1.1.254  |
  | 205 | VN2  | NULL           | NULL        |
  +-----+------+----------------+-------------+
 
 
 
 Thanks
 Vinod
 -Original Message-
 From: Saksham Srivastava [mailto:saksham.srivast...@citrix.com] 
 Sent: Monday, December 30, 2013 10:21 PM
 To: dev@cloudstack.apache.org
 Subject: RE: ACS4.2 db goes for a toss if no netmask is specified while 
 creating a virtual Network
 
 In general, if you do not specify a  gateway and netmask, the values will be 
 taken from the zone level settings.
 Check listZones to see your configuration.
 
 Thanks,
 Saksham
 
 -Original Message-
 From: Vinod Nair [mailto:vinodn...@juniper.net] 
 Sent: Tuesday, December 31, 2013 6:29 AM
 To: dev@cloudstack.apache.org
 Subject: RE: ACS4.2 db goes for a toss if no netmask is specified while 
 creating a virtual Network
 
 Hi 
 
  Root cause is that ACS allows creating a VN without a netmask value, whereas 
  the list networks command checks whether a cidr value is present for each 
  network while iterating over all networks. If it finds a network without a 
  cidr, it throws an exception and returns empty.
 
 Thanks
 Vinod
 
 -Original Message-
 From: Vinod Nair [mailto:vinodn...@juniper.net] 
 Sent: Monday, December 30, 2013 11:26 AM
 To: dev@cloudstack.apache.org
 Subject: ACS4.2 db goes for a toss if no netmask is specified while creating 
 a virtual Network
 
 Hi
 
  I have ACS 4.2. If I try creating a virtual network without specifying a 
  netmask, the database goes for a toss. The only way to recover is to delete 
  the entry from the database manually or to set the CIDR manually, as it is 
  set to NULL. Is there a fix available for this issue?
 
 
 
 # cloudmonkey
  Apache CloudStack cloudmonkey 5.0.0. Type help or ? to list commands.
 
 list networks
 : None
 
 
  select * from networks where id=207;
  (output truncated; columns of the networks table:)
  | id | name | uuid | display_text | traffic_type | broadcast_domain_type |
  | broadcast_uri | gateway | cidr | mode | network_offering_id |
  | physical_network_id | data_center_id | guru_name | state | related |
  | domain_id | account_id | dns1 | dns2 | guru_data | set_fields | acl_type |
  | network_domain | reservation_id | guest_type | restart_required | created |
  | removed | specify_ip_ranges | vpc_id | ip6_gateway | ip6_cidr |
  | network_cidr | display_network | network_acl_id |
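The NULL-cidr failure mode described in this thread can be illustrated with a small self-contained sketch. The names (`Network`, `listFragile`, `listDefensive`) are hypothetical stand-ins, not the actual CloudStack code: the fragile version mirrors the reported behavior, where one bad row makes the whole listing fail, while the defensive version tolerates a NULL cidr.

```java
// Hypothetical illustration of the reported listNetworks failure:
// one network row with a NULL cidr makes the whole listing throw,
// so the caller ends up returning an empty result.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ListNetworksSketch {
    static class Network {
        final String name;
        final String cidr; // may be NULL in the db, as in the VN2 row above
        Network(String name, String cidr) { this.name = name; this.cidr = cidr; }
    }

    // Fragile version: assumes cidr is always present, so a NULL row
    // aborts the whole iteration with a NullPointerException.
    static List<String> listFragile(List<Network> nets) {
        List<String> out = new ArrayList<String>();
        for (Network n : nets) {
            out.add(n.name + " " + n.cidr.toString()); // NPE when cidr is NULL
        }
        return out;
    }

    // Defensive version: tolerate a NULL cidr rather than failing the call.
    static List<String> listDefensive(List<Network> nets) {
        List<String> out = new ArrayList<String>();
        for (Network n : nets) {
            out.add(n.name + " " + (n.cidr == null ? "-" : n.cidr));
        }
        return out;
    }
}
```

Either fix (rejecting the network at creation time, or tolerating NULL at listing time) avoids the "list networks returns empty" symptom; the thread leans toward validating at creation.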
 

Re: [VOTE] 3rd round of voting for ASF 4.2.1 RC

2014-01-06 Thread sebgoa

On Jan 6, 2014, at 10:34 AM, Abhinandan Prateek abhinandan.prat...@citrix.com 
wrote:

 Wei,
 
   I think KVM support for OVS was not a supported feature in 4.2. It can
 go as a supported feature in 4.3.
 
 DevCloud is not a blocker.

For the record, I disagree with this statement. For 4.1 and 4.2 we had a 
release testing procedure which was based on devcloud.
While not a production setup, devcloud is used heavily for demos, etc. That 
means we won't be able to demo the official 4.2.1 on devcloud.

+0

 
 We will go ahead with the release process as of now.
 
 -abhi
 
 
 
 
 On 06/01/14 2:16 pm, Wei ZHOU ustcweiz...@gmail.com wrote:
 
 Hi Abhi,
 
 I have two problems,
 (1) 3a999e7 made OVS stop working on 4.2, so I fixed it with
 commit 79f609ca19fc44aab8de8294f234537936bc3613
 (2)  DevCloud does not work after commit
 7f9463bb54f19e7676f8c6049d1ebc02330a730f. So I am wondering if XCP works
 after that.
 
 -Wei
 
 
 
 2013/12/17 Abhinandan Prateek abhinandan.prat...@citrix.com
 
 The 4.2.1 is re-spun mainly because the commit that was used to generate
 the previous RC did not get pushed to repo.
 
 Following are the particulars to vote for this time around:
 
 
 
 https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=shortlog;h=ref
 s/heads/4.2
 commit: 1b2b58fe352a19aee1721bd79b9d023d36e80ec5
 
 List of changes are available in Release Notes, a summary can be
 accessed
 here:
 
 
 https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;a=blob_plain;f=C
 HANGES;hb=4.2
 
 Source release revision 3911 (checksums and signatures are available at
 the same location):
 https://dist.apache.org/repos/dist/dev/cloudstack/4.2.1/
 
 PGP release keys (signed using RSA Key ID = 42443AA1):
 https://dist.apache.org/repos/dist/release/cloudstack/KEYS
 
 Vote will be open for 72 hours (until 20 Dec 2013 End of day PST).
 
 For sanity in tallying the vote, can PMC members please be sure to
 indicate (binding) with their vote?
 
 [ ] +1  approve
 [ ] +0  no opinion
 [ ] -1  disapprove (and reason why)
 
 
 



RE: Hyper-V agent

2014-01-06 Thread Donal Lafferty
What's in the .config file?  (Be sure not to publish sensitive IPs and keys ;)

 -Original Message-
 From: Alex Hitchins [mailto:alex.hitch...@shapeblue.com]
 Sent: 06 January 2014 09:02
 To: dev@cloudstack.apache.org; Donal Lafferty; Anshul Gangwar
 Subject: RE: Hyper-V agent
 
 Just noticed in the error log that it's trying to start on computer '.' - 
 while I
 know the . works in SQL Server etc, could it not be working here? Or have
 you set a custom hosts file?
 
 --
 Error occured in starting service Cannot start service CloudStack Hyper-V
 Agent on computer '.'.
 
 
 
 Alex Hitchins
 +44 7788 423 969
 
 -Original Message-
 From: Devdeep Singh [mailto:devdeep.si...@citrix.com]
 Sent: 06 January 2014 08:40
 To: dev@cloudstack.apache.org; Donal Lafferty; Anshul Gangwar
 Subject: RE: Hyper-V agent
 
 Error seems to be in starting the service. Can you check under services
 (services.msc) if a service is present by the name CloudStack Hyper-V
 Agent? To debug the service start issue, can you open up the 8250 port (or
 try disable firewall) and check if the service starts up.
 
 Regards,
 Devdeep
 
 -Original Message-
 From: Paul Angus [mailto:paul.an...@shapeblue.com]
 Sent: Friday, January 3, 2014 11:19 PM
 To: dev@cloudstack.apache.org; Donal Lafferty; Anshul Gangwar
 Subject: RE: Hyper-V agent
 
 So... updating .net 4.5.1 and a reboot and CloudAgent builds (with 19
 warnings) http://pastebin.com/ahz5yJw2
 
 I copy it to my hyper-v box and it bombs out immediately
 http://imgur.com/NMan0S2
 
 Install log says:
 
 Installing assembly 'B:\Microsoft\AgentShell\AgentShell.exe'.
 Affected parameters are:
assemblypath = B:\Microsoft\AgentShell\AgentShell.exe
logfile = B:\Microsoft\AgentShell\AgentShell.InstallLog
 Installing service CloudStack Hyper-V Agent...
 Service CloudStack Hyper-V Agent has been successfully installed.
 Creating EventLog source CloudStack Hyper-V Agent in log Application...
 See the contents of the log file for the
 B:\Microsoft\AgentShell\AgentShell.exe assembly's progress.
 The file is located at B:\Microsoft\AgentShell\AgentShell.InstallLog.
 Committing assembly 'B:\Microsoft\AgentShell\AgentShell.exe'.
 Affected parameters are:
logtoconsole =
assemblypath = B:\Microsoft\AgentShell\AgentShell.exe
logfile = B:\Microsoft\AgentShell\AgentShell.InstallLog
 
 agent log says:
 
 2014-01-03 17:33:59,755 [1] DEBUG CloudStack.Plugin.AgentShell.Program
 [(null)] - CloudStack Hyper-V Agent arg is
 2014-01-03 17:33:59,823 [1] INFO  CloudStack.Plugin.AgentShell.Program
 [(null)] - Installing and running CloudStack Hyper-V Agent
 2014-01-03 17:34:02,185 [1] ERROR CloudStack.Plugin.AgentShell.Program
 [(null)] -  Error occured in starting service Cannot start service CloudStack
 Hyper-V Agent on computer '.'.
 
 
 Regards,
 
 Paul Angus
 Cloud Architect
 S: +44 20 3603 0540 | M: +447711418784 | T: @CloudyAngus
 paul.an...@shapeblue.com
 
 From: Paul Angus [mailto:paul.an...@shapeblue.com]
 Sent: 03 January 2014 16:35
 To: Donal Lafferty; dev@cloudstack.apache.org; Anshul Gangwar
 Subject: RE: Hyper-V agent
 
 When (trying) to build the hyper-v agent I get these errors:
 
 Build FAILED.
 
 Warnings:
 
 C:\cygwin64\usr\local\cloudstack\plugins\hypervisors\hyperv\DotNet\Serve
 rResource\ServerResource.sln (default targets) - (Build target) -
 C:\cygwin64\usr\local\cloudstack\plugins\hypervisors\hyperv\DotNet\Serve
 rResource\WmiWrappers\WmiWrappers.csproj (default targets) -
 C:\PROGRA~2\MONO-3~1.3\lib\mono\4.0\Microsoft.Common.targets
 (ResolveAssemblyReferences target) -
 
 C:\PROGRA~2\MONO-3~1.3\lib\mono\4.0\Microsoft.Common.targets:
 warning : Reference 'AWSSDK' not resolved
 C:\PROGRA~2\MONO-3~1.3\lib\mono\4.0\Microsoft.Common.targets:
 warning : Reference 'Ionic.Zip' not resolved
 C:\PROGRA~2\MONO-3~1.3\lib\mono\4.0\Microsoft.Common.targets:
 warning : Reference 'log4net' not resolved
 C:\PROGRA~2\MONO-3~1.3\lib\mono\4.0\Microsoft.Common.targets:
 warning : Reference 'Newtonsoft.Json' not resolved
 C:\PROGRA~2\MONO-3~1.3\lib\mono\4.0\Microsoft.Common.targets:
 warning : Reference 'NSubstitute' not resolved
 C:\PROGRA~2\MONO-3~1.3\lib\mono\4.0\Microsoft.Common.targets:
 warning : Reference 'xunit' not resolved
 
 Errors:
 
 C:\cygwin64\usr\local\cloudstack\plugins\hypervisors\hyperv\DotNet\Serve
 rResource\ServerResource.sln (default targets) - (Build target) -
 C:\cygwin64\usr\local\cloudstack\plugins\hypervisors\hyperv\DotNet\Serve
 rResource\HypervResource\HypervResource.csproj (default targets) -
 C:\cygwin64\usr\local\cloudstack\plugins\hypervisors\hyperv\DotNet\Serve
 rResource\\.nuget\NuGet.targets (RestorePackages target) -
 
 
 C:\cygwin64\usr\local\cloudstack\plugins\hypervisors\hyperv\DotNet\Serve
 rResource\\.nuget\NuGet.targets: error : Command 'mono --
 runtime=v4.0.30319
 C:\cygwin64\usr\local\cloudstack\plugins\hypervisors\hyperv\DotNet\Serve
 rResource\.nuget\NuGet.exe install
 

Re: Research areas in cloudstack

2014-01-06 Thread sebgoa

On Jan 6, 2014, at 5:33 AM, jitendra shelar jitendra.shelar...@gmail.com 
wrote:

 Hi All,
 
 I am pursuing with my MS at BITs, Pilani, India.
 I am planning of doing my final sem project in cloudstack.
 
 Can somebody please suggest me some research areas in cloudstack?
 
 Thanks,
 Jitendra

I replied on users@ but since both lists were copied:

Hi Jitendra, it depends what you mean by 'research', but there are a lot of 
interesting projects to do IMHO:

-Integration testing: Develop the Marvin framework and write tests to 
continuously check the support for CloudStack clients (libcloud, AWS, jclouds 
etc) this would require learning jenkins, understanding continuous integration 
pipeline and finding ways to tests these clients automatically.

-Investigating Xen GPU passthrough and setting up a demo where Xen hypervisors 
are tagged specifically for VM that need access to GPUs, run CUDA code on them…

-Investigate Mesos framework and develop deployments scripts to automatically 
deploy Mesos on a CloudStack infrastructure, the end goal being to demo running 
mixed workloads (MPI, hadoop, spark) on a virtualized infrastructure in 
cloudstack

-Docker integration/use, a few of us have been talking about this and you would 
be likely to get some help from the community.

-Review of configuration management systems (chef, puppet, ansible, 
saltstack...) develop recipes for deploying cloudstack for all systems (some 
already exist), from source and from packages. Include management server and 
hypervisor setup. Ideally had a wrapper to link to the hypervisor and the mgt 
server together automatically using Marvin.

-Investigate PaaS solutions and their integration with CloudStack. Software 
like cloudify, openshift, cloudfoundry, appscale…some of it is already done but 
a thorough analysis of pros and cons as well as code writing to finish the 
integration of some would be great.

You can also check out JIRA: https://issues.apache.org/jira/browse/CLOUDSTACK , 
browse through the long list of 'bugs' and pick what seems interesting to you.

This all depends of course on your skills and interest, are you more of a java 
developer or a sys admin ? Are you interested in integration with third party 
software or core java development ?

Cheers,

-sebastien

Re: Nexenta iSCSI Storage driver

2014-01-06 Thread Francois Gaudreault

Victor,

What would you gain? I mean, isn't Nexenta using open-iscsi?

Francois

On 1/2/2014, 5:25 PM, Victor Rodionov wrote:

Hello,

I'm working on Nexenta iSCSI storage driver for cloudstack, what you think
about this guys?

Thanks,
Victor Rodionov




--
Francois Gaudreault
Architecte de Solution Cloud | Cloud Solutions Architect
fgaudrea...@cloudops.com
514-629-6775
- - -
CloudOps
420 rue Guy
Montréal QC  H3J 1S6
www.cloudops.com
@CloudOps_



HELP: storage overprovision for storage plugins

2014-01-06 Thread Marcus Sorensen
Does anyone know how to make our storage plugin allow overprovisioning
with the new storage framework? Looks like its currently hardcoded to
just NFS or VMFS.

I imagine we'd want to add a method to StoragePool, boolean
StoragePool.getOverprovision()


server/src/com/cloud/storage/StorageManagerImpl.java

if (storagePool.getPoolType() == StoragePoolType.NetworkFilesystem
        || storagePool.getPoolType() == StoragePoolType.VMFS) {
    BigDecimal overProvFactor =
            getStorageOverProvisioningFactor(storagePool.getDataCenterId());
    // BigDecimal is used here because of the inaccuracy of floats in
    // big-number multiplication.
    totalOverProvCapacity = overProvFactor
            .multiply(new BigDecimal(storagePool.getCapacityBytes())).longValue();
} else {
    totalOverProvCapacity = storagePool.getCapacityBytes();
}
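The `StoragePool.getOverprovision()` idea proposed above could look like the following self-contained sketch. All type names here are re-declared stand-ins for the CloudStack interfaces, not the real API; the point is that the capacity calculation asks the pool instead of hardcoding NFS/VMFS.

```java
// Sketch of the proposed StoragePool.getOverprovision() approach.
// Hypothetical stand-in types, not the actual CloudStack interfaces.
import java.math.BigDecimal;

public class OverprovisionSketch {
    enum StoragePoolType { NetworkFilesystem, VMFS, Iscsi }

    interface StoragePool {
        StoragePoolType getPoolType();
        long getCapacityBytes();
        // Proposed addition: the pool (or its driver) says whether
        // overprovisioning applies, replacing the hardcoded type check.
        boolean getOverprovision();
    }

    // The capacity calculation from StorageManagerImpl, rewritten
    // against the proposed method.
    static long totalOverProvCapacity(StoragePool pool, BigDecimal overProvFactor) {
        if (pool.getOverprovision()) {
            // BigDecimal avoids float inaccuracy for big-number multiplication.
            return overProvFactor
                    .multiply(new BigDecimal(pool.getCapacityBytes())).longValue();
        }
        return pool.getCapacityBytes();
    }
}
```

With this shape, a storage plugin opts into overprovisioning by how it implements `getOverprovision()`, and `StorageManagerImpl` never needs to know the pool type.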


Re: Nexenta iSCSI Storage driver

2014-01-06 Thread Victor Rodionov
Hello,

The driver will create/delete volumes and volume snapshots, and maybe
migrate volumes.

Thanks,
Victor Rodionov


2014/1/6 Francois Gaudreault fgaudrea...@cloudops.com

 Victor,

  What would you gain? I mean, isn't Nexenta using open-iscsi?

 Francois


 On 1/2/2014, 5:25 PM, Victor Rodionov wrote:

 Hello,

 I'm working on Nexenta iSCSI storage driver for cloudstack, what you think
 about this guys?

 Thanks,
 Victor Rodionov



 --
 Francois Gaudreault
 Architecte de Solution Cloud | Cloud Solutions Architect
 fgaudrea...@cloudops.com
 514-629-6775
 - - -
 CloudOps
 420 rue Guy
 Montréal QC  H3J 1S6
 www.cloudops.com
 @CloudOps_




Old jars on nightly System VM templates

2014-01-06 Thread SuichII, Christopher
I updated to the latest System VM templates from 
http://jenkins.buildacloud.org/job/build-systemvm-master/ and the CloudStack 
jars in /usr/local/cloud/systemvm/ appear to be from Nov. 1. Should the System 
VM build be pulling newer jars than that?

-Chris
--
Chris Suich
chris.su...@netapp.com
NetApp Software Engineer
Data Center Platforms – Cloud Solutions
Citrix, Cisco & Red Hat



RE: HELP: storage overprovision for storage plugins

2014-01-06 Thread Edison Su
We can move it to the storage driver's capabilities method.
Each storage driver can report its capabilities in DataStoreDriver.getCapabilities(), 
which returns a Map<String, String>; we can change the signature to 
Map<String, Object>.
In CloudStackPrimaryDataStoreDriverImpl (the default storage driver), 
getCapabilities() could then return something like:

StorageOverProvision comparator = new StorageOverProvision() {
    public Boolean isOverProvisionSupported(DataStore store) {
        PrimaryDataStoreInfo storagePool = (PrimaryDataStoreInfo) store;
        return storagePool.getPoolType() == StoragePoolType.NetworkFilesystem
                || storagePool.getPoolType() == StoragePoolType.VMFS;
    }
};
Map<String, Object> caps = new HashMap<String, Object>();
caps.put("storageOverProvision", comparator);
return caps;

Whenever other places in the mgt server want to check the overprovision 
capability, we can do the following:

DataStore primaryStore = dataStoreManager.getPrimaryDataStore(primaryStoreId);
Map<String, Object> caps = primaryStore.getDriver().getCapabilities();
StorageOverProvision overprovision = (StorageOverProvision) caps.get("storageOverProvision");
boolean result = overprovision.isOverProvisionSupported(primaryStore);





 -Original Message-
 From: Marcus Sorensen [mailto:shadow...@gmail.com]
 Sent: Monday, January 06, 2014 9:19 AM
 To: dev@cloudstack.apache.org
 Subject: HELP: storage overprovision for storage plugins
 
 Does anyone know how to make our storage plugin allow overprovisioning
 with the new storage framework? Looks like its currently hardcoded to just
 NFS or VMFS.
 
 I imagine we'd want to add a method to StoragePool, boolean
 StoragePool.getOverprovision()
 
 
 server/src/com/cloud/storage/StorageManagerImpl.java
 
 if (storagePool.getPoolType() == StoragePoolType.NetworkFilesystem ||
 storagePool.getPoolType() ==
 StoragePoolType.VMFS) {
 BigDecimal overProvFactor =
 getStorageOverProvisioningFactor(storagePool.getDataCenterId());
 totalOverProvCapacity = overProvFactor.multiply(new
 BigDecimal(storagePool.getCapacityBytes())).longValue();
 // All this is for the inaccuracy of floats for big number 
 multiplication.
 } else {
 totalOverProvCapacity = storagePool.getCapacityBytes();
 }
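A compilable rendition of the capability-map pattern from this thread is sketched below, including the consumer-side fallback for a plugin driver that does not advertise the capability at all. All names are stand-ins, not the actual CloudStack classes.

```java
// Sketch of the driver-capability-map pattern, with stand-in types.
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class CapabilityMapSketch {
    enum StoragePoolType { NetworkFilesystem, VMFS, Iscsi }

    interface DataStore { StoragePoolType getPoolType(); }

    // The callback object a driver puts into its capability map.
    interface StorageOverProvision {
        boolean isOverProvisionSupported(DataStore store);
    }

    // Default driver: overprovision only for NFS/VMFS, as in StorageManagerImpl.
    static Map<String, Object> defaultDriverCapabilities() {
        Map<String, Object> caps = new HashMap<String, Object>();
        caps.put("storageOverProvision", (StorageOverProvision) store ->
                store.getPoolType() == StoragePoolType.NetworkFilesystem
                        || store.getPoolType() == StoragePoolType.VMFS);
        return caps;
    }

    // A plugin driver that does not report the capability.
    static Map<String, Object> bareDriverCapabilities() {
        return Collections.emptyMap();
    }

    // Management-server side: look the capability up instead of hardcoding
    // pool types; fall back to "no overprovision" when it is absent.
    static boolean supportsOverProvision(Map<String, Object> caps, DataStore store) {
        StorageOverProvision op = (StorageOverProvision) caps.get("storageOverProvision");
        return op != null && op.isOverProvisionSupported(store);
    }
}
```

The null check on the map lookup is what keeps existing plugins working unchanged: a driver that never heard of the capability simply gets the old non-overprovisioned behavior.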


Re: HELP: storage overprovision for storage plugins

2014-01-06 Thread Marcus Sorensen
https://issues.apache.org/jira/browse/CLOUDSTACK-5806

On Mon, Jan 6, 2014 at 12:46 PM, Marcus Sorensen shadow...@gmail.com wrote:
 Thanks, I've created an issue for it. It's unassigned, I assume
 Edison, Mike, Chris S, or myself could fix it, I don't have time to
 immediately look into it (I simply removed the check for type to get
 around our immediate issue) but will try to circle back around on it
 if nobody can help.

 On Mon, Jan 6, 2014 at 12:14 PM, Edison Su edison...@citrix.com wrote:
 We can move it to storage driver's capabilities method.
 Each storage driver can report its capabilities in DataStoreDriver- 
 getCapabilities(), which returns a map[String, String], we can change the 
 signature to map[String, Object]
 In CloudStackPrimaryDataStoreDriverImpl(the default storage driver)- 
 getCapabilities, which can return something like:

 Var comparator = new  storageOverProvision() {
 Public Boolean isOverProvisionSupported(DataStore store) {
Var storagepool = (PrimaryDataStoreInfo)store;
If (store.getPoolType() == NFS or VMFS) {
Return true;
   }
  };
 };
 Var caps = new HashMap[String, Object]();
 Caps.put(storageOverProvision, comparator);
 Return caps;
 }

 Whenever, other places in mgt server want to check the capabilities of 
 overprovision, we can do the following:

 Var primaryStore = DataStoreManager. getPrimaryDataStore(primaryStoreId);
 var caps = primaryStore. getDriver().getCapabilities();
 var overprovision = caps.get(storageOverProvision);
 var result = overprovision. isOverProvisionSupported(primaryStore);





 -Original Message-
 From: Marcus Sorensen [mailto:shadow...@gmail.com]
 Sent: Monday, January 06, 2014 9:19 AM
 To: dev@cloudstack.apache.org
 Subject: HELP: storage overprovision for storage plugins

 Does anyone know how to make our storage plugin allow overprovisioning
 with the new storage framework? Looks like its currently hardcoded to just
 NFS or VMFS.

 I imagine we'd want to add a method to StoragePool, boolean
 StoragePool.getOverprovision()


 server/src/com/cloud/storage/StorageManagerImpl.java

 if (storagePool.getPoolType() == StoragePoolType.NetworkFilesystem 
 ||
 storagePool.getPoolType() ==
 StoragePoolType.VMFS) {
 BigDecimal overProvFactor =
 getStorageOverProvisioningFactor(storagePool.getDataCenterId());
 totalOverProvCapacity = overProvFactor.multiply(new
 BigDecimal(storagePool.getCapacityBytes())).longValue();
 // All this is for the inaccuracy of floats for big number 
 multiplication.
 } else {
 totalOverProvCapacity = storagePool.getCapacityBytes();
 }


Re: HELP: storage overprovision for storage plugins

2014-01-06 Thread Marcus Sorensen
Thanks, I've created an issue for it. It's unassigned; I assume
Edison, Mike, Chris S, or I could fix it. I don't have time to
look into it immediately (I simply removed the check for type to get
around our immediate issue), but I will try to circle back around on it
if nobody can help.

On Mon, Jan 6, 2014 at 12:14 PM, Edison Su edison...@citrix.com wrote:
 We can move it to storage driver's capabilities method.
 Each storage driver can report its capabilities in DataStoreDriver- 
 getCapabilities(), which returns a map[String, String], we can change the 
 signature to map[String, Object]
 In CloudStackPrimaryDataStoreDriverImpl(the default storage driver)- 
 getCapabilities, which can return something like:

 Var comparator = new  storageOverProvision() {
 Public Boolean isOverProvisionSupported(DataStore store) {
Var storagepool = (PrimaryDataStoreInfo)store;
If (store.getPoolType() == NFS or VMFS) {
Return true;
   }
  };
 };
 Var caps = new HashMap[String, Object]();
 Caps.put(storageOverProvision, comparator);
 Return caps;
 }

 Whenever, other places in mgt server want to check the capabilities of 
 overprovision, we can do the following:

 Var primaryStore = DataStoreManager. getPrimaryDataStore(primaryStoreId);
 var caps = primaryStore. getDriver().getCapabilities();
 var overprovision = caps.get(storageOverProvision);
 var result = overprovision. isOverProvisionSupported(primaryStore);





 -Original Message-
 From: Marcus Sorensen [mailto:shadow...@gmail.com]
 Sent: Monday, January 06, 2014 9:19 AM
 To: dev@cloudstack.apache.org
 Subject: HELP: storage overprovision for storage plugins

 Does anyone know how to make our storage plugin allow overprovisioning
 with the new storage framework? Looks like its currently hardcoded to just
 NFS or VMFS.

 I imagine we'd want to add a method to StoragePool, boolean
 StoragePool.getOverprovision()


 server/src/com/cloud/storage/StorageManagerImpl.java

 if (storagePool.getPoolType() == StoragePoolType.NetworkFilesystem ||
 storagePool.getPoolType() ==
 StoragePoolType.VMFS) {
 BigDecimal overProvFactor =
 getStorageOverProvisioningFactor(storagePool.getDataCenterId());
 totalOverProvCapacity = overProvFactor.multiply(new
 BigDecimal(storagePool.getCapacityBytes())).longValue();
 // All this is for the inaccuracy of floats for big number 
 multiplication.
 } else {
 totalOverProvCapacity = storagePool.getCapacityBytes();
 }


IPv6 in VPC (was Re: IPv6 plan - questions)

2014-01-06 Thread Marcus Sorensen
I've discussed this a bit with various subject matter experts at our
datacenters/business, and so far we're leaning toward a rollout like
this:

* VPC has no global IPv6 prefix (super CIDR as current private space),
it's simply IPv6 enabled or not. Admins can choose to route a /60 or a
/48 to a vpc and carve it up among the networks there, but it's not
required or enforced by cloudstack.

* VPC networks get assigned one or multiple IPv6 prefixes, each prefix
can be marked 'SLAAC', 'DHCP', or 'Manual'.

* Mgmt server allows calling external plugins for pushing routes to
routers upstream of VPCs (SDN or router API), but could be manually
done by admins as well

* Work could be done in stages, e.g. SLAAC/manual network ranges would
be fairly straightforward, whereas DHCP ranges would require
programming scripts and ip allocation code.

* An issue was raised about privacy concerns with SLAAC using MAC, but
we think this revolves more around clients than servers; that is, a
client moving around the country would be traceable because of the
MAC, but a server always has the same address anyway.

* Still need to figure out what to do about ACLs, that's a whole
separate issue, but a plan needs to be in place. Do we care about port
forwarding, static NAT, Load Balancing etc as well? Or at least think
about what impact these decisions have.

* We assume there will be an assignable IPv6 public range, allocated
for VPC routers/static NAT/load balancers to pull from.

On Sat, Jan 4, 2014 at 6:11 AM, Marcus Sorensen shadow...@gmail.com wrote:
 I've put together a rough draft spec:

 https://cwiki.apache.org/confluence/display/CLOUDSTACK/IPv6+in+VPC+Router

 I basically just laid out some rough ideas. I know there has been a
 lot of discussion in the past about DHCPv6, etc. My hope is that we
 can at least decide on a spec, for future reference.


 On Fri, Jan 3, 2014 at 9:53 PM, Marcus Sorensen shadow...@gmail.com wrote:
 It's been a long time since I've heard anything in regards to IPv6,
 let alone VPC support. Does anyone have plans for this at all?  We'd
 like to support IPv6, and we have enough CS knowledge and external
 tools to hack something together, but I'd much prefer to build with
 the community and/or be forward compatible with what it deploys.

 I'd like to start with something simple, like perhaps optionally
 providing a /64 or larger as a parameter when creating a VPC (or a
 separate call to add an IPV6 block), and network on the vpc. Then it
 sounds like there's already a mechanism in place for tracking ipv6
 assignments to nics, that could be leveraged to pass dhcp assignments
 to routers.

 Then there's the whole acl thing, that seems like at least as big of a
 project as mentioned previously.

 On Mon, Aug 12, 2013 at 3:47 PM, Marcus Sorensen shadow...@gmail.com wrote:
 has there been any further discussion that I might have missed around
 ipv6 in VPC?

 On Thu, Mar 7, 2013 at 12:09 PM, Sheng Yang sh...@yasker.org wrote:
 Hi Dave,

 I am glad it fits your need. That's our target. :)

 --Sheng

 On Thu, Mar 7, 2013 at 2:14 AM, Dave Cahill dcah...@midokura.com wrote:
 Hi Sheng,

 Thanks for the quick reply, that helps a lot.

 My main purpose was to figure out how these changes affect virtual
 networking and pluggability. Having read through the IPv6 code today,
 it looks like it will work very nicely with virtual networks.

 For example, when VMs are assigned an IPv6 address, the IPv6 address
 is stored in the NicProfile object. So, taking DHCP as an example, if
 the MidoNet plugin implements the DHCPServiceProvider interface, it
 will receive the NicProfile as one of the parameters of addDhcpEntry.
 If we want to implement IPv6, we can then take the IPv6 address from
 the NicProfile, and just use it as needed.

 Thanks again for taking the time to respond, and for the detailed FS.

 Dave.

 On Thu, Mar 7, 2013 at 4:57 AM, Sheng Yang sh...@yasker.org wrote:

 On Wed, Mar 6, 2013 at 1:36 AM, Dave Cahill dcah...@midokura.com wrote:
  Hi,

 Hi Dave,
 
  I've been catching up on IPv6 plans by reading the functional specs
  and Jira tickets - it's great to have so much material to refer to.
 
  I still have a few questions though, and I'm hoping someone involved
  with the feature can enlighten me.
 
  *[Support for Providers other than Virtual Router]*
  In [3], the spec says No external device support in plan.
  What does this mean exactly?

 CloudStack also supports using external devices as network
 controllers, e.g. Juniper SRX as a firewall and NetScaler as a load
 balancer. What that sentence means is just that we don't support these
 devices when using IPv6.
 
  For example, if using Providers other than the Virtual Router, does
  the UI still allow setting IPv6 addresses?
 
  If so, do we attempt to pass IPv6 addresses to the Providers no
  matter what, or do we check whether the Provider has IPv6 support?

 Yes, we check this when you try to create an IPv6 network (currently
 we only support advanced shared 

Re: Old jars on nightly System VM templates

2014-01-06 Thread Wei ZHOU
The jars are injected into the system VMs from the systemvm.iso on the host.


2014/1/6 SuichII, Christopher chris.su...@netapp.com

 I updated to the latest System VM templates from
 http://jenkins.buildacloud.org/job/build-systemvm-master/ and the
 CloudStack jars in /usr/local/cloud/systemvm/ appear to be from Nov. 1.
 Should the System VM build be pulling newer jars than that?

 -Chris
 --
 Chris Suich
 chris.su...@netapp.commailto:chris.su...@netapp.com
 NetApp Software Engineer
 Data Center Platforms – Cloud Solutions
 Citrix, Cisco & Red Hat




VMware snapshot question

2014-01-06 Thread Mike Tutkowski
Hi,

I was wondering about the following code in VmwareStorageManagerImpl. It is
in the CreateVMSnapshotAnswer execute(VmwareHostService hostService,
CreateVMSnapshotCommand cmd) method.

The part I wonder about is in populating the mapNewDisk map. For disks like
the following:

i-2-9-VM/fksjfaklsjdgflajs.vmdk, the key for the map ends up being i-2.

When we call this:

String baseName = extractSnapshotBaseFileName(volumeTO.getPath());

It uses a path such as the following:

fksjfaklsjdgflajs

There is no i-2-9-VM/ preceding the name, so the key we search on ends up
being the following:

fksjfaklsjdgflajs

This leads to a newPath being equal to null.

As it turns out, I believe null is actually correct, but, if that's the
case, why do we have all this logic if, in the end, we are just going to
assign null to newPath in every case when creating a VM snapshot for
VMware? null is later interpreted to mean "don't replace the path field of
this volume in the volumes table", which is, I think, what we want.

Thanks!

VirtualDisk[] vdisks = vmMo.getAllDiskDevice();

for (int i = 0; i < vdisks.length; i++) {
    List<Pair<String, ManagedObjectReference>> vmdkFiles =
        vmMo.getDiskDatastorePathChain(vdisks[i], false);

    for (Pair<String, ManagedObjectReference> fileItem : vmdkFiles) {
        String vmdkName = fileItem.first().split(" ")[1];

        if (vmdkName.endsWith(".vmdk")) {
            vmdkName = vmdkName.substring(0,
                vmdkName.length() - ".vmdk".length());
        }

        String baseName = extractSnapshotBaseFileName(vmdkName);

        mapNewDisk.put(baseName, vmdkName);
    }
}

for (VolumeObjectTO volumeTO : volumeTOs) {
    String baseName = extractSnapshotBaseFileName(volumeTO.getPath());

    String newPath = mapNewDisk.get(baseName);

    // get volume's chain size for this VM snapshot, exclude current volume vdisk
    DataStoreTO store = volumeTO.getDataStore();

    long size = getVMSnapshotChainSize(context, hyperHost, baseName + "*.vmdk",
        store.getUuid(), newPath);

    if (volumeTO.getVolumeType() == Volume.Type.ROOT) {
        // add memory snapshot size
        size = size + getVMSnapshotChainSize(context, hyperHost,
            cmd.getVmName() + "*.vmsn", store.getUuid(), null);
    }

    volumeTO.setSize(size);

    volumeTO.setPath(newPath);
}
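
The lookup miss described above can be shown with a minimal, self-contained sketch (the disk names are hypothetical and the real extractSnapshotBaseFileName is deliberately not reproduced): when the map is keyed from the datastore path while the lookup key comes from the bare volume path, get() returns null.

```java
import java.util.HashMap;
import java.util.Map;

public class SnapshotKeyMismatch {
    // Hypothetical illustration: the map key carries the VM-folder prefix
    // from the datastore path, but the volume path used for the lookup
    // does not, so the lookup finds nothing and newPath ends up null.
    public static String newPathFor(String volumePath) {
        Map<String, String> mapNewDisk = new HashMap<>();
        mapNewDisk.put("i-2-9-VM/fksjfaklsjdgflajs", "i-2-9-VM/fksjfaklsjdgflajs-000001");
        return mapNewDisk.get(volumePath); // no match without the prefix
    }
}
```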

-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloudhttp://solidfire.com/solution/overview/?video=play
*™*


RE: 4.3 : Developer Profile, tools module commented

2014-01-06 Thread Frank Zhang
Sorry, this was a mistake.
The maven build takes too long, so I commented tools out as it's rarely used by 
a developer build. Later on I found there is 
a profile called 'impatient' which does exactly the same thing.
Before bringing it back, I wonder if it's right to put 'tools' in the developer 
profile at all? I am sure 90% of developers won't use it. Why not
move it to a profile only used by the RPM build?
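
A hedged sketch of what that could look like in the root pom.xml (the profile id and its activation via -Ptools are assumptions, not the project's actual configuration):

```xml
<!-- Hypothetical: a dedicated profile so 'tools' builds only when
     explicitly requested, e.g. by the RPM packaging scripts via -Ptools -->
<profile>
  <id>tools</id>
  <modules>
    <module>tools</module>
  </modules>
</profile>
```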

 -Original Message-
 From: Santhosh Edukulla [mailto:santhosh.eduku...@citrix.com]
 Sent: Tuesday, December 31, 2013 7:25 AM
 To: dev@cloudstack.apache.org
 Subject: 4.3 : Developer Profile, tools module commented
 
 Team,
 
 For branch 4,3, the below commit appears to have commented the tools
 module under developer profile. Any specific reason?
 
 commit fb1f3f0865c254abebfa5a43f66cef116fe36165
 Author: Frank.Zhang frank.zh...@citrix.com
 Date:   Mon Oct 7 18:03:12 2013 -0700
 
 Add missing Baremetal security_group_agent java part
 Change security_group_agent python side in line with default security
 group rules change in 4.2
 
 Conflicts:
 
 
 plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/Bare
 MetalResourceBase.java
 
 diff --git a/pom.xml b/pom.xml
 index 2cee084..31946d8 100644
 --- a/pom.xml
 +++ b/pom.xml
 @@ -747,7 +747,9 @@
/properties
modules
  moduledeveloper/module
 +!--
  moduletools/module
 +--
/modules
  /profile
  profile
 
 Thanks!
 Santhosh


[Proposal] Switch to Java 7

2014-01-06 Thread Kelven Yang
Java 7 has been around for some time now. I strongly suggest CloudStack adopt 
Java 7 as early as possible. The reason I raise the issue now comes from some 
practice with the new DB transaction pattern, as the following example shows. 
The new Transaction pattern uses an anonymous class to beautify the code 
structure, but in the meantime it introduces a couple of runtime costs:

  1.  An anonymous class introduces a "captured context": information exchange 
between the containing context and the anonymous class implementation has to go 
through either a mutable passed-in parameter or a returned result object. In the 
following example, without changing the basic Transaction framework, I have to 
exchange data through a returned un-typed array. This has a few implications at 
run time; basically, each call of the method generates two extra objects on the 
heap. Depending on how frequently the involved method is called, this may put 
quite a burden on the Java GC process.
  2.  A captured context also means more hidden classes are generated: each 
appearance of an anonymous class implementation gets a distinct hidden class of 
its own, which generally increases our permanent heap usage, already pretty 
large with the current CloudStack code base.

Java 7 has language-level support that addresses, in a cheaper way, the issue 
our current DB Transaction code pattern is trying to solve: 
http://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html.
   So, time to adopt Java 7?
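
For readers unfamiliar with the linked feature, it is try-with-resources: any AutoCloseable declared in the try header is closed automatically, even when the body throws, with no finally block and no anonymous-class scaffolding. A minimal sketch (class and method names are mine):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class TryWithResources {
    // Java 7 try-with-resources: the reader is closed automatically when
    // the block exits, normally or via an exception.
    public static String firstLine(String text) {
        try (BufferedReader reader = new BufferedReader(new StringReader(text))) {
            return reader.readLine();
        } catch (IOException e) { // StringReader won't actually throw here
            throw new RuntimeException(e);
        }
    }
}
```

Note this addresses resource cleanup; shrinking the Transaction callback boilerplate itself would need the lambdas that arrive in Java 8.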

public Outcome<VirtualMachine> startVmThroughJobQueue(final String vmUuid,
        final Map<VirtualMachineProfile.Param, Object> params,
        final DeploymentPlan planToDeploy) {

    final CallContext context = CallContext.current();
    final User callingUser = context.getCallingUser();
    final Account callingAccount = context.getCallingAccount();

    final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);

    Object[] result = Transaction.execute(new TransactionCallback<Object[]>() {
        @Override
        public Object[] doInTransaction(TransactionStatus status) {
            VmWorkJobVO workJob = null;

            _vmDao.lockRow(vm.getId(), true);
            List<VmWorkJobVO> pendingWorkJobs =
                _workJobDao.listPendingWorkJobs(VirtualMachine.Type.Instance,
                    vm.getId(), VmWorkStart.class.getName());

            if (pendingWorkJobs.size() > 0) {
                assert (pendingWorkJobs.size() == 1);
                workJob = pendingWorkJobs.get(0);
            } else {
                workJob = new VmWorkJobVO(context.getContextId());

                workJob.setDispatcher(VmWorkConstants.VM_WORK_JOB_DISPATCHER);
                workJob.setCmd(VmWorkStart.class.getName());

                workJob.setAccountId(callingAccount.getId());
                workJob.setUserId(callingUser.getId());
                workJob.setStep(VmWorkJobVO.Step.Starting);
                workJob.setVmType(vm.getType());
                workJob.setVmInstanceId(vm.getId());
                workJob.setRelated(AsyncJobExecutionContext.getOriginJobContextId());

                // save work context info (there are some duplications)
                VmWorkStart workInfo = new VmWorkStart(callingUser.getId(),
                    callingAccount.getId(), vm.getId(),
                    VirtualMachineManagerImpl.VM_WORK_JOB_HANDLER);
                workInfo.setPlan(planToDeploy);
                workInfo.setParams(params);
                workJob.setCmdInfo(VmWorkSerializer.serialize(workInfo));

                _jobMgr.submitAsyncJob(workJob, VmWorkConstants.VM_WORK_QUEUE, vm.getId());
            }

            return new Object[] {workJob, new Long(workJob.getId())};
        }
    });

    final long jobId = (Long)result[1];
    AsyncJobExecutionContext.getCurrentExecutionContext().joinJob(jobId);

    return new VmStateSyncOutcome((VmWorkJobVO)result[0],
        VirtualMachine.PowerState.PowerOn, vm.getId(), null);
}


Kelven


Re: [Proposal] Switch to Java 7

2014-01-06 Thread Chiradeep Vittal
Yes, there was another discussion here:
http://markmail.org/thread/uf6bxab6u4z4fmrp



On 1/6/14 3:18 PM, Kelven Yang kelven.y...@citrix.com wrote:

Java 7 has been around for some time now. I strongly suggest CloudStack
to adopt Java 7 as early as possible [...]



Re: VMware snapshot question

2014-01-06 Thread Mike Tutkowski
In short, I believe we can remove mapNewDisk and just assign null to
newPath. This will keep the existing path for the volume in question (in
the volumes table) in the same state as it was before we created a VMware
snapshot, which I believe is the intent anyways.

Thoughts on that?


On Mon, Jan 6, 2014 at 4:10 PM, Mike Tutkowski mike.tutkow...@solidfire.com
 wrote:

 Hi,

 I was wondering about the following code in VmwareStorageManagerImpl. [...]




-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloudhttp://solidfire.com/solution/overview/?video=play
*™*


Re: [Proposal] Switch to Java 7

2014-01-06 Thread Kelven Yang
Yes, it is for the same reason: to manage resource leaks in a better way.
Java 7 finally added this long-awaited language feature. I've been using
JRE 7 with CloudStack for a while and didn't see any issues.

Kelven

On 1/6/14, 3:34 PM, Chiradeep Vittal chiradeep.vit...@citrix.com wrote:

Yes, there was another discussion here:
http://markmail.org/thread/uf6bxab6u4z4fmrp



On 1/6/14 3:18 PM, Kelven Yang kelven.y...@citrix.com wrote: [...]




Re: VMware snapshot question

2014-01-06 Thread Mike Tutkowski
Actually, the more I look at this code, the more I think VMware snapshots
may be broken, because the newPath field should probably not be assigned
null after creating a new VMware snapshot (I'm thinking the intent is to
replace the old path with a new path that refers to the delta file that
was just created).

Does anyone know who worked on VMware snapshots? I'd love to ask him these
questions soon, as we are approaching the end of 4.3.

Thanks!


On Mon, Jan 6, 2014 at 4:35 PM, Mike Tutkowski mike.tutkow...@solidfire.com
 wrote:

 In short, I believe we can remove mapNewDisk and just assign null to
 newPath. [...]

 On Mon, Jan 6, 2014 at 4:10 PM, Mike Tutkowski
 mike.tutkow...@solidfire.com wrote: [...]




-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloudhttp://solidfire.com/solution/overview/?video=play
*™*


Re: [Proposal] Switch to Java 7

2014-01-06 Thread Ryan Lei
There was yet another similar discussion a half-year ago:
http://markmail.org/thread/ap6v46r3mdsgdszp

---
Yu-Heng (Ryan) Lei, Associate Researcher
Cloud Computing Dept, Chunghwa Telecom Labs
ryan...@cht.com.tw or ryanlei750...@gmail.com



On Tue, Jan 7, 2014 at 7:34 AM, Chiradeep Vittal 
chiradeep.vit...@citrix.com wrote:

 Yes, there was another discussion here:
 http://markmail.org/thread/uf6bxab6u4z4fmrp

 On 1/6/14 3:18 PM, Kelven Yang kelven.y...@citrix.com wrote: [...]




Re: [jira] [Reopened] (CLOUDSTACK-5432) [Automation] Libvtd getting crashed and agent going to alert start

2014-01-06 Thread Marcus Sorensen
This looks different, but I'll take a peek nonetheless.
On Jan 6, 2014 1:10 PM, Rayees Namathponnan (JIRA) j...@apache.org
wrote:


  [
 https://issues.apache.org/jira/browse/CLOUDSTACK-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel]

 Rayees Namathponnan reopened CLOUDSTACK-5432:
 -


 Still i am hitting this issue again; please see the agent log;  also
 attaching libvird  and agent logs


 2014-01-06 02:59:18,953 DEBUG [cloud.agent.Agent]
 (agentRequest-Handler-4:null) Request:Seq 2-812254431:  { Cmd , MgmtId:
 29066118877352, via: 2, Ver: v1, Flags: 100011,
 [{org.apache.cloudstack.storage.command.DeleteCommand:{data:{org.apache.cloudstack.storage.to.VolumeObjectTO:{uuid:98229b00-ad9e-4b90-a911-78a73f90548a,volumeType:ROOT,dataStore:{org.apache.cloudstack.storage.to.PrimaryDataStoreTO:{uuid:41b632b5-40b3-3024-a38b-ea259c72579f,id:2,poolType:NetworkFilesystem,host:10.223.110.232,path:/export/home/rayees/SC_QA_AUTO4/primary2,port:2049,url:NetworkFilesystem://
 10.223.110.232//export/home/rayees/SC_QA_AUTO4/primary2/?ROLE=PrimarySTOREUUID=41b632b5-40b3-3024-a38b-ea259c72579f
 }},name:ROOT-266,size:8589934592,path:98229b00-ad9e-4b90-a911-78a73f90548a,volumeId:280,vmName:i-212-266-QA,accountId:212,format:QCOW2,id:280,deviceId:0,hypervisorType:KVM}},wait:0}}]
 }
 2014-01-06 02:59:18,953 DEBUG [cloud.agent.Agent]
 (agentRequest-Handler-4:null) Processing command:
 org.apache.cloudstack.storage.command.DeleteCommand
 2014-01-06 02:59:25,054 DEBUG [cloud.agent.Agent]
 (agentRequest-Handler-1:null) Request:Seq 2-812254432:  { Cmd , MgmtId:
 29066118877352, via: 2, Ver: v1, Flags: 100111,
 [{com.cloud.agent.api.storage.DestroyCommand:{volume:{id:126,mountPoint:/export/home/rayees/SC_QA_AUTO4/primary,path:7c5859c4-792b-4594-81d7-1e149e8a6aef,size:0,storagePoolType:NetworkFilesystem,storagePoolUuid:fff90cb5-06dd-33b3-8815-d78c08ca01d9,deviceId:0},wait:0}}]
 }
 2014-01-06 02:59:25,054 DEBUG [cloud.agent.Agent]
 (agentRequest-Handler-1:null) Processing command:
 com.cloud.agent.api.storage.DestroyCommand
 2014-01-06 03:03:05,781 DEBUG [utils.nio.NioConnection]
 (Agent-Selector:null) Location 1: Socket 
 Socket[addr=/10.223.49.195,port=8250,localport=44856]
 closed on read.  Probably -1 returned: Connection closed with -1 on reading
 size.
 2014-01-06 03:03:05,781 DEBUG [utils.nio.NioConnection]
 (Agent-Selector:null) Closing socket Socket[addr=/10.223.49.195
 ,port=8250,localport=44856]
 2014-01-06 03:03:05,781 DEBUG [cloud.agent.Agent] (Agent-Handler-5:null)
 Clearing watch list: 2
 2014-01-06 03:03:10,782 INFO  [cloud.agent.Agent] (Agent-Handler-5:null)
 Lost connection to the server. Dealing with the remaining commands...
 2014-01-06 03:03:10,782 INFO  [cloud.agent.Agent] (Agent-Handler-5:null)
 Cannot connect because we still have 5 commands in progress.
 2014-01-06 03:03:15,782 INFO  [cloud.agent.Agent] (Agent-Handler-5:null)
 Lost connection to the server. Dealing with the remaining commands...
 2014-01-06 03:03:15,783 INFO  [cloud.agent.Agent] (Agent-Handler-5:null)
 Cannot connect because we still have 5 commands in progress.
 2014-01-06 03:03:20,783 INFO  [cloud.agent.Agent] (Agent-Handler-5:null)
 Lost connection to the server. Dealing with the remaining commands...

  [Automation] Libvtd getting crashed and agent going to alert start
  ---
 
  Key: CLOUDSTACK-5432
  URL:
 https://issues.apache.org/jira/browse/CLOUDSTACK-5432
  Project: CloudStack
   Issue Type: Bug
   Security Level: Public(Anyone can view this level - this is the
 default.)
   Components: KVM
 Affects Versions: 4.3.0
  Environment: KVM (RHEL 6.3)
  Branch : 4.3
 Reporter: Rayees Namathponnan
 Assignee: Marcus Sorensen
 Priority: Blocker
  Fix For: 4.3.0
 
  Attachments: KVM_Automation_Dec_11.rar, agent1.rar, agent2.rar,
 management-server.rar
 
 
  This issue is observed in  4.3 automation environment;  libvirt crashed
 and cloudstack agent went to alert start;
  Please see the agent log; connection between agent and MS lost with
 error Connection closed with -1 on reading size.  @ 2013-12-09
 19:47:06,969
  2013-12-09 19:43:41,495 DEBUG [cloud.agent.Agent]
 (agentRequest-Handler-2:null) Processing command:
 com.cloud.agent.api.GetStorageStatsCommand
  2013-12-09 19:47:06,969 DEBUG [utils.nio.NioConnection]
 (Agent-Selector:null) Location 1: Socket 
 Socket[addr=/10.223.49.195,port=8250,localport=40801]
 closed on read.  Probably -1 returned: Connection closed with -1 on reading
 size.
  2013-12-09 19:47:06,969 DEBUG [utils.nio.NioConnection]
 (Agent-Selector:null) Closing socket Socket[addr=/10.223.49.195
 ,port=8250,localport=40801]
  2013-12-09 19:47:06,969 DEBUG [cloud.agent.Agent] (Agent-Handler-3:null)
 Clearing watch list: 2
  2013-12-09 19:47:11,969 INFO  

Re: [Proposal] Switch to Java 7

2014-01-06 Thread Chiradeep Vittal
Java 7 is preferred for Apache Hadoop but not required
http://wiki.apache.org/hadoop/HadoopJavaVersions
(I was looking to see if other OSS projects had migrated)

--
Chiradeep

 On Jan 6, 2014, at 6:16 PM, Ryan Lei ryan...@cht.com.tw wrote:
 
 There was yet another similar discussion a half-year ago:
 http://markmail.org/thread/ap6v46r3mdsgdszp
 
 ---
 Yu-Heng (Ryan) Lei, Associate Researcher
 Cloud Computing Dept, Chunghwa Telecom Labs
 ryan...@cht.com.tw or ryanlei750...@gmail.com
 
 
 
 On Tue, Jan 7, 2014 at 7:34 AM, Chiradeep Vittal 
 chiradeep.vit...@citrix.com wrote:
 
 Yes, there was another discussion here:
 http://markmail.org/thread/uf6bxab6u4z4fmrp
 
 
 
 On 1/6/14 3:18 PM, Kelven Yang kelven.y...@citrix.com wrote:
 
 Java 7 has been around for some time now. I strongly suggest CloudStack
 adopt Java 7 as early as possible. The reason I feel like raising the
 issue comes from some practice with the new DB transaction pattern,
 as the following example shows. The new Transaction pattern uses anonymous
 classes to beautify the code structure, but in the meantime it
 introduces a couple of runtime costs:
 
 1.  An anonymous class introduces a "captured context"; information
 exchange between the containing context and the anonymous class
 implementation context has to go through either a mutable passed-in
 parameter or a returned result object. In the following example, without
 changing the basic Transaction framework, I have to exchange through a
 returned result with an un-typed array. This has a few implications at
 run time: basically, each call of the method generates two objects on
 the heap. Depending on how frequently the involved method is called,
 it may introduce quite a burden on the Java GC process.
 2.  The captured context also means that more hidden classes are
 generated: since each appearance of an anonymous class
 implementation has a distinct copy of its own as a hidden class, it
 will generally increase our permanent heap usage, which is already pretty
 huge with the current CloudStack code base.
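[Editorial note: the capture cost described in point 1 can be seen in a tiny standalone sketch. The names here (`Work`, `makeAdder`) are illustrative stand-ins, not CloudStack classes.]

```java
// Sketch: each use of an anonymous class defines a hidden class and, per
// call, allocates an instance that captures the enclosing context.
public class AnonymousCaptureSketch {

    // A tiny callback interface, standing in for TransactionCallback.
    interface Work {
        int run();
    }

    // Every call allocates a fresh anonymous-class instance on the heap,
    // carrying a captured copy of the local variable `base`.
    static Work makeAdder(final int base) {
        return new Work() {
            @Override
            public int run() {
                return base + 1; // reads the captured context
            }
        };
    }

    public static void main(String[] args) {
        Work adder = makeAdder(41);
        System.out.println(adder.run());                         // 42
        System.out.println(adder.getClass().isAnonymousClass()); // true
        // Two calls, two distinct heap objects -- the per-call GC cost
        // the mail is pointing at.
        System.out.println(makeAdder(1) == makeAdder(1));        // false
    }
}
```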
 
 Java 7 has language-level support that addresses, more cheaply, the issues
 our current DB Transaction code pattern is trying to solve:
 http://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html
 So, time to adopt Java 7?
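[Editorial note: the try-with-resources idiom behind that link could look like this when applied to a transaction-shaped resource. This is a minimal sketch assuming a hypothetical `Tx` wrapper, not any real CloudStack API.]

```java
// Sketch of Java 7 try-with-resources applied to a transaction wrapper.
public class TryWithResourcesSketch {

    // Minimal AutoCloseable transaction: rolls back on close() unless
    // commit() was called first.
    static class Tx implements AutoCloseable {
        private final StringBuilder log;
        private boolean committed;

        Tx(StringBuilder log) { this.log = log; }

        void commit() {
            committed = true;
            log.append("commit;");
        }

        @Override
        public void close() {
            if (!committed) {
                log.append("rollback;");
            }
            log.append("close;");
        }
    }

    // The resource is closed on every exit path -- normal return or
    // exception -- without an anonymous callback or captured context.
    static String runWork(boolean doCommit) {
        StringBuilder log = new StringBuilder();
        try (Tx tx = new Tx(log)) {
            if (doCommit) {
                tx.commit();
            }
        }
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(runWork(true));  // commit;close;
        System.out.println(runWork(false)); // rollback;close;
    }
}
```

Compared with the anonymous `TransactionCallback` below, results can be returned directly from the enclosing method instead of through an un-typed `Object[]`.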
 
  public Outcome<VirtualMachine> startVmThroughJobQueue(final String
 vmUuid,
  final Map<VirtualMachineProfile.Param, Object> params,
   final DeploymentPlan planToDeploy) {
 
   final CallContext context = CallContext.current();
   final User callingUser = context.getCallingUser();
   final Account callingAccount = context.getCallingAccount();
 
   final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);
 
 
   Object[] result = Transaction.execute(new
 TransactionCallback<Object[]>() {
   @Override
   public Object[] doInTransaction(TransactionStatus status) {
   VmWorkJobVO workJob = null;
 
  _vmDao.lockRow(vm.getId(), true);
 List<VmWorkJobVO> pendingWorkJobs =
 _workJobDao.listPendingWorkJobs(VirtualMachine.Type.Instance,
   vm.getId(), VmWorkStart.class.getName());
 
 if (pendingWorkJobs.size() > 0) {
  assert (pendingWorkJobs.size() == 1);
  workJob = pendingWorkJobs.get(0);
  } else {
  workJob = new VmWorkJobVO(context.getContextId());
 
 
 workJob.setDispatcher(VmWorkConstants.VM_WORK_JOB_DISPATCHER);
  workJob.setCmd(VmWorkStart.class.getName());
 
  workJob.setAccountId(callingAccount.getId());
  workJob.setUserId(callingUser.getId());
  workJob.setStep(VmWorkJobVO.Step.Starting);
  workJob.setVmType(vm.getType());
  workJob.setVmInstanceId(vm.getId());
 
 workJob.setRelated(AsyncJobExecutionContext.getOriginJobContextId());
 
  // save work context info (there are some duplications)
   VmWorkStart workInfo = new
 VmWorkStart(callingUser.getId(), callingAccount.getId(), vm.getId(),
 VirtualMachineManagerImpl.VM_WORK_JOB_HANDLER);
  workInfo.setPlan(planToDeploy);
  workInfo.setParams(params);
  workJob.setCmdInfo(VmWorkSerializer.serialize(workInfo));
 
   _jobMgr.submitAsyncJob(workJob,
 VmWorkConstants.VM_WORK_QUEUE, vm.getId());
   }
 
   return new Object[] {workJob, new Long(workJob.getId())};
   }
   });
 
   final long jobId = (Long)result[1];
   AsyncJobExecutionContext.getCurrentExecutionContext().joinJob(jobId);
 
   return new VmStateSyncOutcome((VmWorkJobVO)result[0],
   VirtualMachine.PowerState.PowerOn, vm.getId(), null);
   }
 
 
 Kelven
 
 


Re: VMware snapshot question

2014-01-06 Thread Kelven Yang


On 1/6/14, 5:33 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote:

Actually, the more I look at this code, the more I think perhaps VMware
snapshots are broken because the newPath field should probably not be
assigned null after creating a new VMware snapshot


(I'm thinking the intent
is to replace the other path with a new path that refers to the delta file
that was just created).

Yes, your guess is correct: the intent of updating with a new path is to
reflect the name change after a VM snapshot is taken. When VM snapshots are
involved, one CloudStack volume may have more than one disk related
to it, so the information in the path field only points to the top of the
disk chain, and it is not guaranteed to be in sync with
vCenter, since there may be out-of-band changes from vCenter (by taking a
VM snapshot in vCenter).

To work gracefully with vCenter, for existing disks that are attached to a
VM, CloudStack only uses the information stored in the path field of the
volume table as a search basis to connect the record with the real disk
chain in the storage. As soon as it has located the connection, it actually
uses the most recent information from the vCenter datastore to set up the
disk device. In addition, CloudStack also updates the full disk-chain
information to a field called "chainInfo". The full chain info can be used
for recovery/copy-out purposes.

Kelven



Does anyone know who worked on VMware snapshots? I'd love to ask these
questions to him soon, as we are approaching the end of 4.3.

Thanks!


On Mon, Jan 6, 2014 at 4:35 PM, Mike Tutkowski
mike.tutkow...@solidfire.com
 wrote:

 In short, I believe we can remove mapNewDisk and just assign null to
 newPath. This will keep the existing path for the volume in question (in
 the volumes table) in the same state as it was before we created a
VMware
 snapshot, which I believe is the intent anyways.

 Thoughts on that?


 On Mon, Jan 6, 2014 at 4:10 PM, Mike Tutkowski 
 mike.tutkow...@solidfire.com wrote:

 Hi,

 I was wondering about the following code in VmwareStorageManagerImpl.
It
 is in the CreateVMSnapshotAnswer execute(VmwareHostService hostService,
 CreateVMSnapshotCommand cmd) method.

 The part I wonder about is in populating the mapNewDisk map. For disks
 like the following:

 i-2-9-VM/fksjfaklsjdgflajs.vmdk, the key for the map ends up being i-2.

 When we call this:

 String baseName = extractSnapshotBaseFileName(volumeTO.getPath());

 It uses a path such as the following:

 fksjfaklsjdgflajs

 There is no i-2-9-VM/ preceding the name, so the key we search on ends
up
 being the following:

 fksjfaklsjdgflajs

 This leads to a newPath being equal to null.

 As it turns out, I believe null is actually correct, but - if that's
the
 case - why do we have all this logic if - in the end - we are just
going to
 assign null to newPath in every case when creating a VM snapshot for
 VMware? As it turns out, null is later interpreted to mean, don't
replace
 the path field of this volume in the volumes table, which is, I think,
 what we want.

 Thanks!

 VirtualDisk[] vdisks = vmMo.getAllDiskDevice();

  for (int i = 0; i < vdisks.length; i++) {

 List<Pair<String, ManagedObjectReference>>
vmdkFiles
 = vmMo.getDiskDatastorePathChain(vdisks[i], false);

 for (Pair<String, ManagedObjectReference> fileItem :
 vmdkFiles) {

 String vmdkName = fileItem.first().split(" ")[1];

 if (vmdkName.endsWith(".vmdk")) {

 vmdkName = vmdkName.substring(0,
 vmdkName.length() - ".vmdk".length());

 }

 String baseName =
 extractSnapshotBaseFileName(vmdkName);

 mapNewDisk.put(baseName, vmdkName);

 }

 }

 for (VolumeObjectTO volumeTO : volumeTOs) {

 String baseName =
 extractSnapshotBaseFileName(volumeTO.getPath());

 String newPath = mapNewDisk.get(baseName);

 // get volume's chain size for this VM snapshot,
 exclude current volume vdisk

 DataStoreTO store = volumeTO.getDataStore();

 long size =
 getVMSnapshotChainSize(context, hyperHost, baseName + "*.vmdk",

 store.getUuid(), newPath);


 if (volumeTO.getVolumeType() == Volume.Type.ROOT) {

 // add memory snapshot size

 size = size +
 getVMSnapshotChainSize(context, hyperHost, cmd.getVmName() + "*.vmsn",
 store.getUuid(), null);

 }


 volumeTO.setSize(size);

 volumeTO.setPath(newPath);

 }
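[Editorial note: the null lookup described above can be reproduced in a self-contained sketch. `extractSnapshotBaseFileName` is not shown in this thread, so the version here, which strips a VMware-style "-000001" delta suffix, is an assumption made purely for illustration; the essential point is that the map key keeps the VM folder prefix while the volume's path column does not.]

```java
import java.util.HashMap;
import java.util.Map;

public class MapNewDiskSketch {

    // Hypothetical stand-in: drop a trailing "-NNNNNN" snapshot-delta suffix.
    static String extractSnapshotBaseFileName(String name) {
        return name.replaceAll("-\\d{6}$", "");
    }

    static String lookupNewPath() {
        Map<String, String> mapNewDisk = new HashMap<String, String>();

        // Datastore path as vCenter reports it.
        String fileName = "[ds1] i-2-9-VM/disk-000001.vmdk";
        String vmdkName = fileName.split(" ")[1]; // "i-2-9-VM/disk-000001.vmdk"
        if (vmdkName.endsWith(".vmdk")) {
            vmdkName = vmdkName.substring(0, vmdkName.length() - ".vmdk".length());
        }
        // The stored key keeps the "i-2-9-VM/" folder prefix.
        mapNewDisk.put(extractSnapshotBaseFileName(vmdkName), vmdkName);

        // The volume's path column holds only the bare base name, so the
        // lookup key has no folder prefix and get() returns null.
        String volumePath = "disk";
        return mapNewDisk.get(extractSnapshotBaseFileName(volumePath));
    }

    public static void main(String[] args) {
        System.out.println(lookupNewPath()); // null
    }
}
```

The null result then flows into volumeTO.setPath(newPath), matching the behavior described in this mail.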

 --
 *Mike Tutkowski*
  *Senior CloudStack Developer, SolidFire Inc.*
 e: mike.tutkow...@solidfire.com
 o: 303.746.7302
 Advancing the way the world uses the

Re: VMware snapshot question

2014-01-06 Thread Mike Tutkowski
Thanks for the info, Kelven.

I believe we have a serious bug then as null is being assigned to newPath
when a VMware snapshot is being taken (this is in 4.3, by the way).

I was trying to fix an issue with VMware snapshots and managed storage and
happened upon this.

If you have a moment, you might want to set a breakpoint and step through
and see what I mean first hand.

I'm looking into it, as well.

Thanks!


On Mon, Jan 6, 2014 at 10:02 PM, Kelven Yang kelven.y...@citrix.com wrote:



 On 1/6/14, 5:33 PM, Mike Tutkowski mike.tutkow...@solidfire.com wrote:

 Actually, the more I look at this code, the more I think perhaps VMware
 snapshots are broken because the newPath field should probably not be
 assigned null after creating a new VMware snapshot


 (I'm thinking the intent
 is to replace the other path with a new path that refers to the delta file
 that was just created).

 Yes, your guess is correct, the intent to update with a new path is to
 reflect the name change after a VM snapshot is taken. When VM snapshot is
 involved, one CloudStack volume may have more than one disks be related
 with it. So the information in path field only points to the top of the
 disk chain, and it is not guaranteed that this information is in sync with
 vCenter since there may exist out-of-band changes from vCenter (by taking
 VM snapshot in vCenter).

 To gracefully work with vCenter, for existing disks that are attached to a
 VM, CloudStack only uses the information stored in path field in volume
 table as a search basis to connect the record with the real disk chain in
 the storage, as soon as it has located the connection, it actually uses
 the most recent information from vCenter datastore to setup the disk
 device. In addition, CloudStack also updates the full disk-chain
 information to a field called "chainInfo". The full chain-info can be used
 for recovery/copy-out purpose

 Kelven


 
 Does anyone know who worked on VMware snapshots? I'd love to ask these
 questions to him soon, as we are approaching the end of 4.3.
 
 Thanks!
 
 
 On Mon, Jan 6, 2014 at 4:35 PM, Mike Tutkowski
 mike.tutkow...@solidfire.com
  wrote:
 
  In short, I believe we can remove mapNewDisk and just assign null to
  newPath. This will keep the existing path for the volume in question (in
  the volumes table) in the same state as it was before we created a
 VMware
  snapshot, which I believe is the intent anyways.
 
  Thoughts on that?
 
 
  On Mon, Jan 6, 2014 at 4:10 PM, Mike Tutkowski 
  mike.tutkow...@solidfire.com wrote:
 
  Hi,
 
  I was wondering about the following code in VmwareStorageManagerImpl.
 It
  is in the CreateVMSnapshotAnswer execute(VmwareHostService hostService,
  CreateVMSnapshotCommand cmd) method.
 
  The part I wonder about is in populating the mapNewDisk map. For disks
  like the following:
 
  i-2-9-VM/fksjfaklsjdgflajs.vmdk, the key for the map ends up being i-2.
 
  When we call this:
 
  String baseName = extractSnapshotBaseFileName(volumeTO.getPath());
 
  It uses a path such as the following:
 
  fksjfaklsjdgflajs
 
  There is no i-2-9-VM/ preceding the name, so the key we search on ends
 up
  being the following:
 
  fksjfaklsjdgflajs
 
  This leads to a newPath being equal to null.
 
  As it turns out, I believe null is actually correct, but - if that's
 the
  case - why do we have all this logic if - in the end - we are just
 going to
  assign null to newPath in every case when creating a VM snapshot for
  VMware? As it turns out, null is later interpreted to mean, don't
 replace
  the path field of this volume in the volumes table, which is, I think,
  what we want.
 
  Thanks!
 
  VirtualDisk[] vdisks = vmMo.getAllDiskDevice();
 
   for (int i = 0; i < vdisks.length; i++) {
 
 List<Pair<String, ManagedObjectReference>>
 vmdkFiles
  = vmMo.getDiskDatastorePathChain(vdisks[i], false);
 
 for (Pair<String, ManagedObjectReference> fileItem :
  vmdkFiles) {
 
 String vmdkName = fileItem.first().split(" ")[1];
 
 if (vmdkName.endsWith(".vmdk")) {
 
  vmdkName = vmdkName.substring(0,
 vmdkName.length() - ".vmdk".length());
 
  }
 
  String baseName =
  extractSnapshotBaseFileName(vmdkName);
 
  mapNewDisk.put(baseName, vmdkName);
 
  }
 
  }
 
  for (VolumeObjectTO volumeTO : volumeTOs) {
 
  String baseName =
  extractSnapshotBaseFileName(volumeTO.getPath());
 
  String newPath = mapNewDisk.get(baseName);
 
  // get volume's chain size for this VM snapshot,
  exclude current volume vdisk
 
  DataStoreTO store = volumeTO.getDataStore();
 
  long size =
 getVMSnapshotChainSize(context, hyperHost, baseName + "*.vmdk",
 
  

Re: Research areas in cloudstack

2014-01-06 Thread jitendra shelar
Thanks Sebgoa, Manas & Iganzio for your inputs.

@Sebgoa: I am into both java and Sys admin. In past, I have worked on IBM
Smart Cloud and Open Stack.

Have taken more interest in Sys admin.

Will think of proceeding with some of the use cases.

Thanks,
Jitendra



On Mon, Jan 6, 2014 at 4:33 AM, jitendra shelar 
jitendra.shelar...@gmail.com wrote:

 Hi All,

 I am pursuing with my MS at BITs, Pilani, India.
 I am planning of doing my final sem project in cloudstack.

 Can somebody please suggest me some research areas in cloudstack?

 Thanks,
 Jitendra




Re: VMware snapshot question

2014-01-06 Thread Mike Tutkowski
Hi Kelven,

To give you an idea visually what I am referring to, please check out this
screen capture:

http://i.imgur.com/ma3FE9o.png

The key is i-2 (part of the folder for the VMDK file).

The value contains the folder the VMDK file is in. Since the path column
for VMware volumes in the DB doesn't contain the folder the VMDK file is
in, I think this may be incorrect, as well.

I also noticed that we later try to retrieve from the map using
volumeTO.getPath() (ignore the getPath() method enclosing
volumeTO.getPath() in the screen shot, as this is related to new code... in
the standard case, the value of volumeTO.getPath() is just returned from
the getPath() method).

In the first line of code visible in the screen capture, why do we go to
the trouble of doing this:

String baseName = extractSnapshotBaseFileName(vmdkName);

It seems like this would have worked:

String baseName = extractSnapshotBaseFileName(volumeTO.getPath());

Or am I missing something there?

Thanks!!


On Mon, Jan 6, 2014 at 10:13 PM, Mike Tutkowski 
mike.tutkow...@solidfire.com wrote:

 Thanks for the info, Kelven.

 I believe we have a serious bug then as null is being assigned to newPath
 when a VMware snapshot is being taken (this is in 4.3, by the way).

 I was trying to fix an issue with VMware snapshots and managed storage and
 happened upon this.

 If you have a moment, you might want to set a breakpoint and step through
 and see what I mean first hand.

 I'm looking into it, as well.

 Thanks!


 On Mon, Jan 6, 2014 at 10:02 PM, Kelven Yang kelven.y...@citrix.comwrote:



 On 1/6/14, 5:33 PM, Mike Tutkowski mike.tutkow...@solidfire.com
 wrote:

 Actually, the more I look at this code, the more I think perhaps VMware
 snapshots are broken because the newPath field should probably not be
 assigned null after creating a new VMware snapshot


 (I'm thinking the intent
 is to replace the other path with a new path that refers to the delta
 file
 that was just created).

 Yes, your guess is correct, the intent to update with a new path is to
 reflect the name change after a VM snapshot is taken. When VM snapshot is
 involved, one CloudStack volume may have more than one disks be related
 with it. So the information in path field only points to the top of the
 disk chain, and it is not guaranteed that this information is in sync with
 vCenter since there may exist out-of-band changes from vCenter (by taking
 VM snapshot in vCenter).

 To gracefully work with vCenter, for existing disks that are attached to a
 VM, CloudStack only uses the information stored in path field in volume
 table as a search basis to connect the record with the real disk chain in
 the storage, as soon as it has located the connection, it actually uses
 the most recent information from vCenter datastore to setup the disk
 device. In addition, CloudStack also updates the full disk-chain
 information to a field called "chainInfo". The full chain-info can be
 for recovery/copy-out purpose

 Kelven


 
 Does anyone know who worked on VMware snapshots? I'd love to ask these
 questions to him soon, as we are approaching the end of 4.3.
 
 Thanks!
 
 
 On Mon, Jan 6, 2014 at 4:35 PM, Mike Tutkowski
 mike.tutkow...@solidfire.com
  wrote:
 
  In short, I believe we can remove mapNewDisk and just assign null to
  newPath. This will keep the existing path for the volume in question
 (in
  the volumes table) in the same state as it was before we created a
 VMware
  snapshot, which I believe is the intent anyways.
 
  Thoughts on that?
 
 
  On Mon, Jan 6, 2014 at 4:10 PM, Mike Tutkowski 
  mike.tutkow...@solidfire.com wrote:
 
  Hi,
 
  I was wondering about the following code in VmwareStorageManagerImpl.
 It
  is in the CreateVMSnapshotAnswer execute(VmwareHostService
 hostService,
  CreateVMSnapshotCommand cmd) method.
 
  The part I wonder about is in populating the mapNewDisk map. For disks
  like the following:
 
  i-2-9-VM/fksjfaklsjdgflajs.vmdk, the key for the map ends up being
 i-2.
 
  When we call this:
 
  String baseName = extractSnapshotBaseFileName(volumeTO.getPath());
 
  It uses a path such as the following:
 
  fksjfaklsjdgflajs
 
  There is no i-2-9-VM/ preceding the name, so the key we search on ends
 up
  being the following:
 
  fksjfaklsjdgflajs
 
  This leads to a newPath being equal to null.
 
  As it turns out, I believe null is actually correct, but - if that's
 the
  case - why do we have all this logic if - in the end - we are just
 going to
  assign null to newPath in every case when creating a VM snapshot for
  VMware? As it turns out, null is later interpreted to mean, don't
 replace
  the path field of this volume in the volumes table, which is, I
 think,
  what we want.
 
  Thanks!
 
  VirtualDisk[] vdisks = vmMo.getAllDiskDevice();
 
  for (int i = 0; i < vdisks.length; i++) {
 
 List<Pair<String, ManagedObjectReference>>
 vmdkFiles
  = vmMo.getDiskDatastorePathChain(vdisks[i], 

Re: VMware snapshot question

2014-01-06 Thread Mike Tutkowski
Ignore my question about coming up with a baseName.

I see now that volumeTO is not available in the first for loop.

I do think the key and value we have in the map, though, is incorrect.

What do you think?


On Mon, Jan 6, 2014 at 10:43 PM, Mike Tutkowski 
mike.tutkow...@solidfire.com wrote:

 Hi Kelven,

 To give you an idea visually what I am referring to, please check out this
 screen capture:

 http://i.imgur.com/ma3FE9o.png

 The key is i-2 (part of the folder for the VMDK file).

 The value contains the folder the VMDK file is in. Since the path column
 for VMware volumes in the DB doesn't contain the folder the VMDK file is
 in, I think this may be incorrect, as well.

 I also noticed that we later try to retrieve from the map using
 volumeTO.getPath() (ignore the getPath() method that enclosing
 volumeTO.getPath() in the screen shot as this is related to new code...in
 the standard case, the value of volumeTO.getPath() is just returned from
 the getPath() method).

 In the first line of code visible in the screen capture, why do we go to
 the trouble of doing this:

 String baseName = extractSnapshotBaseFileName(vmdkName);

 It seems like this would have worked:

 String baseName = extractSnapshotBaseFileName(volumeTO.getPath());

 Or am I missing something there?

 Thanks!!


 On Mon, Jan 6, 2014 at 10:13 PM, Mike Tutkowski 
 mike.tutkow...@solidfire.com wrote:

 Thanks for the info, Kelven.

 I believe we have a serious bug then as null is being assigned to newPath
 when a VMware snapshot is being taken (this is in 4.3, by the way).

 I was trying to fix an issue with VMware snapshots and managed storage
 and happened upon this.

 If you have a moment, you might want to set a breakpoint and step through
 and see what I mean first hand.

 I'm looking into it, as well.

 Thanks!


 On Mon, Jan 6, 2014 at 10:02 PM, Kelven Yang kelven.y...@citrix.comwrote:



 On 1/6/14, 5:33 PM, Mike Tutkowski mike.tutkow...@solidfire.com
 wrote:

 Actually, the more I look at this code, the more I think perhaps VMware
 snapshots are broken because the newPath field should probably not be
 assigned null after creating a new VMware snapshot


 (I'm thinking the intent
 is to replace the other path with a new path that refers to the delta
 file
 that was just created).

 Yes, your guess is correct, the intent to update with a new path is to
 reflect the name change after a VM snapshot is taken. When VM snapshot is
 involved, one CloudStack volume may have more than one disks be related
 with it. So the information in path field only points to the top of the
 disk chain, and it is not guaranteed that this information is in sync
 with
 vCenter since there may exist out-of-band changes from vCenter (by taking
 VM snapshot in vCenter).

 To gracefully work with vCenter, for existing disks that are attached to
 a
 VM, CloudStack only uses the information stored in path field in volume
 table as a search basis to connect the record with the real disk chain in
 the storage, as soon as it has located the connection, it actually uses
 the most recent information from vCenter datastore to setup the disk
 device. In addition, CloudStack also updates the full disk-chain
 information to a field called "chainInfo". The full chain-info can be
 used
 for recovery/copy-out purpose

 Kelven


 
 Does anyone know who worked on VMware snapshots? I'd love to ask these
 questions to him soon, as we are approaching the end of 4.3.
 
 Thanks!
 
 
 On Mon, Jan 6, 2014 at 4:35 PM, Mike Tutkowski
 mike.tutkow...@solidfire.com
  wrote:
 
  In short, I believe we can remove mapNewDisk and just assign null to
  newPath. This will keep the existing path for the volume in question
 (in
  the volumes table) in the same state as it was before we created a
 VMware
  snapshot, which I believe is the intent anyways.
 
  Thoughts on that?
 
 
  On Mon, Jan 6, 2014 at 4:10 PM, Mike Tutkowski 
  mike.tutkow...@solidfire.com wrote:
 
  Hi,
 
  I was wondering about the following code in VmwareStorageManagerImpl.
 It
  is in the CreateVMSnapshotAnswer execute(VmwareHostService
 hostService,
  CreateVMSnapshotCommand cmd) method.
 
  The part I wonder about is in populating the mapNewDisk map. For
 disks
  like the following:
 
  i-2-9-VM/fksjfaklsjdgflajs.vmdk, the key for the map ends up being
 i-2.
 
  When we call this:
 
  String baseName = extractSnapshotBaseFileName(volumeTO.getPath());
 
  It uses a path such as the following:
 
  fksjfaklsjdgflajs
 
  There is no i-2-9-VM/ preceding the name, so the key we search on
 ends
 up
  being the following:
 
  fksjfaklsjdgflajs
 
  This leads to a newPath being equal to null.
 
  As it turns out, I believe null is actually correct, but - if that's
 the
  case - why do we have all this logic if - in the end - we are just
 going to
  assign null to newPath in every case when creating a VM snapshot for
  VMware? As it turns out, null is later interpreted to mean, don't
 replace
  the path field of this volume 

RE: 4.3 : Developer Profile, tools module commented

2014-01-06 Thread Santhosh Edukulla
1. Not sure the build is taking much time because of the tools module. tools
also builds apidoc, which could be intentional (maybe) so that it is included
as part of the developer profile, and it has subsequent references for Marvin.

2. We can bring it back for now under the developer profile. Especially since
we have an RC lined up now, and current usage may be referenced in other
places like the docs/wiki that describe the current way of building and using
tools. 'impatient' seems to be a bit of a misnomer to use.

3. There was a reference to speed up build time @ 
http://markmail.org/message/3ch4lgaq4yjaop7e?q=darren+shepherd+mvn

Regards,
Santhosh

From: Frank Zhang [frank.zh...@citrix.com]
Sent: Monday, January 06, 2014 6:17 PM
To: dev@cloudstack.apache.org
Subject: RE: 4.3 : Developer Profile, tools module commented

Sorry, this was by mistake.
The maven build takes too long, so I commented tools out, as it's rarely used
by developer builds. Later on I found there is a profile called 'impatient'
which does exactly the same thing.
Before bringing it back, I wonder if it's right to put 'tools' in the
developer profile at all? I am sure 90% of developers won't use it. Why not
move it to a profile only used by the RPM build?

 -Original Message-
 From: Santhosh Edukulla [mailto:santhosh.eduku...@citrix.com]
 Sent: Tuesday, December 31, 2013 7:25 AM
 To: dev@cloudstack.apache.org
 Subject: 4.3 : Developer Profile, tools module commented

 Team,

 For branch 4,3, the below commit appears to have commented the tools
 module under developer profile. Any specific reason?

 commit fb1f3f0865c254abebfa5a43f66cef116fe36165
 Author: Frank.Zhang frank.zh...@citrix.com
 Date:   Mon Oct 7 18:03:12 2013 -0700

 Add missing Baremetal security_group_agent java part
 Change security_group_agent python side in line with default security
 group rules change in 4.2

 Conflicts:


 plugins/hypervisors/baremetal/src/com/cloud/baremetal/networkservice/Bare
 MetalResourceBase.java

 diff --git a/pom.xml b/pom.xml
 index 2cee084..31946d8 100644
 --- a/pom.xml
 +++ b/pom.xml
 @@ -747,7 +747,9 @@
   </properties>
   <modules>
     <module>developer</module>
+<!--
     <module>tools</module>
+-->
   </modules>
 </profile>
 <profile>

 Thanks!
 Santhosh

Re: [Proposal] Switch to Java 7

2014-01-06 Thread Wido den Hollander

Just to repeat what has been discussed some time ago.

All the current Long Term Support distributions have Java 7 available.

RHEL6, RHEL7, Ubuntu 12.04, Ubuntu 14.04 (due in April) will all have 
Java 7 available.


I don't see a problem in switching to Java 7 with CloudStack 4.4 or 4.5

Wido

On 01/07/2014 12:18 AM, Kelven Yang wrote:

Java 7 has been around for some time now. I strongly suggest CloudStack adopt
Java 7 as early as possible. The reason I feel like raising the issue comes
from some practice with the new DB transaction pattern, as the following
example shows. The new Transaction pattern uses anonymous classes to beautify
the code structure, but in the meantime it introduces a couple of runtime
costs:

   1.  An anonymous class introduces a “captured context”; information exchange
between the containing context and the anonymous class implementation context
has to go through either a mutable passed-in parameter or a returned result
object. In the following example, without changing the basic Transaction
framework, I have to exchange through a returned result with an un-typed
array. This has a few implications at run time: basically, each call of the
method generates two objects on the heap. Depending on how frequently the
involved method is called, it may introduce quite a burden on the Java GC
process.
   2.  The captured context also means that more hidden classes are generated:
since each appearance of an anonymous class implementation has a distinct copy
of its own as a hidden class, it will generally increase our permanent heap
usage, which is already pretty huge with the current CloudStack code base.

Java 7 has language-level support (try-with-resources) that addresses, more 
cheaply, the issues our current DB Transaction code pattern is trying to solve: 
http://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html.
   So, time to adopt Java 7?

 public Outcome<VirtualMachine> startVmThroughJobQueue(final String vmUuid,
         final Map<VirtualMachineProfile.Param, Object> params,
         final DeploymentPlan planToDeploy) {

     final CallContext context = CallContext.current();
     final User callingUser = context.getCallingUser();
     final Account callingAccount = context.getCallingAccount();

     final VMInstanceVO vm = _vmDao.findByUuid(vmUuid);

     Object[] result = Transaction.execute(new TransactionCallback<Object[]>() {
         @Override
         public Object[] doInTransaction(TransactionStatus status) {
             VmWorkJobVO workJob = null;

             _vmDao.lockRow(vm.getId(), true);
             List<VmWorkJobVO> pendingWorkJobs =
                 _workJobDao.listPendingWorkJobs(VirtualMachine.Type.Instance,
                     vm.getId(), VmWorkStart.class.getName());

             if (pendingWorkJobs.size() > 0) {
                 assert (pendingWorkJobs.size() == 1);
                 workJob = pendingWorkJobs.get(0);
             } else {
                 workJob = new VmWorkJobVO(context.getContextId());

                 workJob.setDispatcher(VmWorkConstants.VM_WORK_JOB_DISPATCHER);
                 workJob.setCmd(VmWorkStart.class.getName());

                 workJob.setAccountId(callingAccount.getId());
                 workJob.setUserId(callingUser.getId());
                 workJob.setStep(VmWorkJobVO.Step.Starting);
                 workJob.setVmType(vm.getType());
                 workJob.setVmInstanceId(vm.getId());
                 workJob.setRelated(AsyncJobExecutionContext.getOriginJobContextId());

                 // save work context info (there are some duplications)
                 VmWorkStart workInfo = new VmWorkStart(callingUser.getId(),
                     callingAccount.getId(), vm.getId(),
                     VirtualMachineManagerImpl.VM_WORK_JOB_HANDLER);
                 workInfo.setPlan(planToDeploy);
                 workInfo.setParams(params);
                 workJob.setCmdInfo(VmWorkSerializer.serialize(workInfo));

                 _jobMgr.submitAsyncJob(workJob,
                     VmWorkConstants.VM_WORK_QUEUE, vm.getId());
             }

             return new Object[] {workJob, new Long(workJob.getId())};
         }
     });

     final long jobId = (Long)result[1];
     AsyncJobExecutionContext.getCurrentExecutionContext().joinJob(jobId);

     return new VmStateSyncOutcome((VmWorkJobVO)result[0],
         VirtualMachine.PowerState.PowerOn, vm.getId(), null);
 }
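To make the contrast concrete, here is a minimal, self-contained 
try-with-resources sketch of the same scoped-transaction idea. Everything in 
it (Tx, startWork, the log list) is a toy stand-in, not CloudStack's actual 
Transaction API; it only shows how Java 7 scopes the begin/close lifecycle 
without an anonymous callback class or an untyped Object[] result round-trip.

```java
import java.util.ArrayList;
import java.util.List;

public class TxDemo {
    static final List<String> log = new ArrayList<String>();

    // Toy transaction handle: close() runs automatically when the
    // try-with-resources block exits, even on exceptions.
    static class Tx implements AutoCloseable {
        Tx() { log.add("begin"); }
        void commit() { log.add("commit"); }
        @Override
        public void close() { log.add("close"); }
    }

    static long startWork() {
        try (Tx tx = new Tx()) {
            // Locals stay in scope: no captured context, no Object[]
            // needed to get the job id back out of a callback.
            long jobId = 42L; // stand-in for workJob.getId()
            tx.commit();
            return jobId;
        } // log is now [begin, commit, close]
    }

    public static void main(String[] args) {
        System.out.println(startWork()); // prints 42
        System.out.println(log);
    }
}
```

With this shape the compiler generates no extra hidden class per call site, 
and cleanup ordering is guaranteed by the language rather than by the 
Transaction framework.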


Kelven



Re: VMware snapshot question

2014-01-06 Thread Mike Tutkowski
Hi Kelven,

I've been playing around with some code to fix this VMware-snapshot issue.
Probably won't have it done until tomorrow, but I wanted to ask you about
this code:

for (int i = 0; i < vdisks.length; i++) {

    List<Pair<String, ManagedObjectReference>> vmdkFiles =
        vmMo.getDiskDatastorePathChain(vdisks[i], false);

    for (Pair<String, ManagedObjectReference> fileItem : vmdkFiles) {


Can you tell me why we iterate through all of the VMDK files of a virtual
disk? It seems like only the last one counts. Is that correct? Am I to
assume the last one we iterate over is the most recent snapshot (the
snapshot we just took)?

Thanks!


On Mon, Jan 6, 2014 at 10:47 PM, Mike Tutkowski 
mike.tutkow...@solidfire.com wrote:

 Ignore my question about coming up with a baseName.

 I see now that volumeTO is not available in the first for loop.

 I do think the key and value we have in the map, though, is incorrect.

 What do you think?


 On Mon, Jan 6, 2014 at 10:43 PM, Mike Tutkowski 
 mike.tutkow...@solidfire.com wrote:

 Hi Kelven,

 To give you an idea visually what I am referring to, please check out
 this screen capture:

 http://i.imgur.com/ma3FE9o.png

 The key is i-2 (part of the folder for the VMDK file).

 The value contains the folder the VMDK file is in. Since the path column
 for VMware volumes in the DB doesn't contain the folder the VMDK file is
 in, I think this may be incorrect, as well.

 I also noticed that we later try to retrieve from the map using
 volumeTO.getPath() (ignore the getPath() method enclosing
 volumeTO.getPath() in the screen shot, as this is related to new code; in
 the standard case, the value of volumeTO.getPath() is just returned from
 the getPath() method).

 In the first line of code visible in the screen capture, why do we go to
 the trouble of doing this:

 String baseName = extractSnapshotBaseFileName(vmdkName);

 It seems like this would have worked:

 String baseName = extractSnapshotBaseFileName(volumeTO.getPath());

 Or am I missing something there?

 Thanks!!
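 For what it's worth, helpers of this kind usually just strip VMware's
 numeric delta suffix from the disk name. A hypothetical sketch of that
 behavior (toy class and method names, not the actual CloudStack
 extractSnapshotBaseFileName implementation):

```java
public class SnapshotNames {
    // Hypothetical: strip an optional ".vmdk" extension and a trailing
    // VMware delta suffix like "-000002" to recover the base disk name.
    static String extractBaseFileName(String vmdkName) {
        String name = vmdkName.endsWith(".vmdk")
            ? vmdkName.substring(0, vmdkName.length() - ".vmdk".length())
            : vmdkName;
        return name.replaceFirst("-\\d{6}$", "");
    }

    public static void main(String[] args) {
        // Both a delta disk and its base resolve to the same base name.
        System.out.println(extractBaseFileName("i-2-5-VM-000001.vmdk"));
        System.out.println(extractBaseFileName("i-2-5-VM.vmdk"));
    }
}
```

 If the real helper behaves like this, passing either the vmdkName or the
 stored path should indeed yield the same base name, which is the question
 above.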


 On Mon, Jan 6, 2014 at 10:13 PM, Mike Tutkowski 
 mike.tutkow...@solidfire.com wrote:

 Thanks for the info, Kelven.

 I believe we have a serious bug then as null is being assigned to
 newPath when a VMware snapshot is being taken (this is in 4.3, by the way).

 I was trying to fix an issue with VMware snapshots and managed storage
 and happened upon this.

 If you have a moment, you might want to set a breakpoint and step
 through and see what I mean first hand.

 I'm looking into it, as well.

 Thanks!


 On Mon, Jan 6, 2014 at 10:02 PM, Kelven Yang kelven.y...@citrix.comwrote:



 On 1/6/14, 5:33 PM, Mike Tutkowski mike.tutkow...@solidfire.com
 wrote:

 Actually, the more I look at this code, the more I think VMware
 snapshots may be broken, because the newPath field should probably not be
 assigned null after creating a new VMware snapshot (I'm thinking the intent
 is to replace the existing path with a new path that refers to the delta
 file that was just created).

 Yes, your guess is correct: the intent of updating with a new path is to
 reflect the name change after a VM snapshot is taken. When VM snapshots are
 involved, one CloudStack volume may have more than one disk related to it.
 So the information in the path field only points to the top of the disk
 chain, and it is not guaranteed to be in sync with vCenter, since there may
 be out-of-band changes from vCenter (e.g. a VM snapshot taken directly in
 vCenter).

 To work gracefully with vCenter, for existing disks that are attached to a
 VM, CloudStack only uses the information stored in the path field of the
 volume table as a search basis to connect the record with the real disk
 chain in storage. As soon as it has located the connection, it uses the
 most recent information from the vCenter datastore to set up the disk
 device. In addition, CloudStack updates the full disk-chain information to
 a field called "chainInfo". The full chain info can be used for
 recovery/copy-out purposes.

 Kelven


 
 Does anyone know who worked on VMware snapshots? I'd love to ask these
 questions to him soon, as we are approaching the end of 4.3.
 
 Thanks!
 
 
 On Mon, Jan 6, 2014 at 4:35 PM, Mike Tutkowski
 mike.tutkow...@solidfire.com
  wrote:
 
  In short, I believe we can remove mapNewDisk and just assign null to
  newPath. This will keep the existing path for the volume in question
 (in
  the volumes table) in the same state as it was before we created a
 VMware
  snapshot, which I believe is the intent anyways.
 
  Thoughts on that?
 
 
  On Mon, Jan 6, 2014 at 4:10 PM, Mike Tutkowski 
  mike.tutkow...@solidfire.com wrote:
 
  Hi,
 
  I was wondering about the following code in
 VmwareStorageManagerImpl.
 It
  is in the CreateVMSnapshotAnswer execute(VmwareHostService
 hostService,
  CreateVMSnapshotCommand cmd) method.
 
  The part I wonder about is in populating the 

Re: VMware snapshot question

2014-01-06 Thread Mike Tutkowski
If it's true that only the last iteration counts, couldn't we just grab
the last item in this list?:

List<Pair<String, ManagedObjectReference>> vmdkFiles =
    vmMo.getDiskDatastorePathChain(vdisks[i], false);
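If that ordering assumption holds (i.e. getDiskDatastorePathChain returns the
chain base-first, with the most recent delta last, which is worth verifying
against the method itself), the inner loop could reduce to taking the tail of
the list. A toy sketch, with plain strings standing in for
Pair<String, ManagedObjectReference>:

```java
import java.util.Arrays;
import java.util.List;

public class ChainTip {
    // Hypothetical: return the last entry of a disk chain, assumed to be
    // the most recent delta (the snapshot just taken).
    static String tipOf(List<String> vmdkChain) {
        if (vmdkChain.isEmpty()) {
            throw new IllegalArgumentException("empty disk chain");
        }
        return vmdkChain.get(vmdkChain.size() - 1);
    }

    public static void main(String[] args) {
        List<String> chain = Arrays.asList(
            "[ds1] i-2-5-VM/base.vmdk",
            "[ds1] i-2-5-VM/base-000001.vmdk"); // most recent delta last
        System.out.println(tipOf(chain));
    }
}
```

If the chain order is not guaranteed, iterating as the current code does is
the safer choice.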

