RE: Regarding ssvm-check script

2013-06-14 Thread Rajesh Battala
Nitin, 
Yes, the ssvm under console proxy should be removed. It make more sense for the 
script to be coming from secondary-storage folder path. 
If you are removing the script under consoleproxy, make the change in the 
systemvm-description.xml to pick the ssvm script from the secondary storage 
scripts. 
If you don't make that change systemvm.iso might not have the ssvm script.

Thanks
Rajesh Battala

> -Original Message-
> From: Nitin Mehta [mailto:nitin.me...@citrix.com]
> Sent: Friday, June 14, 2013 12:09 PM
> To: dev@cloudstack.apache.org
> Subject: Re: Regarding ssvm-check script
> 
> Hi Rajesh,
> Please find my comments inline
> 
> On 13/06/13 10:40 PM, "Rajesh Battala"  wrote:
> 
> >Hi All,
> >While fixing an issue ( https://reviews.apache.org/r/11862/ )in
> >ssvm-check script I figured out some issues.
> >
> >1.There are two ssvm_check scripts(duplicates).
> >
> >./services/console-proxy/server/scripts/ssvm-check.sh
> >./services/secondary-storage/scripts/ssvm-check.sh
> >
> >When building the code, these scripts will go to systemvm.zip,
> >systemvm.zip will be packaged into systemvm.iso.
> >
> >systemvm-descriptor.xml will define what all the scripts should package.
> >As per the descriptor xml,  the ssvm-check script under console-proxy
> >is getting into systemvm.zip.
> 
> Shouldn't it be the other way round ? I mean the ssvm script under
> secondary-storage should have come in ?
> 
> >
> >I had verified the ssvm-check script with the fix under console-proxy.
> >The systemvm.zip is getting update properly and making into systemvm.iso.
> >And ssvm is getting the right script now.
> >
> >Changes made in script under
> >./services/secondary-storage/scripts/ssvm-check.sh is not getting into
> >systemvm.iso
> >
> >I feel the script is redundant and creating confusion.
> >Can we remove the script in one location?
> 
> I would remove it from console-proxy for the sake of consistency and make
> sure the one under secondary-storage gets in. Also while doing so hopefully
> the final location (folder structure) of the script is not disturbed in the 
> ssvm
> 
> >
> >Thanks
> >Rajesh Battala



RE: Using "In Progress" status in JIRA

2013-06-14 Thread Koushik Das

> -Original Message-
> From: Animesh Chaturvedi [mailto:animesh.chaturv...@citrix.com]
> Sent: Friday, June 14, 2013 6:32 AM
> To: dev@cloudstack.apache.org
> Subject: Using "In Progress" status in JIRA
> 
> Folks
> 
> It seems that we do not use "In Progress" status in JIRA as often as we
> should. Issues seem to change from "Open" to "Resolved" directly. IMHO
> marking an issue "In Progress" provides much better visibility and helps
> communicate to community that you are working on that item.
> 

Isn't assigning a bug would also mean the same? The 'In Progress' is useful 
when bugs are triaged on a daily basis, most probably towards end of a release 
cycle.

> If for whatever reason you stop working on that item and will not attend to it
> you can always mark it back as "Open".
> 

If someone has stopped working on a bug simply unassign so that anyone else can 
pick it up.

> For 4.2 I see that 40 features/ improvement tickets changed from "Open" to
> "Resolved" directly. And for Bugs "509" bugs moved from "Open" to
> "Resolved" directly.
> 

I don't see any issues in that.

> 
> Thanks
> Animesh


RE: Regarding ssvm-check script

2013-06-14 Thread Rajesh Battala
I had created a ticket for the issue 
https://issues.apache.org/jira/browse/CLOUDSTACK-3004 
Will work on it and send the patch for the review. 

Fix would be, remove the script from console proxy folder, modify the 
systemvm-description.xml  to include the ssvm_check file from the proper 
location. 

Thanks
Rajesh Battala

> -Original Message-
> From: Rajesh Battala [mailto:rajesh.batt...@citrix.com]
> Sent: Friday, June 14, 2013 1:07 PM
> To: dev@cloudstack.apache.org
> Subject: RE: Regarding ssvm-check script
> 
> Nitin,
> Yes, the ssvm under console proxy should be removed. It make more sense
> for the script to be coming from secondary-storage folder path.
> If you are removing the script under consoleproxy, make the change in the
> systemvm-description.xml to pick the ssvm script from the secondary storage
> scripts.
> If you don't make that change systemvm.iso might not have the ssvm script.
> 
> Thanks
> Rajesh Battala
> 
> > -Original Message-
> > From: Nitin Mehta [mailto:nitin.me...@citrix.com]
> > Sent: Friday, June 14, 2013 12:09 PM
> > To: dev@cloudstack.apache.org
> > Subject: Re: Regarding ssvm-check script
> >
> > Hi Rajesh,
> > Please find my comments inline
> >
> > On 13/06/13 10:40 PM, "Rajesh Battala" 
> wrote:
> >
> > >Hi All,
> > >While fixing an issue ( https://reviews.apache.org/r/11862/ )in
> > >ssvm-check script I figured out some issues.
> > >
> > >1.There are two ssvm_check scripts(duplicates).
> > >
> > >./services/console-proxy/server/scripts/ssvm-check.sh
> > >./services/secondary-storage/scripts/ssvm-check.sh
> > >
> > >When building the code, these scripts will go to systemvm.zip,
> > >systemvm.zip will be packaged into systemvm.iso.
> > >
> > >systemvm-descriptor.xml will define what all the scripts should package.
> > >As per the descriptor xml,  the ssvm-check script under console-proxy
> > >is getting into systemvm.zip.
> >
> > Shouldn't it be the other way round ? I mean the ssvm script under
> > secondary-storage should have come in ?
> >
> > >
> > >I had verified the ssvm-check script with the fix under console-proxy.
> > >The systemvm.zip is getting update properly and making into
> systemvm.iso.
> > >And ssvm is getting the right script now.
> > >
> > >Changes made in script under
> > >./services/secondary-storage/scripts/ssvm-check.sh is not getting
> > >into systemvm.iso
> > >
> > >I feel the script is redundant and creating confusion.
> > >Can we remove the script in one location?
> >
> > I would remove it from console-proxy for the sake of consistency and
> > make sure the one under secondary-storage gets in. Also while doing so
> > hopefully the final location (folder structure) of the script is not
> > disturbed in the ssvm
> >
> > >
> > >Thanks
> > >Rajesh Battala



Re: Object_Store storage refactor Meeting Notes

2013-06-14 Thread Nitin Mehta


On 13/06/13 10:08 PM, "John Burwell"  wrote:

>All,
>
>Edison Su, Min Chen, Animesh Chaturvedi, and myself met via
>teleconference on 11 June 2013 @ 1:30 PM EDT.  The goal of the meeting
>was determine the path forward for merging the object_store branch by the
>4.2 freeze date, 30 June 2013.  The conversation focused on the following
>topics:
>
>   * Staging area mechanism
>   * Removing dependencies from the Storage to the Hypervisor layer
>   * Dependencies of other patches on object_store
>   * QA's desire to start testing the patch ASAP
>
>Min, Edison, and I agreed that the staging mechanism must age out files
>and use a reference count to ensure that file in-use are not prematurely
>purged.  While we agree that some form of reservation system is required,
>Edison is concerned that it will be too conservative and create
>bottlenecks.  

Can you please elaborate on the fact that it is too conservative - we just
can't purge the files when they are still in use correct ? We can use a
combination of LRU + reference count, trying to purge the LRU files if
their reference count <= 0 as a start ?

>
>As we delved deeper into the subject of the storage to hypervisor
>dependencies and the reservation mechanism, we determined that NFS
>storage would still need to be the size of the secondary storage data
>set.  Since the hypervisor layer has not been completely fitted to the
>new storage layer, NFS would be still required for a number of
>operations.  Based on this realization, we decided to de-scope the
>staging mechanism, and leave the 4.2 object store functionality the same
>as 4.1.  Therefore, NFS will remain the secondary storage of record, and
>object storage will serve as backup/multi-zone sync.

I am not sure how we can comment its going to be the same as 4.1 - is it
from the end user perspective ? The internal semantics and their flow have
changed. This needs to be elaborated and properly documented. Also I am
not sure if the feature is supported on the upgrade path or is it ? Need
more documentation here.


>In 4.3, we will fit the hypervisor layer for the new storage layer which
>will allow object stores to server as secondary storage.  This work will
>include removing the storage to hypervisor dependencies.  For 4.2, Edison
>and Min have implemented the critical foundation necessary to establish
>our next generation storage layer.  There simply was not enough time in
>this development cycle to make these changes and fit the hypervisor layer.
>
>Due to the size of the patch, Animesh voiced QA's concerned regarding
>test scope and impact.  As such, we want to get the merge completed as
>soon as possible to allow testing to begin.  We discussed breaking up the
>patch, but we could not devise a reasonable set of chunks there were both
>isolated and significantly testable.  Therefore, the patch can only be
>delivered in its current state.  We also walked through potential
>dependencies between the storage framework changes and the solidfire
>branch.  It was determined that these two merges could occur
>independently.
>
>Finally, Animesh is going to setup a meeting at Citrix's Santa Clara
>office on 26 June 2013 (the day after Collab ends) to discuss the path
>forward for 4.3 and work through a high-level design/approach to fitting
>the hypervisor layer to exploit the new storage capabilities.  Details
>will be published to the dev mailing list.
>
>Thanks,
>-John
>
>On Jun 11, 2013, at 2:08 AM, Min Chen  wrote:
>
>> It is 11th June. John is not free between 9:15am to 10am, that is why we
>> schedule it at 10:30am.
>> 
>> Thanks
>> -min
>> 
>> On 6/10/13 10:52 PM, "Nitin Mehta"  wrote:
>> 
>>> Hi Min,
>>> When you say tomorrow, what date is it 11th June or 12th ? Can the
>>>time be
>>> preponed by an hour please - its too late here ?
>>> 
>>> Thanks,
>>> -Nitin
>>> 
>>> On 11/06/13 11:06 AM, "Min Chen"  wrote:
>>> 
 Hi there,
 
 To reach consensus on some remaining NFS cache issues on object_store
 storage refactor work in a more effective manner, John, Edison and I
have
 scheduled a GoToMeeting tomorrow to discuss them over the phone, any
 interested parties are welcome to join and brainstorm. Here are
detailed
 GTM information:
 
 Meeting Time: 10:30 AM ­ 12:30 PM PST
 
 Meeting Details:
 
 1.  Please join my meeting.
 https://www1.gotomeeting.com/join/188620897
 
 2.  Use your microphone and speakers (VoIP) - a headset is
recommended.
 Or, call in using your telephone.
 
 United States: +1 (626) 521-0017
 United States (toll-free): 1 877 309 2070
 
 Access Code: 188-620-897
 Audio PIN: Shown after joining the meeting
 
 Meeting ID: 188-620-897
 
 GoToMeeting®
 Online Meetings Made Easy®
 
 Not at your computer? Click the link to join this meeting from your
 iPhone®, iPad® or Android® device via the GoToMeeting app.
 
 Thanks
 -min
>>> 
>> 

Re: committer wanted for review

2013-06-14 Thread Daan Hoogland
Thanks Hiroaki,

On Fri, Jun 14, 2013 at 3:41 AM, Hiroaki KAWAI wrote:

> I'd suggest:
> - fix the generation of double slash itself
>
Is in the patch

> - auto-fix may happen where it is really required
> - and if auto-fix happens, it should log it with
> WARN level.

Good point, I will up the level in an update.

>
>
>
> (2013/06/13 21:15), Daan Hoogland wrote:
>
>> H,
>>
>> Can someone look at Review Request #11861> org/r/11861/ > for me please?
>>
>> Thanks,
>> Daan Hoogland
>>
>>
>


Re: Object based Secondary storage.

2013-06-14 Thread Thomas O'Dowd
Edison,

I've got devcloud running along with the object_store branch and I've
finally been able to test a bit today.

I found some issues (or things that I think are bugs) and would like to
file a few issues. I know where the bug database is and I have an
account but what is the best way to file bugs against this particular
branch? I guess I can select "Future" as the version? What other way are
feature branches usually identified in issues? Perhaps in the subject?
Please let me know the preference.

Also, can you describe (or point me at a document) what the best way to
test against the object_store branch is? So far I have been doing the
following but I'm not sure it is the best?

 a) setup devcloud.
 b) stop any instances on devcloud from previous runs
  xe vm-shutdown --multiple
 c) check out and update the object_store branch.
 d) clean build as described in devcloud doc (ADIDD for short)
 e) deploydb (ADIDD)
 f) start management console (ADIDD) and wait for it.
 g) deploysvr (ADIDD) in another shell.
 h) on devcloud machine use xentop to wait for 2 vms to launch.
(I'm not sure what the nfs vm is used for here??)
 i) login on gui -> infra -> secondary and remove nfs secondary storage
 j) add s3 secondary storage (using cache of old secondary storage?)

Then rest of testing starts from here... (and also perhaps in step j)

Thanks,

Tom.
-- 
Cloudian KK - http://www.cloudian.com/get-started.html
Fancy 100TB of full featured S3 Storage?
Checkout the Cloudian® Community Edition!



Re: Hack Day at CloudStack Collaboration Conference

2013-06-14 Thread Daan Hoogland
I added 'secondary storage maintenance mode'
 as a session. I don't mind to take ideas in advance!

Daan


On Thu, Jun 13, 2013 at 10:55 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> Hi,
>
> I was wondering if we have the following documentation (below). If not, I
> was thinking it might be a good session to discuss and start in (at a high
> level) on developing such documentation.
>
> 1) Class diagrams highlighting the main classes that make up the Compute,
> Networking, and Storage components of CloudStack and how they relate to
> each other.
>
> 2) Object-interaction diagrams showing how the main instances in the system
> coordinate execution of tasks.
>
> 3) What kinds of threads are involved in the system (this will help
> developers better understand what resources are shared among threads and
> need to be locked at certain times)?
>
> Thanks!
>
>
> On Thu, Jun 13, 2013 at 12:38 PM, Joe Brockmeier  wrote:
>
> > Hey all,
> >
> > As you know, the conference is coming up in less than two weeks. The
> > first day is going to be a "hack day" using an un-conference/BarCamp
> > type structure where we ask attendees to set the agenda and have spaces
> > set aside to work on things or have more interactive sessions to hammer
> > out ideas.
> >
> > This will probably work best if we start brainstorming before the event,
> > so I've set up a page on the wiki for folks to propose sessions. Please
> > take a minute to add topics you'd like to address during the hack day
> > with a description and suggested leader for that session. (This is
> > usually going to be the person proposing the idea, but might be a
> > suggestion for someone else if you think they'd be better suited.)
> >
> > Here's the page on the wiki - please go wild and add the topics you
> > think we need to hit on during the hack day! (Note, adding a session
> > isn't a guarantee that it'll be chosen, but it's a good way to build
> > interest ahead of time and ensure the right folks are there and ready to
> > roll.)
> >
> > https://cwiki.apache.org/confluence/display/CLOUDSTACK/Hack+Day+at+CCC13
> >
> > Best,
> >
> > jzb
> > --
> > Joe Brockmeier
> > j...@zonker.net
> > Twitter: @jzb
> > http://www.dissociatedpress.net/
> >
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkow...@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud
> *™*
>


Review Request: Fixed CLOUDSTACK-3004 [script] ssvm_check remove the duplicate file from consoleproxy and include the script from secondary storage folder while packing iso

2013-06-14 Thread Rajesh Battala

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11874/
---

Review request for cloudstack and Chip Childers.


Description
---

Issue: .There are two ssvm_check scripts(duplicates).

./services/console-proxy/server/scripts/ssvm-check.sh
./services/secondary-storage/scripts/ssvm-check.sh

When building the code, these scripts will go to systemvm.zip, systemvm.zip 
will be packaged into systemvm.iso. 

systemvm-descriptor.xml will define what all the scripts should package.
As per the descriptor xml,  the ssvm-check script under console-proxy is 
getting into systemvm.zip.

I had verified the ssvm-check script with the fix under console-proxy. The 
systemvm.zip is getting update properly and making into systemvm.iso.
And ssvm is getting the right script now.

Changes made in script under ./services/secondary-storage/scripts/ssvm-check.sh 
is not getting into systemvm.iso

Fixed:
Modified systemvm-descriptor.xml to pick the ssvm-check.sh form 
./services/secondary-storage/scripts/ssvm-check.sh
removed the duplicate file which is creating confusion 
(./services/console-proxy/server/scripts/ssvm-check.sh)


This addresses bug CLOUDSTACK-3004.


Diffs
-

  services/console-proxy/server/scripts/ssvm-check.sh 7b83c98 
  services/console-proxy/server/systemvm-descriptor.xml e34026b 

Diff: https://reviews.apache.org/r/11874/diff/


Testing
---

Tested by generating the systemvm.zip , the ssvm-check file is getting copied 
into the zip from the ./services/secondary-storage/scripts/ssvm-check.sh


Thanks,

Rajesh Battala



RE: Regarding ssvm-check script

2013-06-14 Thread Rajesh Battala
I had fixed the issue and posted the patch for the  review 
@https://reviews.apache.org/r/11874/ 

Thanks
Rajesh Battala

> -Original Message-
> From: Rajesh Battala [mailto:rajesh.batt...@citrix.com]
> Sent: Friday, June 14, 2013 1:38 PM
> To: dev@cloudstack.apache.org
> Subject: RE: Regarding ssvm-check script
> 
> I had created a ticket for the issue
> https://issues.apache.org/jira/browse/CLOUDSTACK-3004
> Will work on it and send the patch for the review.
> 
> Fix would be, remove the script from console proxy folder, modify the
> systemvm-description.xml  to include the ssvm_check file from the proper
> location.
> 
> Thanks
> Rajesh Battala
> 
> > -Original Message-
> > From: Rajesh Battala [mailto:rajesh.batt...@citrix.com]
> > Sent: Friday, June 14, 2013 1:07 PM
> > To: dev@cloudstack.apache.org
> > Subject: RE: Regarding ssvm-check script
> >
> > Nitin,
> > Yes, the ssvm under console proxy should be removed. It make more
> > sense for the script to be coming from secondary-storage folder path.
> > If you are removing the script under consoleproxy, make the change in
> > the systemvm-description.xml to pick the ssvm script from the
> > secondary storage scripts.
> > If you don't make that change systemvm.iso might not have the ssvm
> script.
> >
> > Thanks
> > Rajesh Battala
> >
> > > -Original Message-
> > > From: Nitin Mehta [mailto:nitin.me...@citrix.com]
> > > Sent: Friday, June 14, 2013 12:09 PM
> > > To: dev@cloudstack.apache.org
> > > Subject: Re: Regarding ssvm-check script
> > >
> > > Hi Rajesh,
> > > Please find my comments inline
> > >
> > > On 13/06/13 10:40 PM, "Rajesh Battala" 
> > wrote:
> > >
> > > >Hi All,
> > > >While fixing an issue ( https://reviews.apache.org/r/11862/ )in
> > > >ssvm-check script I figured out some issues.
> > > >
> > > >1.There are two ssvm_check scripts(duplicates).
> > > >
> > > >./services/console-proxy/server/scripts/ssvm-check.sh
> > > >./services/secondary-storage/scripts/ssvm-check.sh
> > > >
> > > >When building the code, these scripts will go to systemvm.zip,
> > > >systemvm.zip will be packaged into systemvm.iso.
> > > >
> > > >systemvm-descriptor.xml will define what all the scripts should package.
> > > >As per the descriptor xml,  the ssvm-check script under
> > > >console-proxy is getting into systemvm.zip.
> > >
> > > Shouldn't it be the other way round ? I mean the ssvm script under
> > > secondary-storage should have come in ?
> > >
> > > >
> > > >I had verified the ssvm-check script with the fix under console-proxy.
> > > >The systemvm.zip is getting update properly and making into
> > systemvm.iso.
> > > >And ssvm is getting the right script now.
> > > >
> > > >Changes made in script under
> > > >./services/secondary-storage/scripts/ssvm-check.sh is not getting
> > > >into systemvm.iso
> > > >
> > > >I feel the script is redundant and creating confusion.
> > > >Can we remove the script in one location?
> > >
> > > I would remove it from console-proxy for the sake of consistency and
> > > make sure the one under secondary-storage gets in. Also while doing
> > > so hopefully the final location (folder structure) of the script is
> > > not disturbed in the ssvm
> > >
> > > >
> > > >Thanks
> > > >Rajesh Battala



Re: Summary of IRC meeting in #cloudstack-meeting, Wed Jun 12 17:08:56 2013

2013-06-14 Thread Noah Slater
While we're talking about bot etiquette... ;) If people used #info and
#action, important takeaway points would be included at the top of the
email. As it is, it's a bit hard to read through the logs if you just want
to get a jist.


On 13 June 2013 15:56, Joe Brockmeier  wrote:

> On Thu, Jun 13, 2013, at 04:48 AM, Daan Hoogland wrote:
> > Reading the meeting summary, I learned about the [off] directive the hard
> > way. Is there a irc-etiquette for dummies somewhere that handles ASFBot
> > and other things newbees should know?
>
> There's a manual for ASFBot here:
>
> http://wilderness.apache.org/manual.html
>
> Best,
>
> jzb
> --
> Joe Brockmeier
> j...@zonker.net
> Twitter: @jzb
> http://www.dissociatedpress.net/
>



-- 
NS


NFS Cache storage query

2013-06-14 Thread Sanjeev Neelarapu
Hi,

I have a query on how to add NFS Cache storage.
Before creating a zone if we create a secondary storage with s3 as the storage 
provider and don't select NFS Cache Storage then we treat it as S3 at region 
level.
Later I create a zone and at "add secondary storage" creation wizard in UI if I 
select NFS as secondary storage provider will it be treated as NFS Cache 
Storage? If not is there a way to add NFS cache storage for that zone?

Thanks,
Sanjeev



instance is not coming on the vmware deployment

2013-06-14 Thread Srikanteswararao Talluri
I am encountering the following error while deploying VM . Has anyone observed 
this issue?

 (DirectAgent-207:10.147.40.24) Failed to authentication SSH user root on host 
10.147.40.168
2013-06-14 21:27:01,711 ERROR [vmware.resource.VmwareResource] 
(DirectAgent-207:10.147.40.24) Unable to execute NetworkUsage command on DomR 
(10.147.40.168), domR may not be ready yet. failure due to Exception: 
java.lang.Exception
Message: Failed to authentication SSH user root on host 10.147.40.168

java.lang.Exception: Failed to authentication SSH user root on host 
10.147.40.168
at com.cloud.utils.ssh.SshHelper.sshExecute(SshHelper.java:144)
at com.cloud.utils.ssh.SshHelper.sshExecute(SshHelper.java:37)
at 
com.cloud.hypervisor.vmware.resource.VmwareResource.networkUsage(VmwareResource.java:5451)
at 
com.cloud.hypervisor.vmware.resource.VmwareResource.execute(VmwareResource.java:2301)
at 
com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(VmwareResource.java:480)
at 
com.cloud.agent.manager.DirectAgentAttache$Task.run(DirectAgentAttache.java:186)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:165)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:679)
2013-06-14 21:27:01,723 DEBUG [cloud.api.ApiServlet] (catalina-exec-25:null) 
===START===  10.101.255.119 -- GET  
command=queryAsyncJobResult&jobId=028ed6bf-3155-497b-b680-f621e7267fbb&response=json&sessionkey=cA5artzeaAoXYJ8O5MNXTCjAdZU%3D&_=1371205895591

Thanks,
~Talluri


RE: instance is not coming on the vmware deployment

2013-06-14 Thread Rajesh Battala
The issue would be ssh keys or the VR is not up

> -Original Message-
> From: Srikanteswararao Talluri [mailto:srikanteswararao.tall...@citrix.com]
> Sent: Friday, June 14, 2013 6:55 PM
> To: dev@cloudstack.apache.org
> Subject: instance is not coming on the vmware deployment
> 
> I am encountering the following error while deploying VM . Has anyone
> observed this issue?
> 
>  (DirectAgent-207:10.147.40.24) Failed to authentication SSH user root on
> host 10.147.40.168
> 2013-06-14 21:27:01,711 ERROR [vmware.resource.VmwareResource]
> (DirectAgent-207:10.147.40.24) Unable to execute NetworkUsage command
> on DomR (10.147.40.168), domR may not be ready yet. failure due to
> Exception: java.lang.Exception
> Message: Failed to authentication SSH user root on host 10.147.40.168
> 
> java.lang.Exception: Failed to authentication SSH user root on host
> 10.147.40.168 at
> com.cloud.utils.ssh.SshHelper.sshExecute(SshHelper.java:144)
> at com.cloud.utils.ssh.SshHelper.sshExecute(SshHelper.java:37)
> at
> com.cloud.hypervisor.vmware.resource.VmwareResource.networkUsage(Vm
> wareResource.java:5451)
> at
> com.cloud.hypervisor.vmware.resource.VmwareResource.execute(VmwareRe
> source.java:2301)
> at
> com.cloud.hypervisor.vmware.resource.VmwareResource.executeRequest(V
> mwareResource.java:480)
> at
> com.cloud.agent.manager.DirectAgentAttache$Task.run(DirectAgentAttache.j
> ava:186)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.acc
> ess$101(ScheduledThreadPoolExecutor.java:165)
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run
> (ScheduledThreadPoolExecutor.java:266)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.jav
> a:1110)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.ja
> va:603)
> at java.lang.Thread.run(Thread.java:679)
> 2013-06-14 21:27:01,723 DEBUG [cloud.api.ApiServlet] (catalina-exec-25:null)
> ===START===  10.101.255.119 -- GET
> command=queryAsyncJobResult&jobId=028ed6bf-3155-497b-b680-
> f621e7267fbb&response=json&sessionkey=cA5artzeaAoXYJ8O5MNXTCjAdZU
> %3D&_=1371205895591
> 
> 
> Thanks,
> ~Talluri


Re: Object_Store storage refactor Meeting Notes

2013-06-14 Thread Chip Childers
On Fri, Jun 14, 2013 at 01:13:37AM +, Animesh Chaturvedi wrote:
> [Animesh>] I was not sure if we have an open slot in Collab for design 
> session on storage discussed above but now that Joe mentioned the HackDay 
> being an open collaborative day in other email it's probably best to have 
> that discussion as one of the session on HACK day. 
>

Great idea!


Re: NFS Cache storage query

2013-06-14 Thread Chip Childers
On Fri, Jun 14, 2013 at 01:06:30PM +, Sanjeev Neelarapu wrote:
> Hi,
> 
> I have a query on how to add NFS Cache storage.
> Before creating a zone if we create a secondary storage with s3 as the 
> storage provider and don't select NFS Cache Storage then we treat it as S3 at 
> region level.
> Later I create a zone and at "add secondary storage" creation wizard in UI if 
> I select NFS as secondary storage provider will it be treated as NFS Cache 
> Storage? If not is there a way to add NFS cache storage for that zone?
> 
> Thanks,
> Sanjeev
>

Based on the thread talking about this [1], I'm not sure that it will be
implemented this way anymore.

-chip

[1] http://markmail.org/message/c73nagj45q6iktfh


Re: Using "In Progress" status in JIRA

2013-06-14 Thread Chip Childers
On Fri, Jun 14, 2013 at 08:00:33AM +, Koushik Das wrote:
> 
> > -Original Message-
> > From: Animesh Chaturvedi [mailto:animesh.chaturv...@citrix.com]
> > Sent: Friday, June 14, 2013 6:32 AM
> > To: dev@cloudstack.apache.org
> > Subject: Using "In Progress" status in JIRA
> > 
> > Folks
> > 
> > It seems that we do not use "In Progress" status in JIRA as often as we
> > should. Issues seem to change from "Open" to "Resolved" directly. IMHO
> > marking an issue "In Progress" provides much better visibility and helps
> > communicate to community that you are working on that item.
> > 
> 
> Isn't assigning a bug would also mean the same? The 'In Progress' is useful 
> when bugs are triaged on a daily basis, most probably towards end of a 
> release cycle.
> 
> > If for whatever reason you stop working on that item and will not attend to 
> > it
> > you can always mark it back as "Open".
> > 
> 
> If someone has stopped working on a bug simply unassign so that anyone else 
> can pick it up.
> 
> > For 4.2 I see that 40 features/ improvement tickets changed from "Open" to
> > "Resolved" directly. And for Bugs "509" bugs moved from "Open" to
> > "Resolved" directly.
> > 
> 
> I don't see any issues in that.

+1 if it's a quick fix / change.  IMO, setting the In Progress status is
a "nice thing to do for the community" when something is going to take a
long time to work on.

> 
> > 
> > Thanks
> > Animesh
> 


Re: Review Request: double slash fix for windows based nfs servers [CLOUDSTACK-2968]

2013-06-14 Thread daan Hoogland

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11861/
---

(Updated June 14, 2013, 2:18 p.m.)


Review request for cloudstack.


Changes
---

made conditional warning message on autofix paths


Description
---

double slash breaks windows based nfs servers [CLOUDSTACK-2968]


This addresses bug CLOUDSTACK-2968.


Diffs (updated)
-

  api/src/com/cloud/storage/template/TemplateInfo.java 6559d73 
  core/src/com/cloud/agent/api/storage/CreateEntityDownloadURLCommand.java 
98a957f 
  core/src/com/cloud/agent/api/storage/DownloadAnswer.java bb7b8a9 
  core/src/com/cloud/storage/template/TemplateLocation.java 58d023a 
  engine/schema/src/com/cloud/storage/VMTemplateHostVO.java b8dfc41 
  
engine/storage/src/org/apache/cloudstack/storage/motion/AncientDataMotionStrategy.java
 a6880c3 
  server/src/com/cloud/storage/download/DownloadListener.java 1d48803 
  server/src/com/cloud/storage/download/DownloadMonitorImpl.java f72a563 
  server/src/com/cloud/template/HypervisorTemplateAdapter.java 322f32e 
  server/src/com/cloud/template/TemplateManagerImpl.java 517d4ba 

Diff: https://reviews.apache.org/r/11861/diff/


Testing
---

database analysis


Thanks,

daan Hoogland



Re: Object_Store storage refactor Meeting Notes

2013-06-14 Thread John Burwell
Nitin,

Please see my comments in-line below.

Thanks,
-John

On Jun 14, 2013, at 4:16 AM, Nitin Mehta  wrote:

> 
> 
> On 13/06/13 10:08 PM, "John Burwell"  wrote:
> 
>> All,
>> 
>> Edison Su, Min Chen, Animesh Chaturvedi, and myself met via
>> teleconference on 11 June 2013 @ 1:30 PM EDT.  The goal of the meeting
>> was determine the path forward for merging the object_store branch by the
>> 4.2 freeze date, 30 June 2013.  The conversation focused on the following
>> topics:
>> 
>>  * Staging area mechanism
>>  * Removing dependencies from the Storage to the Hypervisor layer
>>  * Dependencies of other patches on object_store
>>  * QA's desire to start testing the patch ASAP
>> 
>> Min, Edison, and I agreed that the staging mechanism must age out files
>> and use a reference count to ensure that file in-use are not prematurely
>> purged.  While we agree that some form of reservation system is required,
>> Edison is concerned that it will be too conservative and create
>> bottlenecks.  
> 
> Can you please elaborate on the fact that it is too conservative - we just
> can't purge the files when they are still in use correct ? We can use a
> combination of LRU + reference count, trying to purge the LRU files if
> their reference count <= 0 as a start ?

The issue is not around determining when to purge a file.  The issue emerges 
around reservation sizes.  Currently, if we take a snapshot of a 2 TB volume, 
we would have to reserve 2 TB in the staging area to ensure that we would have 
enough space for the maximum potential size of the snapshot.  However, it is 
very unlikely that the snapshot will actually be this size.  The concern 
becomes that large reservations would start starving out other processes.  For 
4.2, we didn't feel there was enough time to devise a "smarter" reservation 
mechanism.  Therefore, in 4.3, there should be time to think the implications 
of various approaches through and devise a more efficient approach.

> 
>> 
>> As we delved deeper into the subject of the storage to hypervisor
>> dependencies and the reservation mechanism, we determined that NFS
>> storage would still need to be the size of the secondary storage data
>> set.  Since the hypervisor layer has not been completely fitted to the
>> new storage layer, NFS would be still required for a number of
>> operations.  Based on this realization, we decided to de-scope the
>> staging mechanism, and leave the 4.2 object store functionality the same
>> as 4.1.  Therefore, NFS will remain the secondary storage of record, and
>> object storage will serve as backup/multi-zone sync.
> 
> I am not sure how we can comment its going to be the same as 4.1 - is it
> from the end user perspective ? The internal semantics and their flow have
> changed. This needs to be elaborated and properly documented. Also I am
> not sure if the feature is supported on the upgrade path or is it ? Need
> more documentation here.

From an end user perspective, object stores will remain a backup of secondary 
storage.  The user interface will likely be a bit nicer, but in terms of system 
architecture, the roles of object storage and NFS remain the same in 4.2 and 
4.1.  To my mind, when we support object stores as native secondary storage 
targets, it will be a new feature, and we should continue to support the backup 
model as well.  Therefore, I don't see an upgrade path issue.  

> 
> 
>> In 4.3, we will fit the hypervisor layer for the new storage layer which
>> will allow object stores to server as secondary storage.  This work will
>> include removing the storage to hypervisor dependencies.  For 4.2, Edison
>> and Min have implemented the critical foundation necessary to establish
>> our next generation storage layer.  There simply was not enough time in
>> this development cycle to make these changes and fit the hypervisor layer.
>> 
>> Due to the size of the patch, Animesh voiced QA's concerned regarding
>> test scope and impact.  As such, we want to get the merge completed as
>> soon as possible to allow testing to begin.  We discussed breaking up the
>> patch, but we could not devise a reasonable set of chunks there were both
>> isolated and significantly testable.  Therefore, the patch can only be
>> delivered in its current state.  We also walked through potential
>> dependencies between the storage framework changes and the solidfire
>> branch.  It was determined that these two merges could occur
>> independently.
>> 
>> Finally, Animesh is going to setup a meeting at Citrix's Santa Clara
>> office on 26 June 2013 (the day after Collab ends) to discuss the path
>> forward for 4.3 and work through a high-level design/approach to fitting
>> the hypervisor layer to exploit the new storage capabilities.  Details
>> will be published to the dev mailing list.
>> 
>> Thanks,
>> -John
>> 
>> On Jun 11, 2013, at 2:08 AM, Min Chen  wrote:
>> 
>>> It is 11th June. John is not free between 9:15am to 10am, that is why we
>>> schedule it a

Re: committer wanted for review

2013-06-14 Thread Daan Hoogland
Hiroaki,

- auto-fix may happen where it is really required
>
I do not have a clear view on this, so I took the approach of better safe
then sorry. The submitted is what works. I don't see how the auto-fix
should ever be needed if the source is fixed. Hope you can live with this.

> - and if auto-fix happens, it should log it with
> WARN level.

Applied


regards,


On Fri, Jun 14, 2013 at 10:35 AM, Daan Hoogland wrote:

> Thanks Hiroaki,
>
> On Fri, Jun 14, 2013 at 3:41 AM, Hiroaki KAWAI 
> wrote:
>
>> I'd suggest:
>> - fix the generation of double slash itself
>>
> Is in the patch
>
>> - auto-fix may happen where it is really required
>> - and if auto-fix happens, it should log it with
>> WARN level.
>
> Good point, I will up the level in an update.
>
>>
>>
>>
>> (2013/06/13 21:15), Daan Hoogland wrote:
>>
>>> H,
>>>
>>> Can someone look at Review Request #11861>> org/r/11861/ > for me please?
>>>
>>> Thanks,
>>> Daan Hoogland
>>>
>>>
>>
>


Re: Test halting build every now and then

2013-06-14 Thread Daan Hoogland
I just did the same after it failed, I took my laptop to work yesterday, no
problem. Today at devopsdays it wouldn't goo past the NioTest again. I
checked localhost (with autodomains of different kinds) it didn't matter if
it resolved to 127.0.0.1 or to a public ip adres.

still looking,
Daan


On Thu, Jun 13, 2013 at 10:47 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> Not that this is a long-term solution or anything, but I just commented out
> the test. :)
>
>
> On Wed, Jun 12, 2013 at 3:57 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
>
> > This is what nslookup 127.0.0.1 brings up for me:
> >
> > Server: 172.16.1.2
> > Address: 172.16.1.2#53
> >
> > 1.0.0.127.in-addr.arpa name = localhost.
> >
> >
> > On Wed, Jun 12, 2013 at 3:54 PM, Chiradeep Vittal <
> > chiradeep.vit...@citrix.com> wrote:
> >
> >> I have the same configuration (Mac OS X Lion) but do not see the issue.
> >> Wonder what
> >> nslookup 127.0.0.1 shows on your host.
> >>
> >>
> >>
> >> On 6/12/13 2:39 PM, "Mike Tutkowski" 
> >> wrote:
> >>
> >> >Ah, too bad. :)
> >> >
> >> >So, it's not a huge deal. I just thought if we already had a solution
> out
> >> >there that I missed that maybe I could implement it.
> >> >
> >> >I would consider it a low-priority issue.
> >> >
> >> >Thanks!
> >> >
> >> >
> >> >On Wed, Jun 12, 2013 at 3:34 PM, Sheng Yang  wrote:
> >> >
> >> >> I've added clean, still unable to reproduce.
> >> >>
> >> >> If we can reproduce it in Eclipse, then it would be very easy to
> track
> >> >>down
> >> >> what's wrong indeed. Otherwise we may need to print debug info
> >> >>everywhere.
> >> >>
> >> >> --Sheng
> >> >>
> >> >>
> >> >> On Wed, Jun 12, 2013 at 2:16 PM, Mike Tutkowski <
> >> >> mike.tutkow...@solidfire.com> wrote:
> >> >>
> >> >> > In my case, I see it when running
> >> >> >
> >> >> > mvn -P developer,systemvm clean install
> >> >> >
> >> >> > from the Terminal in Mac OS X.
> >> >> >
> >> >> > Removing clean seems to give it a better shoot at not halting.
> >> >> >
> >> >> >
> >> >> > On Wed, Jun 12, 2013 at 3:11 PM, Sheng Yang 
> >> wrote:
> >> >> >
> >> >> > > Eclipse didn't complain for me.
> >> >> > >
> >> >> > > BTW: I am using Linux as development environment.
> >> >> > >
> >> >> > > --Sheng
> >> >> > >
> >> >> > >
> >> >> > > On Wed, Jun 12, 2013 at 1:48 PM, Daan Hoogland <
> >> >> > > dhoogl...@schubergphilis.com
> >> >> > > > wrote:
> >> >> > >
> >> >> > > > Macosx?
> >> >> > > > Eclipse?
> >> >> > > >
> >> >> > > > I will try on windows sometime soon (with wireless).
> >> >> > > >
> >> >> > > > -Original Message-
> >> >> > > > From: Sheng Yang [mailto:sh...@yasker.org]
> >> >> > > > Sent: woensdag 12 juni 2013 22:37
> >> >> > > > To: 
> >> >> > > > Subject: Re: Test halting build every now and then
> >> >> > > >
> >> >> > > > I tried to look into this, but it's really hard for me to
> >> >>reproduce
> >> >> > > it(I've
> >> >> > > > run the case for 50 times and no show of the issue).
> >> >> > > >
> >> >> > > > The bash command I used is: for i in {1..50}; do mvn
> >> >>-Dtest=NioTest
> >> >> > test
> >> >> > > > -pl utils; done
> >> >> > > >
> >> >> > > > From the log, it looks like server is up but client didn't
> >> >>connect to
> >> >> > the
> >> >> > > > server. The correct log of whole process should be:
> >> >> > > >
> >> >> > > > 2013-06-12 13:30:50,079 INFO  [utils.testcase.NioTest] (main:)
> >> >>Test
> >> >> > > > 2013-06-12 13:30:50,103 INFO  [utils.nio.NioServer]
> >> >> > > > (NioTestServer-Selector:) NioConnection started and listening
> on
> >> >> > > > 0.0.0.0/0.0.0.0:
> >> >> > > > 2013-06-12 13:30:50,109 INFO  [utils.nio.NioClient]
> >> >> > > > (NioTestServer-Selector:) Connecting to 127.0.0.1:
> >> >> > > > 2013-06-12 13:30:50,351 INFO  [utils.testcase.NioTest]
> >> >> > > > (NioTestServer-Handler-1:) Server: Received CONNECT task
> >> >> > > > 2013-06-12 13:30:50,388 INFO  [utils.nio.NioClient]
> >> >> > > > (NioTestServer-Selector:) SSL: Handshake done
> >> >> > > > 2013-06-12 13:30:50,389 INFO  [utils.nio.NioClient]
> >> >> > > > (NioTestServer-Selector:) Connected to 127.0.0.1:
> >> >> > > > 2013-06-12 13:30:50,389 INFO  [utils.testcase.NioTest]
> >> >> > > > (NioTestServer-Handler-1:) Client: Received CONNECT task
> >> >> > > > 2013-06-12 13:30:51,406 INFO  [utils.testcase.NioTest] (main:)
> >> >> Client:
> >> >> > > Data
> >> >> > > > sent
> >> >> > > > 2013-06-12 13:30:51,406 INFO  [utils.testcase.NioTest] (main:)
> >> >> Client:
> >> >> > > Data
> >> >> > > > sent
> >> >> > > > 2013-06-12 13:30:51,556 INFO  [utils.testcase.NioTest]
> >> >> > > > (NioTestServer-Handler-2:) Server: Received DATA task
> >> >> > > > 2013-06-12 13:30:51,597 INFO  [utils.testcase.NioTest]
> >> >> > > > (NioTestServer-Handler-3:) Server: Received DATA task
> >> >> > > > 2013-06-12 13:30:51,834 INFO  [utils.testcase.NioTest]
> >> >> > > > (NioTestServer-Handler-2:) Verify done.
> >> >> > > > 2013-06-12 13:30:51,856 INFO  [utils.testcase.NioTest]
> >> 

Re: Review Request: Fixed CLOUDSTACK-3004 [script] ssvm_check remove the duplicate file from consoleproxy and include the script from secondary storage folder while packing iso

2013-06-14 Thread Nitin Mehta

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11874/#review21905
---

Ship it!


Ship It!

- Nitin Mehta


On June 14, 2013, 10:09 a.m., Rajesh Battala wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11874/
> ---
> 
> (Updated June 14, 2013, 10:09 a.m.)
> 
> 
> Review request for cloudstack and Chip Childers.
> 
> 
> Description
> ---
> 
> Issue: .There are two ssvm_check scripts(duplicates).
> 
> ./services/console-proxy/server/scripts/ssvm-check.sh
> ./services/secondary-storage/scripts/ssvm-check.sh
> 
> When building the code, these scripts will go to systemvm.zip, systemvm.zip 
> will be packaged into systemvm.iso. 
> 
> systemvm-descriptor.xml will define what all the scripts should package.
> As per the descriptor xml,  the ssvm-check script under console-proxy is 
> getting into systemvm.zip.
> 
> I had verified the ssvm-check script with the fix under console-proxy. The 
> systemvm.zip is getting update properly and making into systemvm.iso.
> And ssvm is getting the right script now.
> 
> Changes made in script under 
> ./services/secondary-storage/scripts/ssvm-check.sh is not getting into 
> systemvm.iso
> 
> Fixed:
> Modified systemvm-descriptor.xml to pick the ssvm-check.sh form 
> ./services/secondary-storage/scripts/ssvm-check.sh
> removed the duplicate file which is creating confusion 
> (./services/console-proxy/server/scripts/ssvm-check.sh)
> 
> 
> This addresses bug CLOUDSTACK-3004.
> 
> 
> Diffs
> -
> 
>   services/console-proxy/server/scripts/ssvm-check.sh 7b83c98 
>   services/console-proxy/server/systemvm-descriptor.xml e34026b 
> 
> Diff: https://reviews.apache.org/r/11874/diff/
> 
> 
> Testing
> ---
> 
> Tested by generating the systemvm.zip , the ssvm-check file is getting copied 
> into the zip from the ./services/secondary-storage/scripts/ssvm-check.sh
> 
> 
> Thanks,
> 
> Rajesh Battala
> 
>



enableStorageMaintenance

2013-06-14 Thread La Motta, David
…works great for putting down the storage into maintenance mode (looking 
forward seeing this for secondary storage as well!).

Now the question is, after I've run it… how do I know when it is done so I can 
operate on the volume?

Poll using updateStoragePool and query the state for "Maintenance"?  What about 
introducing the ability to pass in callback URLs to the REST call?

Thx.



David La Motta
Technical Marketing Engineer
Citrix Solutions

NetApp
919.476.5042
dlamo...@netapp.com





Re: Review Request: Fixed CLOUDSTACK-3004 [script] ssvm_check remove the duplicate file from consoleproxy and include the script from secondary storage folder while packing iso

2013-06-14 Thread Nitin Mehta


> On June 14, 2013, 2:44 p.m., Nitin Mehta wrote:
> > Ship It!

Hi Rajesh - I get the error below while trying to apply your fix. Could you 
please correct and resubmit ?
 
Nitins-MacBook-Air:cloudstack nitinmehta$ git apply --whitespace=fix 
../0001-CLOUDSTACK-3004-\[script\]-ssvm_check-remove-the-duplicate-file-from-consoleproxy-and-include-the-script-from-secondary-storage-folder-while-packing-iso.patch
 
error: patch failed: services/console-proxy/server/scripts/ssvm-check.sh:1
error: services/console-proxy/server/scripts/ssvm-check.sh: patch does not apply


- Nitin


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11874/#review21905
---


On June 14, 2013, 10:09 a.m., Rajesh Battala wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11874/
> ---
> 
> (Updated June 14, 2013, 10:09 a.m.)
> 
> 
> Review request for cloudstack and Chip Childers.
> 
> 
> Description
> ---
> 
> Issue: .There are two ssvm_check scripts(duplicates).
> 
> ./services/console-proxy/server/scripts/ssvm-check.sh
> ./services/secondary-storage/scripts/ssvm-check.sh
> 
> When building the code, these scripts will go to systemvm.zip, systemvm.zip 
> will be packaged into systemvm.iso. 
> 
> systemvm-descriptor.xml will define what all the scripts should package.
> As per the descriptor xml,  the ssvm-check script under console-proxy is 
> getting into systemvm.zip.
> 
> I had verified the ssvm-check script with the fix under console-proxy. The 
> systemvm.zip is getting update properly and making into systemvm.iso.
> And ssvm is getting the right script now.
> 
> Changes made in script under 
> ./services/secondary-storage/scripts/ssvm-check.sh is not getting into 
> systemvm.iso
> 
> Fixed:
> Modified systemvm-descriptor.xml to pick the ssvm-check.sh form 
> ./services/secondary-storage/scripts/ssvm-check.sh
> removed the duplicate file which is creating confusion 
> (./services/console-proxy/server/scripts/ssvm-check.sh)
> 
> 
> This addresses bug CLOUDSTACK-3004.
> 
> 
> Diffs
> -
> 
>   services/console-proxy/server/scripts/ssvm-check.sh 7b83c98 
>   services/console-proxy/server/systemvm-descriptor.xml e34026b 
> 
> Diff: https://reviews.apache.org/r/11874/diff/
> 
> 
> Testing
> ---
> 
> Tested by generating the systemvm.zip , the ssvm-check file is getting copied 
> into the zip from the ./services/secondary-storage/scripts/ssvm-check.sh
> 
> 
> Thanks,
> 
> Rajesh Battala
> 
>



Re: Review Request: double slash fix for windows based nfs servers [CLOUDSTACK-2968]

2013-06-14 Thread John Burwell

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11861/#review21907
---



api/src/com/cloud/storage/template/TemplateInfo.java


.getName() is unnecessary.  Just pass the class:

private static Logger s_logger = Logger.getLogger(TemplateInfo.class);



api/src/com/cloud/storage/template/TemplateInfo.java


Null is not a sufficient check for blank.  Use StringUtils#isBlank to catch 
both null and instances containing only spaces.



api/src/com/cloud/storage/template/TemplateInfo.java


WARN should be used to let an operator know there is a condition in the 
system that could lead to instability.  A data cleanup will not result in 
eventual system instability.  I recommend converting to debug or trace.



api/src/com/cloud/storage/template/TemplateInfo.java


Why are fixing the path in both the getter and setter?  I recommend fixing 
it in either method, not both.



core/src/com/cloud/agent/api/storage/CreateEntityDownloadURLCommand.java


See previous comment regarding logger initialization.



core/src/com/cloud/agent/api/storage/CreateEntityDownloadURLCommand.java


Extract this method to com.cloud.utils.FileUtil and reference from both 
CreateEntityDownloadURLCommand and TemplateInfo.  Also, add unit tests for it.



core/src/com/cloud/storage/template/TemplateLocation.java


Why are is the path being manipulated in addition to the work being down by 
the TemplateInfo class?  These rules should be completely encapsulated in the 
TemplateInfo class.



engine/schema/src/com/cloud/storage/VMTemplateHostVO.java


See previous note about logger initialization



engine/schema/src/com/cloud/storage/VMTemplateHostVO.java


See previous note about extraction of this method and unit testing.



server/src/com/cloud/storage/download/DownloadMonitorImpl.java


Wrap in an if (s_logger.isDebugEnabled()) block to avoid expensive string 
concatenation when DEBUG logging is not enabled.



server/src/com/cloud/storage/download/DownloadMonitorImpl.java


Wrap in an if (s_logger.isDebugEnabled()) block to avoid expensive string 
concatenation when DEBUG logging is not enabled.



server/src/com/cloud/template/TemplateManagerImpl.java


Wrap in an if (s_logger.isDebugEnabled()) block to avoid expensive string 
concatenation when DEBUG logging is not enabled.


- John Burwell


On June 14, 2013, 2:18 p.m., daan Hoogland wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11861/
> ---
> 
> (Updated June 14, 2013, 2:18 p.m.)
> 
> 
> Review request for cloudstack.
> 
> 
> Description
> ---
> 
> double slash breaks windows based nfs servers [CLOUDSTACK-2968]
> 
> 
> This addresses bug CLOUDSTACK-2968.
> 
> 
> Diffs
> -
> 
>   api/src/com/cloud/storage/template/TemplateInfo.java 6559d73 
>   core/src/com/cloud/agent/api/storage/CreateEntityDownloadURLCommand.java 
> 98a957f 
>   core/src/com/cloud/agent/api/storage/DownloadAnswer.java bb7b8a9 
>   core/src/com/cloud/storage/template/TemplateLocation.java 58d023a 
>   engine/schema/src/com/cloud/storage/VMTemplateHostVO.java b8dfc41 
>   
> engine/storage/src/org/apache/cloudstack/storage/motion/AncientDataMotionStrategy.java
>  a6880c3 
>   server/src/com/cloud/storage/download/DownloadListener.java 1d48803 
>   server/src/com/cloud/storage/download/DownloadMonitorImpl.java f72a563 
>   server/src/com/cloud/template/HypervisorTemplateAdapter.java 322f32e 
>   server/src/com/cloud/template/TemplateManagerImpl.java 517d4ba 
> 
> Diff: https://reviews.apache.org/r/11861/diff/
> 
> 
> Testing
> ---
> 
> database analysis
> 
> 
> Thanks,
> 
> daan Hoogland
> 
>



Re: committer wanted for review

2013-06-14 Thread John Burwell
Daan,

I just looked through the review request, and published my comments.

Thanks,
-John

On Jun 14, 2013, at 10:27 AM, Daan Hoogland  wrote:

> Hiroaki,
> 
> - auto-fix may happen where it is really required
>> 
> I do not have a clear view on this, so I took the approach of better safe
> then sorry. The submitted is what works. I don't see how the auto-fix
> should ever be needed if the source is fixed. Hope you can live with this.
> 
>> - and if auto-fix happens, it should log it with
>> WARN level.
> 
> Applied
> 
> 
> regards,
> 
> 
> On Fri, Jun 14, 2013 at 10:35 AM, Daan Hoogland 
> wrote:
> 
>> Thanks Hiroaki,
>> 
>> On Fri, Jun 14, 2013 at 3:41 AM, Hiroaki KAWAI 
>> wrote:
>> 
>>> I'd suggest:
>>> - fix the generation of double slash itself
>>> 
>> Is in the patch
>> 
>>> - auto-fix may happen where it is really required
>>> - and if auto-fix happens, it should log it with
>>> WARN level.
>> 
>> Good point, I will up the level in an update.
>> 
>>> 
>>> 
>>> 
>>> (2013/06/13 21:15), Daan Hoogland wrote:
>>> 
 H,
 
 Can someone look at Review Request #11861> for me please?
 
 Thanks,
 Daan Hoogland
 
 
>>> 
>> 



RE: Review Request: Fixed CLOUDSTACK-3004 [script] ssvm_check remove the duplicate file from consoleproxy and include the script from secondary storage folder while packing iso

2013-06-14 Thread Rajesh Battala
Is it because the patch contains removing file ?

> -Original Message-
> From: Nitin Mehta [mailto:nore...@reviews.apache.org] On Behalf Of Nitin
> Mehta
> Sent: Friday, June 14, 2013 8:24 PM
> To: Chip Childers
> Cc: Rajesh Battala; cloudstack; Nitin Mehta
> Subject: Re: Review Request: Fixed CLOUDSTACK-3004 [script] ssvm_check
> remove the duplicate file from consoleproxy and include the script from
> secondary storage folder while packing iso
> 
> 
> 
> > On June 14, 2013, 2:44 p.m., Nitin Mehta wrote:
> > > Ship It!
> 
> Hi Rajesh - I get the error below while trying to apply your fix. Could you
> please correct and resubmit ?
> 
> Nitins-MacBook-Air:cloudstack nitinmehta$ git apply --whitespace=fix
> ../0001-CLOUDSTACK-3004-\[script\]-ssvm_check-remove-the-duplicate-file-
> from-consoleproxy-and-include-the-script-from-secondary-storage-folder-
> while-packing-iso.patch
> error: patch failed: services/console-proxy/server/scripts/ssvm-check.sh:1
> error: services/console-proxy/server/scripts/ssvm-check.sh: patch does not
> apply
> 
> 
> - Nitin
> 
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11874/#review21905
> ---
> 
> 
> On June 14, 2013, 10:09 a.m., Rajesh Battala wrote:
> >
> > ---
> > This is an automatically generated e-mail. To reply, visit:
> > https://reviews.apache.org/r/11874/
> > ---
> >
> > (Updated June 14, 2013, 10:09 a.m.)
> >
> >
> > Review request for cloudstack and Chip Childers.
> >
> >
> > Description
> > ---
> >
> > Issue: .There are two ssvm_check scripts(duplicates).
> >
> > ./services/console-proxy/server/scripts/ssvm-check.sh
> > ./services/secondary-storage/scripts/ssvm-check.sh
> >
> > When building the code, these scripts will go to systemvm.zip,
> systemvm.zip will be packaged into systemvm.iso.
> >
> > systemvm-descriptor.xml will define what all the scripts should package.
> > As per the descriptor xml,  the ssvm-check script under console-proxy is
> getting into systemvm.zip.
> >
> > I had verified the ssvm-check script with the fix under console-proxy. The
> systemvm.zip is getting update properly and making into systemvm.iso.
> > And ssvm is getting the right script now.
> >
> > Changes made in script under ./services/secondary-storage/scripts/ssvm-
> check.sh is not getting into systemvm.iso
> >
> > Fixed:
> > Modified systemvm-descriptor.xml to pick the ssvm-check.sh form
> ./services/secondary-storage/scripts/ssvm-check.sh
> > removed the duplicate file which is creating confusion
> (./services/console-proxy/server/scripts/ssvm-check.sh)
> >
> >
> > This addresses bug CLOUDSTACK-3004.
> >
> >
> > Diffs
> > -
> >
> >   services/console-proxy/server/scripts/ssvm-check.sh 7b83c98
> >   services/console-proxy/server/systemvm-descriptor.xml e34026b
> >
> > Diff: https://reviews.apache.org/r/11874/diff/
> >
> >
> > Testing
> > ---
> >
> > Tested by generating the systemvm.zip , the ssvm-check file is getting
> copied into the zip from the ./services/secondary-storage/scripts/ssvm-
> check.sh
> >
> >
> > Thanks,
> >
> > Rajesh Battala
> >
> >



Re: committer wanted for review

2013-06-14 Thread Daan Hoogland
H John,

I browsed through your comments and most I will apply. There is one where
you contradict Hiroaki. This is about the logging level for reporting a
changed path. I am going to follow my heart at this unless there is a
project directive on it.

regards,
Daan


On Fri, Jun 14, 2013 at 5:25 PM, John Burwell  wrote:

> Daan,
>
> I just looked through the review request, and published my comments.
>
> Thanks,
> -John
>
> On Jun 14, 2013, at 10:27 AM, Daan Hoogland 
> wrote:
>
> > Hiroaki,
> >
> > - auto-fix may happen where it is really required
> >>
> > I do not have a clear view on this, so I took the approach of better safe
> > then sorry. The submitted is what works. I don't see how the auto-fix
> > should ever be needed if the source is fixed. Hope you can live with
> this.
> >
> >> - and if auto-fix happens, it should log it with
> >> WARN level.
> >
> > Applied
> >
> >
> > regards,
> >
> >
> > On Fri, Jun 14, 2013 at 10:35 AM, Daan Hoogland  >wrote:
> >
> >> Thanks Hiroaki,
> >>
> >> On Fri, Jun 14, 2013 at 3:41 AM, Hiroaki KAWAI <
> ka...@stratosphere.co.jp>wrote:
> >>
> >>> I'd suggest:
> >>> - fix the generation of double slash itself
> >>>
> >> Is in the patch
> >>
> >>> - auto-fix may happen where it is really required
> >>> - and if auto-fix happens, it should log it with
> >>> WARN level.
> >>
> >> Good point, I will up the level in an update.
> >>
> >>>
> >>>
> >>>
> >>> (2013/06/13 21:15), Daan Hoogland wrote:
> >>>
>  H,
> 
>  Can someone look at Review Request #11861  org/r/11861/ > for me please?
> 
>  Thanks,
>  Daan Hoogland
> 
> 
> >>>
> >>
>
>


Automation analysis improvement

2013-06-14 Thread Rayees Namathponnan
Many of the automation test cases are not tearing down the  account properly; 
due to this resources are not getting released and followed test cases getting 
failed during VM deployment itself.

During automation run accounts are created with random number without any 
reference for test case (eg : test-N5QD8N), and it's hard to identify which 
test case not tearing down the account after complete the test.

Here my suggestion; we should create account name with test case name  (eg : 
test- VPCOffering-N5QD8N)

Any thoughts ?

Regards,
Rayees


Re: committer wanted for review

2013-06-14 Thread John Burwell
Daan,

Since a WARN indicates a condition that could lead to system instability, many 
folks configure their log analysis to trigger notifications on WARN and INFO.  
Does escaping a character in a path warrant meet that criteria?

Thanks,
-John

On Jun 14, 2013, at 11:52 AM, Daan Hoogland  wrote:

> H John,
> 
> I browsed through your comments and most I will apply. There is one where
> you contradict Hiroaki. This is about the logging level for reporting a
> changed path. I am going to follow my heart at this unless there is a
> project directive on it.
> 
> regards,
> Daan
> 
> 
> On Fri, Jun 14, 2013 at 5:25 PM, John Burwell  wrote:
> 
>> Daan,
>> 
>> I just looked through the review request, and published my comments.
>> 
>> Thanks,
>> -John
>> 
>> On Jun 14, 2013, at 10:27 AM, Daan Hoogland 
>> wrote:
>> 
>>> Hiroaki,
>>> 
>>> - auto-fix may happen where it is really required
 
>>> I do not have a clear view on this, so I took the approach of better safe
>>> then sorry. The submitted is what works. I don't see how the auto-fix
>>> should ever be needed if the source is fixed. Hope you can live with
>> this.
>>> 
 - and if auto-fix happens, it should log it with
 WARN level.
>>> 
>>> Applied
>>> 
>>> 
>>> regards,
>>> 
>>> 
>>> On Fri, Jun 14, 2013 at 10:35 AM, Daan Hoogland >> wrote:
>>> 
 Thanks Hiroaki,
 
 On Fri, Jun 14, 2013 at 3:41 AM, Hiroaki KAWAI <
>> ka...@stratosphere.co.jp>wrote:
 
> I'd suggest:
> - fix the generation of double slash itself
> 
 Is in the patch
 
> - auto-fix may happen where it is really required
> - and if auto-fix happens, it should log it with
> WARN level.
 
 Good point, I will up the level in an update.
 
> 
> 
> 
> (2013/06/13 21:15), Daan Hoogland wrote:
> 
>> H,
>> 
>> Can someone look at Review Request #11861> org/r/11861/ > for me please?
>> 
>> Thanks,
>> Daan Hoogland
>> 
>> 
> 
 
>> 
>> 



SRX Integration Issues.

2013-06-14 Thread Sean Truman
All,

I am trying to add an SRX 100 to Cloud Stack and keep getting a "Illegal
Group Reference"

Here is how I am trying to add the config.
IP Address: 10.0.2.1
Username: root
Password: password
Type: Juniper SRX Firewall
Public Interface: fe-0/0/0.0
Private Interface: fe-0/0/1.0
Usage interface:
Number of Retries: 2
Timeout: 300
Public network: untrust
Private network: trust
Capacity: 10



Here is my SRX configuration.

http://pastebin.com/nTVEM92p


Here is the only logs I get from management-server.log

http://pastebin.com/pWB0Kbtu

Any help would be greatly appreciated.

v/r
Sean


Re: NFS Cache storage query

2013-06-14 Thread Min Chen
Hi Sanjeev,

In 4.2 release, we require that a NFS cache storage has to be added if
you choose S3 as the storage provider since we haven't refactored
hypervisor side code to handle s3 directly by bypassing NFS caching, which
is the goal for 4.3 release. I see an issue with current UI, where user
can only add cache storage when he/she adds a S3 storage. We may need to
provide a way from UI to allow users to configure and display their NFS
cache. You can file a JIRA ticket for this UI enhancement.

Thanks
-min

On 6/14/13 6:35 AM, "Chip Childers"  wrote:

>On Fri, Jun 14, 2013 at 01:06:30PM +, Sanjeev Neelarapu wrote:
>> Hi,
>> 
>> I have a query on how to add NFS Cache storage.
>> Before creating a zone if we create a secondary storage with s3 as the
>>storage provider and don't select NFS Cache Storage then we treat it as
>>S3 at region level.
>> Later I create a zone and at "add secondary storage" creation wizard in
>>UI if I select NFS as secondary storage provider will it be treated as
>>NFS Cache Storage? If not is there a way to add NFS cache storage for
>>that zone?
>> 
>> Thanks,
>> Sanjeev
>>
>
>Based on the thread talking about this [1], I'm not sure that it will be
>implemented this way anymore.
>
>-chip
>
>[1] http://markmail.org/message/c73nagj45q6iktfh



Re: Object_Store storage refactor Meeting Notes

2013-06-14 Thread Min Chen
One comment regarding upgrade path: due to internal DB schema change
documented in our FS:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+Backup+Objec
t+Store+Plugin+Framework, we do need to handle upgrade cases in 4.2, which
are mainly related to data migration in DB level. I am working on that
right now, will check in a version to object_branch today.

Thanks
-min

On 6/14/13 7:17 AM, "John Burwell"  wrote:

>Nitin,
>
>Please see my comments in-line below.
>
>Thanks,
>-John
>
>On Jun 14, 2013, at 4:16 AM, Nitin Mehta  wrote:
>
>> 
>> 
>> On 13/06/13 10:08 PM, "John Burwell"  wrote:
>> 
>>> All,
>>> 
>>> Edison Su, Min Chen, Animesh Chaturvedi, and myself met via
>>> teleconference on 11 June 2013 @ 1:30 PM EDT.  The goal of the meeting
>>> was determine the path forward for merging the object_store branch by
>>>the
>>> 4.2 freeze date, 30 June 2013.  The conversation focused on the
>>>following
>>> topics:
>>> 
>>> * Staging area mechanism
>>> * Removing dependencies from the Storage to the Hypervisor layer
>>> * Dependencies of other patches on object_store
>>> * QA's desire to start testing the patch ASAP
>>> 
>>> Min, Edison, and I agreed that the staging mechanism must age out files
>>> and use a reference count to ensure that file in-use are not
>>>prematurely
>>> purged.  While we agree that some form of reservation system is
>>>required,
>>> Edison is concerned that it will be too conservative and create
>>> bottlenecks.  
>> 
>> Can you please elaborate on the fact that it is too conservative - we
>>just
>> can't purge the files when they are still in use correct ? We can use a
>> combination of LRU + reference count, trying to purge the LRU files if
>> their reference count <= 0 as a start ?
>
>The issue is not around determining when to purge a file.  The issue
>emerges around reservation sizes.  Currently, if we take a snapshot of a
>2 TB volume, we would have to reserve 2 TB in the staging area to ensure
>that we would have enough space for the maximum potential size of the
>snapshot.  However, it is very unlikely that the snapshot will actually
>be this size.  The concern becomes that large reservations would start
>starving out other processes.  For 4.2, we didn't feel there was enough
>time to devise a "smarter" reservation mechanism.  Therefore, in 4.3,
>there should be time to think the implications of various approaches
>through and devise a more efficient approach.
>
>> 
>>> 
>>> As we delved deeper into the subject of the storage to hypervisor
>>> dependencies and the reservation mechanism, we determined that NFS
>>> storage would still need to be the size of the secondary storage data
>>> set.  Since the hypervisor layer has not been completely fitted to the
>>> new storage layer, NFS would be still required for a number of
>>> operations.  Based on this realization, we decided to de-scope the
>>> staging mechanism, and leave the 4.2 object store functionality the
>>>same
>>> as 4.1.  Therefore, NFS will remain the secondary storage of record,
>>>and
>>> object storage will serve as backup/multi-zone sync.
>> 
>> I am not sure how we can comment its going to be the same as 4.1 - is it
>> from the end user perspective ? The internal semantics and their flow
>>have
>> changed. This needs to be elaborated and properly documented. Also I am
>> not sure if the feature is supported on the upgrade path or is it ? Need
>> more documentation here.
>
>From an end user perspective, object stores will remain a backup of
>secondary storage.  The user interface will likely be a bit nicer, but in
>terms of system architecture, the roles of object storage and NFS remain
>the same in 4.2 and 4.1.  To my mind, when we support object stores as
>native secondary storage targets, it will be a new feature, and we should
>continue to support the backup model as well.  Therefore, I don't see an
>upgrade path issue.
>
>> 
>> 
>>> In 4.3, we will fit the hypervisor layer for the new storage layer
>>>which
>>> will allow object stores to server as secondary storage.  This work
>>>will
>>> include removing the storage to hypervisor dependencies.  For 4.2,
>>>Edison
>>> and Min have implemented the critical foundation necessary to establish
>>> our next generation storage layer.  There simply was not enough time in
>>> this development cycle to make these changes and fit the hypervisor
>>>layer.
>>> 
>>> Due to the size of the patch, Animesh voiced QA's concerned regarding
>>> test scope and impact.  As such, we want to get the merge completed as
>>> soon as possible to allow testing to begin.  We discussed breaking up
>>>the
>>> patch, but we could not devise a reasonable set of chunks there were
>>>both
>>> isolated and significantly testable.  Therefore, the patch can only be
>>> delivered in its current state.  We also walked through potential
>>> dependencies between the storage framework changes and the solidfire
>>> branch.  It was determined that these two merges could occ

Re: [MERGE] disk_io_throttling to MASTER

2013-06-14 Thread John Burwell
Mike,

Querying the SAN only indicates the number of IOPS currently in use.  The 
allocator needs to check the number of IOPS committed which is tracked by 
CloudStack.  For 4.2, we should be able to query the number of IOPS committed 
to a DataStore, and determine whether or not the number requested can be 
fulfilled by that device.  It seems to be that a DataStore#getCommittedIOPS() : 
Long method would be sufficient.  DataStore's that don't support provisioned 
IOPS would return null. 

As I mentioned previously, I am very reluctant for any feature to come into 
master that can exhaust resources.

Thanks,
-John

On Jun 13, 2013, at 9:27 PM, Mike Tutkowski  
wrote:

> Yeah, I'm not sure I could come up with anything near an accurate
> assessment of how many IOPS are currently available on the SAN (or even a
> total number that are available for volumes). Not sure if there's yet an
> API call for that.
> 
> If I did know this number (total number of IOPS supported by the SAN), we'd
> still have to keep track of the total number of volumes we've created from
> CS on the SAN in terms of their IOPS. Also, if an admin issues an API call
> directly to the SAN to tweak the number of IOPS on a given volume or set of
> volumes (not supported from CS, but supported via the SolidFire API), our
> numbers in CS would be off.
> 
> I'm thinking verifying sufficient number of IOPS is a really good idea for
> a future release.
> 
> For 4.2 I think we should stick to having the allocator detect if storage
> QoS is desired and if the storage pool in question supports that feature.
> 
> If you really are over provisioned on your SAN in terms of IOPS or
> capacity, the SAN can let the admin know in several different ways (e-mail,
> SNMP, GUI).
> 
> 
> On Thu, Jun 13, 2013 at 7:02 PM, John Burwell  wrote:
> 
>> Mike,
>> 
>> Please see my comments in-line below.
>> 
>> Thanks,
>> -John
>> 
>> On Jun 13, 2013, at 6:09 PM, Mike Tutkowski 
>> wrote:
>> 
>>> Comments below in red.
>>> 
>>> Thanks
>>> 
>>> 
>>> On Thu, Jun 13, 2013 at 3:58 PM, John Burwell 
>> wrote:
>>> 
 Mike,
 
 Overall, I agree with the steps to below for 4.2.  However, we may want
>> to
 throw an exception if we can not fulfill a requested QoS.  If the user
>> is
 expecting that the hypervisor will provide a particular QoS, and that is
 not possible, it seems like we should inform them rather than silently
 ignoring their request.
 
>>> 
>>> Sure, that sounds reasonable.
>>> 
>>> We'd have to come up with some way for the allocators to know about the
>>> requested storage QoS and then query the candidate drivers.
>>> 
>>> Any thoughts on how we might do that?
>>> 
>>> 
 
 To collect my thoughts from previous parts of the thread, I am
 uncomfortable with the idea that the management server can overcommit a
 resource.  You had mentioned querying the device for available IOPS.
>> While
 that would be nice, it seems like we could fall back to a max IOPS and
 overcommit factor manually calculated and entered by the
 administrator/operator.  I think such threshold and allocation rails
>> should
 be added for both provisioned IOPS and throttled I/O -- it is a basic
 feature of any cloud orchestration platform.
 
>>> 
>>> Are you thinking this ability would make it into 4.2? Just curious what
>>> release we're talking about here. For the SolidFire SAN, you might have,
>>> say, four separate storage nodes to start (200,000 IOPS) and then later
>> add
>>> a new node (now you're at 250,000 IOPS). CS would have to have a way to
>>> know that the number of supported IOPS has increased.
>> 
>> Yes, I think we need some *basic*/conservative rails in 4.2.  For example,
>> we may only support expanding capacity in 4.2, and not handle any
>> re-balance scenarios --  node failure, addition, etc.   Extrapolating a
>> bit, the throttled I/O enhancement seems like it needs a similar set of
>> rails defined per host.
>> 
>>> 
>>> 
 
 For 4.3, I don't like the idea that a QoS would be expressed in a
 implementation specific manner.  I think we need to arrive at a general
 model that can be exploited by the allocators and planners.  We should
 restrict implementation specific key-value pairs (call them details,
 extended data, whatever) to information that is unique to the driver and
 would provide no useful information to the management server's
 orchestration functions.  A resource QoS does not seem to fit those
 criteria.
 
>>> 
>>> I wonder if this would be a good discussion topic for Sunday's CS Collab
>>> Conf hack day that Joe just sent out an e-mail about?
>> 
>> Yes, it would -- I will put something in the wiki topic.  It will also be
>> part of my talk on Monday -- How to Run from Zombie which include some of
>> my opinions on the topic.
>> 
>>> 
>>> 
 
 Thanks,
 -John
 
 On Jun 13, 2013, at 5:44 PM, Mike Tutkowski <
>> mike.tutkow...@solidfire.

Bugs on Master

2013-06-14 Thread Will Stevens
11 days ago I pulled the master code into my branch.  Master was at:
48913679e80e50228b1bd4b3d17fe5245461626a

When I pulled, I had Egress firewall rules working perfectly.  After the
pull I now get the following error when trying to create Egress firewall
rules:
ERROR [cloud.api.ApiServer] (1784147987@qtp-213982037-11:) unhandled
exception executing api command: createEgressFirewallRule
java.lang.NullPointerException
at
com.cloud.network.firewall.FirewallManagerImpl.validateFirewallRule(FirewallManagerImpl.java:485)
at
com.cloud.network.firewall.FirewallManagerImpl.createFirewallRule(FirewallManagerImpl.java:191)
at
com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
at
com.cloud.network.firewall.FirewallManagerImpl.createEgressFirewallRule(FirewallManagerImpl.java:157)
at
org.apache.cloudstack.api.command.user.firewall.CreateEgressFirewallRuleCmd.create(CreateEgressFirewallRuleCmd.java:252)
at com.cloud.api.ApiDispatcher.dispatchCreateCmd(ApiDispatcher.java:101)
at com.cloud.api.ApiServer.queueCommand(ApiServer.java:471)
at com.cloud.api.ApiServer.handleRequest(ApiServer.java:367)
at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:304)
at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
at
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at
org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

---

So I merged in master this morning to see if that issue was fixed.  Now I
can not create a Network Service offering and select anything but Virtual
Router from any of the dropdowns for capabilities such as 'Firewall',
'Source NAT', etc...

There are no JS errors, the dropdown just sits and thinks about it for a
second and does not change away from Virtual Router.

So now I can't use my service provider at all, so my development is
completely stalled.

Ideas???

ws


Re: SRX Integration Issues.

2013-06-14 Thread Jayapal Reddy Uradi
Hi,

I am not sure about the error but please see the below example configuration 
and correct your configuration.


Example confirmation: 

> Public Interface: fe-0/0/4.52
> Private Interface: fe-0/0/1

fe-0/0/1 - private interface
fe-0/0/4.52 - public interface where my public network vlan id is 52.

Example commands:
set interfaces fe-0/0/1 description "Private network"
set interfaces fe-0/0/1 vlan-tagging

set interfaces fe-0/0/4 unit 52 vlan-id 52
set interfaces fe-0/0/4 unit 52 family inet filter input untrust

Thanks,
Jayapal

On 14-Jun-2013, at 9:42 PM, Sean Truman 
 wrote:

> All,
> 
> I am trying to add an SRX 100 to Cloud Stack and keep getting a "Illegal
> Group Reference"
> 
> Here is how I am trying to add the config.
> IP Address: 10.0.2.1
> Username: root
> Password: password
> Type: Juniper SRX Firewall
> Public Interface: fe-0/0/0.0
> Private Interface: fe-0/0/1.0
> Usage interface:
> Number of Retries: 2
> Timeout: 300
> Public network: untrust
> Private network: trust
> Capacity: 10
> 
> 
> 
> Here is my SRX configuration.
> 
> http://pastebin.com/nTVEM92p
> 
> 
> Here is the only logs I get from management-server.log
> 
> http://pastebin.com/pWB0Kbtu
> 
> Any help would be greatly appreciated.
> 
> v/r
> Sean



Re: SRX Integration Issues.

2013-06-14 Thread Sean Truman
I am using untagged VLAN on my public side. It's failing on the test.xml 
looking for trust group!

Sean 

On Jun 14, 2013, at 11:51 AM, Jayapal Reddy Uradi 
 wrote:

> Hi,
> 
> I am not sure about the error but please see the below example configuration 
> and correct your configuration.
> 
> 
> Example confirmation: 
> 
>> Public Interface: fe-0/0/4.52
>> Private Interface: fe-0/0/1
> 
> fe-0/0/1 - private interface
> fe-0/0/4.52 - public interface where my public network vlan id is 52.
> 
> Example commands:
> set interfaces fe-0/0/1 description "Private network"
> set interfaces fe-0/0/1 vlan-tagging
> 
> set interfaces fe-0/0/4 unit 52 vlan-id 52
> set interfaces fe-0/0/4 unit 52 family inet filter input untrust
> 
> Thanks,
> Jayapal
> 
> On 14-Jun-2013, at 9:42 PM, Sean Truman 
> wrote:
> 
>> All,
>> 
>> I am trying to add an SRX 100 to Cloud Stack and keep getting a "Illegal
>> Group Reference"
>> 
>> Here is how I am trying to add the config.
>> IP Address: 10.0.2.1
>> Username: root
>> Password: password
>> Type: Juniper SRX Firewall
>> Public Interface: fe-0/0/0.0
>> Private Interface: fe-0/0/1.0
>> Usage interface:
>> Number of Retries: 2
>> Timeout: 300
>> Public network: untrust
>> Private network: trust
>> Capacity: 10
>> 
>> 
>> 
>> Here is my SRX configuration.
>> 
>> http://pastebin.com/nTVEM92p
>> 
>> 
>> Here is the only logs I get from management-server.log
>> 
>> http://pastebin.com/pWB0Kbtu
>> 
>> Any help would be greatly appreciated.
>> 
>> v/r
>> Sean
> 


Re: Object based Secondary storage.

2013-06-14 Thread Min Chen
HI Tom,

You can file JIRA ticket for object_store branch by prefixing your bug
with "Object_Store_Refactor" and mentioning that it is using build from
object_store. Here is an example bug filed from Sangeetha against
object_store branch build:
https://issues.apache.org/jira/browse/CLOUDSTACK-2528.
If you use devcloud for testing, you may run into an issue where ssvm
cannot access public url when you register a template, so register
template will fail. You may have to set up internal web server inside
devcloud and post template to be registered there to give a URL that
devcloud can access. We mainly used devcloud to run our TestNG automation
test earlier, and then switched to real hypervisor for real testing.
Thanks
-min

On 6/14/13 1:46 AM, "Thomas O'Dowd"  wrote:

>Edison,
>
>I've got devcloud running along with the object_store branch and I've
>finally been able to test a bit today.
>
>I found some issues (or things that I think are bugs) and would like to
>file a few issues. I know where the bug database is and I have an
>account but what is the best way to file bugs against this particular
>branch? I guess I can select "Future" as the version? What other way are
>feature branches usually identified in issues? Perhaps in the subject?
>Please let me know the preference.
>
>Also, can you describe (or point me at a document) what the best way to
>test against the object_store branch is? So far I have been doing the
>following but I'm not sure it is the best?
>
> a) setup devcloud.
> b) stop any instances on devcloud from previous runs
>  xe vm-shutdown --multiple
> c) check out and update the object_store branch.
> d) clean build as described in devcloud doc (ADIDD for short)
> e) deploydb (ADIDD)
> f) start management console (ADIDD) and wait for it.
> g) deploysvr (ADIDD) in another shell.
> h) on devcloud machine use xentop to wait for 2 vms to launch.
>(I'm not sure what the nfs vm is used for here??)
> i) login on gui -> infra -> secondary and remove nfs secondary storage
> j) add s3 secondary storage (using cache of old secondary storage?)
>
>Then rest of testing starts from here... (and also perhaps in step j)
>
>Thanks,
>
>Tom.
>-- 
>Cloudian KK - http://www.cloudian.com/get-started.html
>Fancy 100TB of full featured S3 Storage?
>Checkout the Cloudian® Community Edition!
>



Re: Hack Day at CloudStack Collaboration Conference

2013-06-14 Thread Joe Brockmeier
Hi Mike, 

On Thu, Jun 13, 2013, at 03:55 PM, Mike Tutkowski wrote:
> I was wondering if we have the following documentation (below). If not, I
> was thinking it might be a good session to discuss and start in (at a
> high level) on developing such documentation.
> 
> 1) Class diagrams highlighting the main classes that make up the Compute,
> Networking, and Storage components of CloudStack and how they relate to
> each other.
> 
> 2) Object-interaction diagrams showing how the main instances in the
> system coordinate execution of tasks.
> 
> 3) What kinds of threads are involved in the system (this will help
> developers better understand what resources are shared among threads and
> need to be locked at certain times)?

AFAIK, this doesn't exist currently. Would love to see this - so if you
want to take lead, I'm happy to attend to help facilitate and take down
notes, etc.

Best,

jzb
-- 
Joe Brockmeier
j...@zonker.net
Twitter: @jzb
http://www.dissociatedpress.net/


Re: Git question about applying patch files

2013-06-14 Thread Sheng Yang
Well, base on your situation, seems only thing you can do is use git-merge
to merge your branch back to master.

But since you're not a committer, so you cannot push (merged) master
directly. Some time people would create another remote repo and ask for
pull. But that's for very big changes mostly.

I think the clearest way right now, is:
1. on your branch: git diff master > patch_file
2. git checkout master
3. git checkout -b work_branch
4. patch -Np1 < patch_file
5. Do git-commit. Ensure all your modification is in the tree.

In step 4, you may want to separate your big patch to smaller ones to make
it easy for review/understanding.

And next time you can do git-rebase to keep the work branch clean and easy
to maintain.

--Sheng


On Thu, Jun 13, 2013 at 11:15 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> So...let's see...getting back to doing this now. :) I had to finish up
> implementing comments from a code review.
>
> Here is how I've been developing. Please let me know which option provided
> to me in this e-mail chain best fits my situation. I can, of course, do
> development in Git differently next release if it makes sense to change
> (perhaps using rebase instead of merge).
>
> Initially (as in right before I started developing code for 4.2), I got a
> fresh copy of the CS repo and then I created a branch off of master called
> solidfire_plugin.
>
> I did my development work in this branch.
>
> Every now and then (like weekly), I performed another fetch from the CS
> repo and merged its master (what I call upstream/master) into
> solidfire_plugin. I've probably performed about four of five such merges
> during my development.
>
> Thanks for the advice! :)
>
>
> On Thu, Jun 13, 2013 at 10:45 AM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
>
> > Thanks, everyone!
> >
> > Once I finish up implementing review suggestions, I can try again with
> > building a squashed patch file.
> >
> >
> > On Thu, Jun 13, 2013 at 8:41 AM, John Burwell 
> wrote:
> >
> >> Prasanna,
> >>
> >> +1 to using rebase on feature branches.
> >>
> >> At least as I understand things and have experienced rebase, it
> >> preserves  all commits on the feature branch.  For Review Board and
> >> master merges, those commits need to be collapsed, or in git parlance,
> >> squashed.  The script I referenced below squashes the commits and
> >> works regardless of whether you have been using rebase or merge on
> >> your feature branch.
> >>
> >> Thanks,
> >> -John
> >>
> >> On Jun 13, 2013, at 2:01 AM, Prasanna Santhanam  wrote:
> >>
> >> > The 'cleanest cleanest' way is to use rebase as Sheng recommends but I
> >> > know people who've used git successfully with just doing merge. It's
> >> > (rebase) one of those features of git you discover only after
> >> > using-abusing it for long enough. But if you're adventurous ..  :)
> >> >
> >> > Do NOT do a rebase if you've done merges until now on your branch.
> >> >
> >> > Here's a nice post explaining how to work with rebase for those
> >> > hesitant to use it:
> >> > http://mettadore.com/analysis/a-simple-git-rebase-workflow-explained/
> >> >
> >> > --
> >> > Prasanna.,
> >> >
> >> > On Thu, Jun 13, 2013 at 01:50:15AM -0400, John Burwell wrote:
> >> >> Mike,
> >> >>
> >> >> The cleanest way have found to create these patches is  actually
> >> >> create a temporary work branch from master, merge the feature branch
> >> >> into it with the squashed option, and then generate the patch.  This
> >> >> gist (https://gist.github.com/jburwell/5771480) is the shell script
> >> >> I used to generate the S3-backed Secondary Storage patch submissions
> >> >> to Review Board.  It should be fairly easy to adapt by adjusting the
> >> >> FEATURE_BRANCH and WORK_HOME values.
> >> >>
> >> >> Thanks,
> >> >> -John
> >> >>
> >> >> On Jun 12, 2013, at 6:25 PM, Mike Tutkowski <
> >> mike.tutkow...@solidfire.com> wrote:
> >> >>
> >> >>> I have a branch, solidfire_plugin, off of master in my local repo.
> >> >>>
> >> >>> I wanted to submit a patch to Review Board.
> >> >>>
> >> >>> Essentially, I followed these steps (where upstream is the official
> CS
> >> >>> repo):
> >> >>>
> >> >>> git checkout master
> >> >>>
> >> >>> git fetch upstream
> >> >>>
> >> >>> git reset --hard upstream/master
> >> >>>
> >> >>> git checkout solidfire_plugin
> >> >>>
> >> >>> git merge master
> >> >>>
> >> >>> git format-patch master --stdout > solidfire_plugin.patch (this
> >> collected
> >> >>> six commits worth of work)
> >> >>>
> >> >>> git checkout master
> >> >>>
> >> >>> git am solidfire_plugin.patch
> >> >>> This final command lead to this error message (below). I was
> surprised
> >> >>> because I had just performed a merge from master to solidfire_plugin
> >> before
> >> >>> generating the patch file (so I was thinking the patch file should
> >> cleanly
> >> >>> apply on master).
> >> >>>
> >> >>> Any thoughts on this?
> >> >>>
> >> >>> Thanks!
> >> >>>
> >> >>> mtutkowski-lt:cloudstack mtutkowski

Re: Hack Day at CloudStack Collaboration Conference

2013-06-14 Thread Joe Brockmeier
On Fri, Jun 14, 2013, at 03:46 AM, Daan Hoogland wrote:
> I added 'secondary storage maintenance mode'
>  as a session. I don't mind to take ideas in advance!

Thanks Daan! Look forward to meeting you at the conference! 

Best,

jzb
-- 
Joe Brockmeier
j...@zonker.net
Twitter: @jzb
http://www.dissociatedpress.net/


Re: tests involving deploying a vm fail

2013-06-14 Thread Shane Witbeck
HI Prachi, 

I've attached the management server log file. 

Thanks, 
Shane


On Thursday, June 13, 2013 at 1:59 PM, Prachi Damle wrote:

> Shane,
> 
> Can you share the management server log file?
> 
> Prachi
> 
> -Original Message-
> From: Shane Witbeck [mailto:sh...@digitalsanctum.com] 
> Sent: Thursday, June 13, 2013 9:18 AM
> To: dev@cloudstack.apache.org (mailto:dev@cloudstack.apache.org)
> Subject: tests involving deploying a vm fail
> 
> Hi all, 
> 
> I'm attempting to run the following:
> 
> mvn -Pdeveloper,marvin.test -Dmarvin.config=setup/dev/advanced.cfg -pl 
> :cloud-marvin integration-test
> 
> from instructions [1]. It seems all tests involving deploying a VM fail for 
> me [2]. I've also tried running the /smoke/test_deploy_vm.py test in 
> isolation and I get the following:
> 
> cloudstackAPIException: Execute cmd: asyncquery failed, due to: {errorcode : 
> 533, errortext : u'Unable to create a deployment for VM[User|testvm]'}
> 
> I've also noticed that if I try manually deploying a VM, it will fail unless 
> I manually create a "local" disk offering instead of the "shared" types that 
> are available by default.
> 
> I'm running the management server on my mac and using devcloud2 running on 
> vbox. This is against latest 4.2-snapshot code.
> 
> Does someone have any pointers on why deploying a VM fails for me? 
> 
> 
> Thanks, 
> Shane
> 
> 
> [1] 
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Marvin+-+Testing+with+Python#Marvin-TestingwithPython-
> [2] https://gist.github.com/digitalsanctum/5774980
> 
> 




Re: tests involving deploying a vm fail

2013-06-14 Thread Chip Childers
Unfortunately, attachments are dropped on this list.  You'll need to
use paste bin or something and provide a URL.

On Fri, Jun 14, 2013 at 1:10 PM, Shane Witbeck  wrote:
> HI Prachi,
>
> I've attached the management server log file.
>
> Thanks,
> Shane
>
> On Thursday, June 13, 2013 at 1:59 PM, Prachi Damle wrote:
>
> Shane,
>
> Can you share the management server log file?
>
> Prachi
>
> -Original Message-
> From: Shane Witbeck [mailto:sh...@digitalsanctum.com]
> Sent: Thursday, June 13, 2013 9:18 AM
> To: dev@cloudstack.apache.org
> Subject: tests involving deploying a vm fail
>
> Hi all,
>
> I'm attempting to run the following:
>
> mvn -Pdeveloper,marvin.test -Dmarvin.config=setup/dev/advanced.cfg -pl
> :cloud-marvin integration-test
>
> from instructions [1]. It seems all tests involving deploying a VM fail for
> me [2]. I've also tried running the /smoke/test_deploy_vm.py test in
> isolation and I get the following:
>
> cloudstackAPIException: Execute cmd: asyncquery failed, due to: {errorcode :
> 533, errortext : u'Unable to create a deployment for VM[User|testvm]'}
>
> I've also noticed that if I try manually deploying a VM, it will fail unless
> I manually create a "local" disk offering instead of the "shared" types that
> are available by default.
>
> I'm running the management server on my mac and using devcloud2 running on
> vbox. This is against latest 4.2-snapshot code.
>
> Does someone have any pointers on why deploying a VM fails for me?
>
>
> Thanks,
> Shane
>
>
> [1]
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Marvin+-+Testing+with+Python#Marvin-TestingwithPython-
> [2] https://gist.github.com/digitalsanctum/5774980
>
>


Re: Bugs on Master

2013-06-14 Thread Chiradeep Vittal
Are you able to use CloudMonkey? Perhaps it is a UI issue?

On 6/14/13 9:50 AM, "Will Stevens"  wrote:

>11 days ago I pulled the master code into my branch.  Master was at:
>48913679e80e50228b1bd4b3d17fe5245461626a
>
>When I pulled, I had Egress firewall rules working perfectly.  After the
>pull I now get the following error when trying to create Egress firewall
>rules:
>ERROR [cloud.api.ApiServer] (1784147987@qtp-213982037-11:) unhandled
>exception executing api command: createEgressFirewallRule
>java.lang.NullPointerException
>at
>com.cloud.network.firewall.FirewallManagerImpl.validateFirewallRule(Firewa
>llManagerImpl.java:485)
>at
>com.cloud.network.firewall.FirewallManagerImpl.createFirewallRule(Firewall
>ManagerImpl.java:191)
>at
>com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorD
>ispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
>at
>com.cloud.network.firewall.FirewallManagerImpl.createEgressFirewallRule(Fi
>rewallManagerImpl.java:157)
>at
>org.apache.cloudstack.api.command.user.firewall.CreateEgressFirewallRuleCm
>d.create(CreateEgressFirewallRuleCmd.java:252)
>at com.cloud.api.ApiDispatcher.dispatchCreateCmd(ApiDispatcher.java:101)
>at com.cloud.api.ApiServer.queueCommand(ApiServer.java:471)
>at com.cloud.api.ApiServer.handleRequest(ApiServer.java:367)
>at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:304)
>at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
>at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>at 
>org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
>at
>org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216
>)
>at 
>org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>at 
>org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>at
>org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCo
>llection.java:230)
>at
>org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:
>114)
>at 
>org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>at org.mortbay.jetty.Server.handle(Server.java:326)
>at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>at
>org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnect
>ion.java:928)
>at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>at
>org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:41
>0)
>at
>org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:5
>82)
>
>---
>
>So I merged in master this morning to see if that issue was fixed.  Now I
>can not create a Network Service offering and select anything but Virtual
>Router from any of the dropdowns for capabilities such as 'Firewall',
>'Source NAT', etc...
>
>There are no JS errors, the dropdown just sits and thinks about it for a
>second and does not change away from Virtual Router.
>
>So now I can't use my service provider at all, so my development is
>completely stalled.
>
>Ideas???
>
>ws



Re: Automation analysis improvement

2013-06-14 Thread Chiradeep Vittal
+1

On 6/14/13 8:54 AM, "Rayees Namathponnan" 
wrote:

>Many of the automation test cases are not tearing down the  account
>properly; due to this resources are not getting released and followed
>test cases getting failed during VM deployment itself.
>
>During automation run accounts are created with random number without any
>reference for test case (eg : test-N5QD8N), and it's hard to identify
>which test case not tearing down the account after complete the test.
>
>Here my suggestion; we should create account name with test case name
>(eg : test- VPCOffering-N5QD8N)
>
>Any thoughts ?
>
>Regards,
>Rayees



Re: Hack Day at CloudStack Collaboration Conference

2013-06-14 Thread Mike Tutkowski
Sounds good

I will update the Wiki.


On Fri, Jun 14, 2013 at 10:58 AM, Joe Brockmeier  wrote:

> Hi Mike,
>
> On Thu, Jun 13, 2013, at 03:55 PM, Mike Tutkowski wrote:
> > I was wondering if we have the following documentation (below). If not, I
> > was thinking it might be a good session to discuss and start in (at a
> > high level) on developing such documentation.
> >
> > 1) Class diagrams highlighting the main classes that make up the Compute,
> > Networking, and Storage components of CloudStack and how they relate to
> > each other.
> >
> > 2) Object-interaction diagrams showing how the main instances in the
> > system coordinate execution of tasks.
> >
> > 3) What kinds of threads are involved in the system (this will help
> > developers better understand what resources are shared among threads and
> > need to be locked at certain times)?
>
> AFAIK, this doesn't exist currently. Would love to see this - so if you
> want to take lead, I'm happy to attend to help facilitate and take down
> notes, etc.
>
> Best,
>
> jzb
> --
> Joe Brockmeier
> j...@zonker.net
> Twitter: @jzb
> http://www.dissociatedpress.net/
>



-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkow...@solidfire.com
o: 303.746.7302
Advancing the way the world uses the
cloud
*™*


Re: Master Branch problem cannot login

2013-06-14 Thread Chiradeep Vittal
The column is added by the upgrade script setup/db/db/schema-410to420.sql
Since 4.1 the schema is always 4.0 + upgrade scripts, I believe.

On 6/13/13 11:36 PM, "Soheil Eizadi"  wrote:

>I synced today, initialized the database and built image and found that I
>could not login to the CS UI. Not sure exactly what is going on but looks
>like the default field for Account Table was not created, when I tried to
>manually create it I got an error. The problem corrected itself when I
>ran the deployDB mvn build a second time after I had created the field
>manually. I can now log into the CS UI, but I thought I would document it
>here.
>-Soheil
>
>Below is more detail log.
>Administrators-MacBook-Pro-7:cloudstack seizadi$ git branch
>* infoblox
>  master
>Administrators-MacBook-Pro-7:cloudstack seizadi$ git merge master
>Already up-to-date.
>
>From AccountVO.java:
>
>@Entity
>
>@Table(name="account")
>
>public class AccountVO implements Account {
>
>@Id
>
>@GeneratedValue(strategy=GenerationType.IDENTITY)
>
>@Column(name="id")
>
>private long id;
>
>
>.
>
>
>
>@Column(name = "default")
>
>boolean isDefault;
>
>From create-schema.sql I don't see any field default defined:
>
>CREATE TABLE  `cloud`.`account` (
>  `id` bigint unsigned NOT NULL auto_increment,
>  `account_name` varchar(100) COMMENT 'an account name set by the creator
>of the account, defaults to username for single accounts',
>  `uuid` varchar(40),
>  `type` int(1) unsigned NOT NULL,
>  `domain_id` bigint unsigned,
>  `state` varchar(10) NOT NULL default 'enabled',
>  `removed` datetime COMMENT 'date removed',
>  `cleanup_needed` tinyint(1) NOT NULL default '0',
>  `network_domain` varchar(255),
>  `default_zone_id` bigint unsigned,
>  PRIMARY KEY  (`id`),
>  INDEX i_account__removed(`removed`),
>  CONSTRAINT `fk_account__default_zone_id` FOREIGN KEY
>`fk_account__default_zone_id`(`default_zone_id`) REFERENCES
>`data_center`(`id`) ON DELETE CASCADE,
>  INDEX `i_account__cleanup_needed`(`cleanup_needed`),
>  INDEX `i_account__account_name__domain_id__removed`(`account_name`,
>`domain_id`, `removed`),
>  CONSTRAINT `fk_account__domain_id` FOREIGN KEY(`domain_id`) REFERENCES
>`domain` (`id`),
>  INDEX `i_account__domain_id`(`domain_id`),
>  CONSTRAINT `uc_account__uuid` UNIQUE (`uuid`)
>) ENGINE=InnoDB DEFAULT CHARSET=utf8;
>
>mysql> select * from account;
>++--+--+--+---
>+-+-+++---
>--+
>| id | account_name | uuid | type |
>domain_id | state   | removed | cleanup_needed | network_domain |
>default_zone_id |
>++--+--+--+---
>+-+-+++---
>--+
>|  1 | system   | 401c6676-d1f2-11e2-a780-ee6f6199dc83 |1 |
>  1 | enabled | NULL|  0 | NULL   |
>NULL |
>|  2 | admin| 401c74fe-d1f2-11e2-a780-ee6f6199dc83 |1 |
>  1 | enabled | NULL|  0 | NULL   |
>NULL |
>++--+--+--+---
>+-+-+++---
>--+
>2 rows in set (0.00 sec)
>
>
>ERROR [cloud.api.ApiServlet] (2071009240@qtp-761108485-3:) unknown
>exception writing api response
>com.cloud.utils.exception.CloudRuntimeException: DB Exception on:
>com.mysql.jdbc.PreparedStatement@52c0152d: SELECT account.id,
>account.account_name, account.type, account.domain_id, account.state,
>account.removed, account.cleanup_needed, account.network_domain,
>account.uuid, account.default_zone_id, account.default FROM account WHERE
>account.id = 2
>at com.cloud.utils.db.GenericDaoBase.findById(GenericDaoBase.java:979)
>at 
>com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorD
>ispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
>at 
>com.cloud.utils.db.GenericDaoBase.findByIdIncludingRemoved(GenericDaoBase.
>java:939)
>at 
>com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorD
>ispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
>at 
>com.cloud.user.AccountManagerImpl.getAccount(AccountManagerImpl.java:1632)
>at com.cloud.api.ApiServer.loginUser(ApiServer.java:808)
>at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:210)
>at com.cloud.api.ApiServlet.doPost(ApiServlet.java:71)
>at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
>at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>at 
>org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
>at 
>org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216
>)
>at 
>org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>at 
>org.mortbay.jetty.handler.ContextHandler.handle(Contex

Re: [MERGE] disk_io_throttling to MASTER

2013-06-14 Thread Mike Tutkowski
Hi John,

Are you thinking we add a column on to the storage pool table, IOPS_Count,
where we add and subtract committed IOPS?

That is easy enough.

How do you want to determine what the SAN is capable of supporting IOPS
wise? Remember we're dealing with a dynamic SAN here...as you add storage
nodes to the cluster, the number of IOPS increases. Do we have a thread we
can use to query external devices like this SAN to update the supported
number of IOPS?

Also, how do you want to enforce the IOPS limit? Do we pass in an
overcommit ration to the plug-in when it's created? We would need to store
this in the storage_pool table, as well, I believe.

We should also get Wei involved in this as his feature will need similar
functionality.

Also, we should do this FAST as we have only two weeks left and many of us
will be out for several days for the CS Collab Conference.

Thanks


On Fri, Jun 14, 2013 at 10:46 AM, John Burwell  wrote:

> Mike,
>
> Querying the SAN only indicates the number of IOPS currently in use.  The
> allocator needs to check the number of IOPS committed which is tracked by
> CloudStack.  For 4.2, we should be able to query the number of IOPS
> committed to a DataStore, and determine whether or not the number requested
> can be fulfilled by that device.  It seems to be that a
> DataStore#getCommittedIOPS() : Long method would be sufficient.
>  DataStore's that don't support provisioned IOPS would return null.
>
> As I mentioned previously, I am very reluctant for any feature to come
> into master that can exhaust resources.
>
> Thanks,
> -John
>
> On Jun 13, 2013, at 9:27 PM, Mike Tutkowski 
> wrote:
>
> > Yeah, I'm not sure I could come up with anything near an accurate
> > assessment of how many IOPS are currently available on the SAN (or even a
> > total number that are available for volumes). Not sure if there's yet an
> > API call for that.
> >
> > If I did know this number (total number of IOPS supported by the SAN),
> we'd
> > still have to keep track of the total number of volumes we've created
> from
> > CS on the SAN in terms of their IOPS. Also, if an admin issues an API
> call
> > directly to the SAN to tweak the number of IOPS on a given volume or set
> of
> > volumes (not supported from CS, but supported via the SolidFire API), our
> > numbers in CS would be off.
> >
> > I'm thinking verifying sufficient number of IOPS is a really good idea
> for
> > a future release.
> >
> > For 4.2 I think we should stick to having the allocator detect if storage
> > QoS is desired and if the storage pool in question supports that feature.
> >
> > If you really are over provisioned on your SAN in terms of IOPS or
> > capacity, the SAN can let the admin know in several different ways
> (e-mail,
> > SNMP, GUI).
> >
> >
> > On Thu, Jun 13, 2013 at 7:02 PM, John Burwell 
> wrote:
> >
> >> Mike,
> >>
> >> Please see my comments in-line below.
> >>
> >> Thanks,
> >> -John
> >>
> >> On Jun 13, 2013, at 6:09 PM, Mike Tutkowski <
> mike.tutkow...@solidfire.com>
> >> wrote:
> >>
> >>> Comments below in red.
> >>>
> >>> Thanks
> >>>
> >>>
> >>> On Thu, Jun 13, 2013 at 3:58 PM, John Burwell 
> >> wrote:
> >>>
>  Mike,
> 
>  Overall, I agree with the steps to below for 4.2.  However, we may
> want
> >> to
>  throw an exception if we can not fulfill a requested QoS.  If the user
> >> is
>  expecting that the hypervisor will provide a particular QoS, and that
> is
>  not possible, it seems like we should inform them rather than silently
>  ignoring their request.
> 
> >>>
> >>> Sure, that sounds reasonable.
> >>>
> >>> We'd have to come up with some way for the allocators to know about the
> >>> requested storage QoS and then query the candidate drivers.
> >>>
> >>> Any thoughts on how we might do that?
> >>>
> >>>
> 
>  To collect my thoughts from previous parts of the thread, I am
>  uncomfortable with the idea that the management server can overcommit
> a
>  resource.  You had mentioned querying the device for available IOPS.
> >> While
>  that would be nice, it seems like we could fall back to a max IOPS and
>  overcommit factor manually calculated and entered by the
>  administrator/operator.  I think such threshold and allocation rails
> >> should
>  be added for both provisioned IOPS and throttled I/O -- it is a basic
>  feature of any cloud orchestration platform.
> 
> >>>
> >>> Are you thinking this ability would make it into 4.2? Just curious what
> >>> release we're talking about here. For the SolidFire SAN, you might
> have,
> >>> say, four separate storage nodes to start (200,000 IOPS) and then later
> >> add
> >>> a new node (now you're at 250,000 IOPS). CS would have to have a way to
> >>> know that the number of supported IOPS has increased.
> >>
> >> Yes, I think we need some *basic*/conservative rails in 4.2.  For
> example,
> >> we may only support expanding capacity in 4.2, and not handle any
> >> re-balance scenarios

Re: [MERGE] disk_io_throttling to MASTER

2013-06-14 Thread Mike Tutkowski
"As I mentioned previously, I am very reluctant for any feature to come
into master that can exhaust resources."

Just wanted to mention that, worst case, the SAN would fail creation of the
volume before allowing a new volume to break the system.


On Fri, Jun 14, 2013 at 11:35 AM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> Hi John,
>
> Are you thinking we add a column on to the storage pool table, IOPS_Count,
> where we add and subtract committed IOPS?
>
> That is easy enough.
>
> How do you want to determine what the SAN is capable of supporting IOPS
> wise? Remember we're dealing with a dynamic SAN here...as you add storage
> nodes to the cluster, the number of IOPS increases. Do we have a thread we
> can use to query external devices like this SAN to update the supported
> number of IOPS?
>
> Also, how do you want to enforce the IOPS limit? Do we pass in an
> overcommit ration to the plug-in when it's created? We would need to store
> this in the storage_pool table, as well, I believe.
>
> We should also get Wei involved in this as his feature will need similar
> functionality.
>
> Also, we should do this FAST as we have only two weeks left and many of us
> will be out for several days for the CS Collab Conference.
>
> Thanks
>
>
> On Fri, Jun 14, 2013 at 10:46 AM, John Burwell  wrote:
>
>> Mike,
>>
>> Querying the SAN only indicates the number of IOPS currently in use.  The
>> allocator needs to check the number of IOPS committed which is tracked by
>> CloudStack.  For 4.2, we should be able to query the number of IOPS
>> committed to a DataStore, and determine whether or not the number requested
>> can be fulfilled by that device.  It seems to be that a
>> DataStore#getCommittedIOPS() : Long method would be sufficient.
>>  DataStore's that don't support provisioned IOPS would return null.
>>
>> As I mentioned previously, I am very reluctant for any feature to come
>> into master that can exhaust resources.
>>
>> Thanks,
>> -John
>>
>> On Jun 13, 2013, at 9:27 PM, Mike Tutkowski 
>> wrote:
>>
>> > Yeah, I'm not sure I could come up with anything near an accurate
>> > assessment of how many IOPS are currently available on the SAN (or even
>> a
>> > total number that are available for volumes). Not sure if there's yet an
>> > API call for that.
>> >
>> > If I did know this number (total number of IOPS supported by the SAN),
>> we'd
>> > still have to keep track of the total number of volumes we've created
>> from
>> > CS on the SAN in terms of their IOPS. Also, if an admin issues an API
>> call
>> > directly to the SAN to tweak the number of IOPS on a given volume or
>> set of
>> > volumes (not supported from CS, but supported via the SolidFire API),
>> our
>> > numbers in CS would be off.
>> >
>> > I'm thinking verifying sufficient number of IOPS is a really good idea
>> for
>> > a future release.
>> >
>> > For 4.2 I think we should stick to having the allocator detect if
>> storage
>> > QoS is desired and if the storage pool in question supports that
>> feature.
>> >
>> > If you really are over provisioned on your SAN in terms of IOPS or
>> > capacity, the SAN can let the admin know in several different ways
>> (e-mail,
>> > SNMP, GUI).
>> >
>> >
>> > On Thu, Jun 13, 2013 at 7:02 PM, John Burwell 
>> wrote:
>> >
>> >> Mike,
>> >>
>> >> Please see my comments in-line below.
>> >>
>> >> Thanks,
>> >> -John
>> >>
>> >> On Jun 13, 2013, at 6:09 PM, Mike Tutkowski <
>> mike.tutkow...@solidfire.com>
>> >> wrote:
>> >>
>> >>> Comments below in red.
>> >>>
>> >>> Thanks
>> >>>
>> >>>
>> >>> On Thu, Jun 13, 2013 at 3:58 PM, John Burwell 
>> >> wrote:
>> >>>
>>  Mike,
>> 
>>  Overall, I agree with the steps to below for 4.2.  However, we may
>> want
>> >> to
>>  throw an exception if we can not fulfill a requested QoS.  If the
>> user
>> >> is
>>  expecting that the hypervisor will provide a particular QoS, and
>> that is
>>  not possible, it seems like we should inform them rather than
>> silently
>>  ignoring their request.
>> 
>> >>>
>> >>> Sure, that sounds reasonable.
>> >>>
>> >>> We'd have to come up with some way for the allocators to know about
>> the
>> >>> requested storage QoS and then query the candidate drivers.
>> >>>
>> >>> Any thoughts on how we might do that?
>> >>>
>> >>>
>> 
>>  To collect my thoughts from previous parts of the thread, I am
>>  uncomfortable with the idea that the management server can
>> overcommit a
>>  resource.  You had mentioned querying the device for available IOPS.
>> >> While
>>  that would be nice, it seems like we could fall back to a max IOPS
>> and
>>  overcommit factor manually calculated and entered by the
>>  administrator/operator.  I think such threshold and allocation rails
>> >> should
>>  be added for both provisioned IOPS and throttled I/O -- it is a basic
>>  feature of any cloud orchestration platform.
>> 
>> >>>
>> >>> Are you thinking this ability would make it int

Re: SRX Integration Issues.

2013-06-14 Thread Sheng Yang
It looks like a string issue of Java itself. What exactly failed on
test.xml?

--Sheng


On Fri, Jun 14, 2013 at 9:55 AM, Sean Truman  wrote:

> I am using untagged VLAN on my public side. It's failing on the test.xml
> looking for trust group!
>
> Sean
>
> On Jun 14, 2013, at 11:51 AM, Jayapal Reddy Uradi <
> jayapalreddy.ur...@citrix.com> wrote:
>
> > Hi,
> >
> > I am not sure about the error but please see the below example
> configuration and correct your configuration.
> >
> >
> > Example confirmation:
> >
> >> Public Interface: fe-0/0/4.52
> >> Private Interface: fe-0/0/1
> >
> > fe-0/0/1 - private interface
> > fe-0/0/4.52 - public interface where my public network vlan id is 52.
> >
> > Example commands:
> > set interfaces fe-0/0/1 description "Private network"
> > set interfaces fe-0/0/1 vlan-tagging
> >
> > set interfaces fe-0/0/4 unit 52 vlan-id 52
> > set interfaces fe-0/0/4 unit 52 family inet filter input untrust
> >
> > Thanks,
> > Jayapal
> >
> > On 14-Jun-2013, at 9:42 PM, Sean Truman 
> > wrote:
> >
> >> All,
> >>
> >> I am trying to add an SRX 100 to Cloud Stack and keep getting a "Illegal
> >> Group Reference"
> >>
> >> Here is how I am trying to add the config.
> >> IP Address: 10.0.2.1
> >> Username: root
> >> Password: password
> >> Type: Juniper SRX Firewall
> >> Public Interface: fe-0/0/0.0
> >> Private Interface: fe-0/0/1.0
> >> Usage interface:
> >> Number of Retries: 2
> >> Timeout: 300
> >> Public network: untrust
> >> Private network: trust
> >> Capacity: 10
> >>
> >>
> >>
> >> Here is my SRX configuration.
> >>
> >> http://pastebin.com/nTVEM92p
> >>
> >>
> >> Here is the only logs I get from management-server.log
> >>
> >> http://pastebin.com/pWB0Kbtu
> >>
> >> Any help would be greatly appreciated.
> >>
> >> v/r
> >> Sean
> >
>


Re: Bugs on Master

2013-06-14 Thread Will Stevens
I will try that.  I am doing some testing right now.  I am compiling and
running just master now to validate everything.

I will be in touch when I have more details...

ws


On Fri, Jun 14, 2013 at 1:20 PM, Chiradeep Vittal <
chiradeep.vit...@citrix.com> wrote:

> Are you able to use CloudMonkey? Perhaps it is a UI issue?
>
> On 6/14/13 9:50 AM, "Will Stevens"  wrote:
>
> >11 days ago I pulled the master code into my branch.  Master was at:
> >48913679e80e50228b1bd4b3d17fe5245461626a
> >
> >When I pulled, I had Egress firewall rules working perfectly.  After the
> >pull I now get the following error when trying to create Egress firewall
> >rules:
> >ERROR [cloud.api.ApiServer] (1784147987@qtp-213982037-11:) unhandled
> >exception executing api command: createEgressFirewallRule
> >java.lang.NullPointerException
> >at
> >com.cloud.network.firewall.FirewallManagerImpl.validateFirewallRule(Firewa
> >llManagerImpl.java:485)
> >at
> >com.cloud.network.firewall.FirewallManagerImpl.createFirewallRule(Firewall
> >ManagerImpl.java:191)
> >at
> >com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorD
> >ispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
> >at
> >com.cloud.network.firewall.FirewallManagerImpl.createEgressFirewallRule(Fi
> >rewallManagerImpl.java:157)
> >at
> >org.apache.cloudstack.api.command.user.firewall.CreateEgressFirewallRuleCm
> >d.create(CreateEgressFirewallRuleCmd.java:252)
> >at com.cloud.api.ApiDispatcher.dispatchCreateCmd(ApiDispatcher.java:101)
> >at com.cloud.api.ApiServer.queueCommand(ApiServer.java:471)
> >at com.cloud.api.ApiServer.handleRequest(ApiServer.java:367)
> >at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:304)
> >at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
> >at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
> >at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
> >at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
> >at
> >org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
> >at
> >org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216
> >)
> >at
> >org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> >at
> >org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> >at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> >at
> >org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCo
> >llection.java:230)
> >at
> >org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:
> >114)
> >at
> >org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> >at org.mortbay.jetty.Server.handle(Server.java:326)
> >at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
> >at
> >org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnect
> >ion.java:928)
> >at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
> >at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
> >at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
> >at
> >org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:41
> >0)
> >at
> >org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:5
> >82)
> >
> >---
> >
> >So I merged in master this morning to see if that issue was fixed.  Now I
> >can not create a Network Service offering and select anything but Virtual
> >Router from any of the dropdowns for capabilities such as 'Firewall',
> >'Source NAT', etc...
> >
> >There are no JS errors, the dropdown just sits and thinks about it for a
> >second and does not change away from Virtual Router.
> >
> >So now I can't use my service provider at all, so my development is
> >completely stalled.
> >
> >Ideas???
> >
> >ws
>
>


Re: SRX Integration Issues.

2013-06-14 Thread Sean Truman
Looking up the trust group, I am not seeing any exceptions in the log files.

Sean

On Jun 14, 2013, at 12:54 PM, Sheng Yang  wrote:

> It looks like a string issue of Java itself. What exactly failed on
> test.xml?
> 
> --Sheng
> 
> 
> On Fri, Jun 14, 2013 at 9:55 AM, Sean Truman  wrote:
> 
>> I am using untagged VLAN on my public side. It's failing on the test.xml
>> looking for trust group!
>> 
>> Sean
>> 
>> On Jun 14, 2013, at 11:51 AM, Jayapal Reddy Uradi <
>> jayapalreddy.ur...@citrix.com> wrote:
>> 
>>> Hi,
>>> 
>>> I am not sure about the error but please see the below example
>> configuration and correct your configuration.
>>> 
>>> 
>>> Example confirmation:
>>> 
 Public Interface: fe-0/0/4.52
 Private Interface: fe-0/0/1
>>> 
>>> fe-0/0/1 - private interface
>>> fe-0/0/4.52 - public interface where my public network vlan id is 52.
>>> 
>>> Example commands:
>>> set interfaces fe-0/0/1 description "Private network"
>>> set interfaces fe-0/0/1 vlan-tagging
>>> 
>>> set interfaces fe-0/0/4 unit 52 vlan-id 52
>>> set interfaces fe-0/0/4 unit 52 family inet filter input untrust
>>> 
>>> Thanks,
>>> Jayapal
>>> 
>>> On 14-Jun-2013, at 9:42 PM, Sean Truman 
>>> wrote:
>>> 
 All,
 
 I am trying to add an SRX 100 to Cloud Stack and keep getting a "Illegal
 Group Reference"
 
 Here is how I am trying to add the config.
 IP Address: 10.0.2.1
 Username: root
 Password: password
 Type: Juniper SRX Firewall
 Public Interface: fe-0/0/0.0
 Private Interface: fe-0/0/1.0
 Usage interface:
 Number of Retries: 2
 Timeout: 300
 Public network: untrust
 Private network: trust
 Capacity: 10
 
 
 
 Here is my SRX configuration.
 
 http://pastebin.com/nTVEM92p
 
 
 Here is the only logs I get from management-server.log
 
 http://pastebin.com/pWB0Kbtu
 
 Any help would be greatly appreciated.
 
 v/r
 Sean
>> 


Re: Test halting build every now and then

2013-06-14 Thread Shane Witbeck
I actually opened an issue for this: 

https://issues.apache.org/jira/browse/CLOUDSTACK-2863 

I've seen the failure using both wireless and ether on Mac OS X (10.8.4) from 
Terminal.


Thanks, 
Shane


On Friday, June 14, 2013 at 10:36 AM, Daan Hoogland wrote:

> localhost 



Re: [MERGE] disk_io_throttling to MASTER

2013-06-14 Thread Mike Tutkowski
1) We want number of IOPS currently supported by the SAN.

2) We want the number of IOPS that are committed (sum of min IOPS for each
volume).

We could do the following to keep track of IOPS:

The plug-in could have a timer thread that goes off every, say, 1 minute.

It could query the SAN for the number of nodes that make up the SAN and
multiple this by 50,000. This is essentially the number of supported IOPS
of the SAN.

The next API call could be to get all of the volumes on the SAN. Iterate
through them all and add up their min IOPS values. This is the number of
IOPS the SAN is committed to.

These two numbers can then be updated in the storage_pool table (a column
for each value).

The allocators can get these values as needed (and they would be as
accurate as the last time the thread asked the SAN for this info).

These two fields, the min IOPS of the volume to create, and the overcommit
ratio of the plug-in would tell the allocator if it can select the given
storage pool.

What do you think?


On Fri, Jun 14, 2013 at 11:45 AM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> "As I mentioned previously, I am very reluctant for any feature to come
> into master that can exhaust resources."
>
> Just wanted to mention that, worst case, the SAN would fail creation of
> the volume before allowing a new volume to break the system.
>
>
> On Fri, Jun 14, 2013 at 11:35 AM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
>
>> Hi John,
>>
>> Are you thinking we add a column on to the storage pool table,
>> IOPS_Count, where we add and subtract committed IOPS?
>>
>> That is easy enough.
>>
>> How do you want to determine what the SAN is capable of supporting IOPS
>> wise? Remember we're dealing with a dynamic SAN here...as you add storage
>> nodes to the cluster, the number of IOPS increases. Do we have a thread we
>> can use to query external devices like this SAN to update the supported
>> number of IOPS?
>>
>> Also, how do you want to enforce the IOPS limit? Do we pass in an
>> overcommit ration to the plug-in when it's created? We would need to store
>> this in the storage_pool table, as well, I believe.
>>
>> We should also get Wei involved in this as his feature will need similar
>> functionality.
>>
>> Also, we should do this FAST as we have only two weeks left and many of
>> us will be out for several days for the CS Collab Conference.
>>
>> Thanks
>>
>>
>> On Fri, Jun 14, 2013 at 10:46 AM, John Burwell wrote:
>>
>>> Mike,
>>>
>>> Querying the SAN only indicates the number of IOPS currently in use.
>>>  The allocator needs to check the number of IOPS committed which is tracked
>>> by CloudStack.  For 4.2, we should be able to query the number of IOPS
>>> committed to a DataStore, and determine whether or not the number requested
>>> can be fulfilled by that device.  It seems to be that a
>>> DataStore#getCommittedIOPS() : Long method would be sufficient.
>>>  DataStore's that don't support provisioned IOPS would return null.
>>>
>>> As I mentioned previously, I am very reluctant for any feature to come
>>> into master that can exhaust resources.
>>>
>>> Thanks,
>>> -John
>>>
>>> On Jun 13, 2013, at 9:27 PM, Mike Tutkowski <
>>> mike.tutkow...@solidfire.com> wrote:
>>>
>>> > Yeah, I'm not sure I could come up with anything near an accurate
>>> > assessment of how many IOPS are currently available on the SAN (or
>>> even a
>>> > total number that are available for volumes). Not sure if there's yet
>>> an
>>> > API call for that.
>>> >
>>> > If I did know this number (total number of IOPS supported by the SAN),
>>> we'd
>>> > still have to keep track of the total number of volumes we've created
>>> from
>>> > CS on the SAN in terms of their IOPS. Also, if an admin issues an API
>>> call
>>> > directly to the SAN to tweak the number of IOPS on a given volume or
>>> set of
>>> > volumes (not supported from CS, but supported via the SolidFire API),
>>> our
>>> > numbers in CS would be off.
>>> >
>>> > I'm thinking verifying sufficient number of IOPS is a really good idea
>>> for
>>> > a future release.
>>> >
>>> > For 4.2 I think we should stick to having the allocator detect if
>>> storage
>>> > QoS is desired and if the storage pool in question supports that
>>> feature.
>>> >
>>> > If you really are over provisioned on your SAN in terms of IOPS or
>>> > capacity, the SAN can let the admin know in several different ways
>>> (e-mail,
>>> > SNMP, GUI).
>>> >
>>> >
>>> > On Thu, Jun 13, 2013 at 7:02 PM, John Burwell 
>>> wrote:
>>> >
>>> >> Mike,
>>> >>
>>> >> Please see my comments in-line below.
>>> >>
>>> >> Thanks,
>>> >> -John
>>> >>
>>> >> On Jun 13, 2013, at 6:09 PM, Mike Tutkowski <
>>> mike.tutkow...@solidfire.com>
>>> >> wrote:
>>> >>
>>> >>> Comments below in red.
>>> >>>
>>> >>> Thanks
>>> >>>
>>> >>>
>>> >>> On Thu, Jun 13, 2013 at 3:58 PM, John Burwell 
>>> >> wrote:
>>> >>>
>>>  Mike,
>>> 
>>>  Overall, I agree with the steps to below for 4.2.  However, we may
>>> wan

Re: SRX Integration Issues.

2013-06-14 Thread Sean Truman
Looking through the source their isn't much logging, plus it's all over SSL so 
I cannot see the traffic being passed using tcpdump.

Sean

On Jun 14, 2013, at 12:54 PM, Sheng Yang  wrote:

> It looks like a string issue of Java itself. What exactly failed on
> test.xml?
> 
> --Sheng
> 
> 
> On Fri, Jun 14, 2013 at 9:55 AM, Sean Truman  wrote:
> 
>> I am using untagged VLAN on my public side. It's failing on the test.xml
>> looking for trust group!
>> 
>> Sean
>> 
>> On Jun 14, 2013, at 11:51 AM, Jayapal Reddy Uradi <
>> jayapalreddy.ur...@citrix.com> wrote:
>> 
>>> Hi,
>>> 
>>> I am not sure about the error but please see the below example
>> configuration and correct your configuration.
>>> 
>>> 
>>> Example confirmation:
>>> 
 Public Interface: fe-0/0/4.52
 Private Interface: fe-0/0/1
>>> 
>>> fe-0/0/1 - private interface
>>> fe-0/0/4.52 - public interface where my public network vlan id is 52.
>>> 
>>> Example commands:
>>> set interfaces fe-0/0/1 description "Private network"
>>> set interfaces fe-0/0/1 vlan-tagging
>>> 
>>> set interfaces fe-0/0/4 unit 52 vlan-id 52
>>> set interfaces fe-0/0/4 unit 52 family inet filter input untrust
>>> 
>>> Thanks,
>>> Jayapal
>>> 
>>> On 14-Jun-2013, at 9:42 PM, Sean Truman 
>>> wrote:
>>> 
 All,
 
 I am trying to add an SRX 100 to Cloud Stack and keep getting a "Illegal
 Group Reference"
 
 Here is how I am trying to add the config.
 IP Address: 10.0.2.1
 Username: root
 Password: password
 Type: Juniper SRX Firewall
 Public Interface: fe-0/0/0.0
 Private Interface: fe-0/0/1.0
 Usage interface:
 Number of Retries: 2
 Timeout: 300
 Public network: untrust
 Private network: trust
 Capacity: 10
 
 
 
 Here is my SRX configuration.
 
 http://pastebin.com/nTVEM92p
 
 
 Here is the only logs I get from management-server.log
 
 http://pastebin.com/pWB0Kbtu
 
 Any help would be greatly appreciated.
 
 v/r
 Sean
>> 


Re: Automation analysis improvement

2013-06-14 Thread Ahmad Emneina
I'm +1 on this. I feel global setting (relating to expunge and cleanup) should 
be set to aggressively expunge deleted resources, then delete the user 
resources... before deleting the account. That way we can verify garbage 
collection of resources is working properly.

Ahmad

On Jun 14, 2013, at 10:21 AM, Chiradeep Vittal  
wrote:

> +1
> 
> On 6/14/13 8:54 AM, "Rayees Namathponnan" 
> wrote:
> 
>> Many of the automation test cases are not tearing down the  account
>> properly; due to this resources are not getting released and followed
>> test cases getting failed during VM deployment itself.
>> 
>> During automation run accounts are created with random number without any
>> reference for test case (eg : test-N5QD8N), and it's hard to identify
>> which test case not tearing down the account after complete the test.
>> 
>> Here my suggestion; we should create account name with test case name
>> (eg : test- VPCOffering-N5QD8N)
>> 
>> Any thoughts ?
>> 
>> Regards,
>> Rayees
> 


Re: [MERGE] disk_io_throttling to MASTER

2013-06-14 Thread Simon Weller
I'd like to comment on this briefly. 



I think an assumption is being made that the SAN is being dedicated to a CS 
instance. 

My person opinion that this whole IOPS calculation is getting rather 
complicated, and could probably be much simpler than this. Over subscription is 
a fact of life on virtually all storage, and is really no different in concept 
than multiple virt instances on a single piece of hardware. All decent SANs 
offer many management options for the storage engineers to keep track of IOPS 
utilization, and plan for spindle augmentation as required. 
Is it really the job of CS to become yet another management layer on top of 
this? 

- Original Message -

From: "Mike Tutkowski"  
To: dev@cloudstack.apache.org 
Cc: "John Burwell" , "Wei Zhou"  
Sent: Friday, June 14, 2013 1:00:26 PM 
Subject: Re: [MERGE] disk_io_throttling to MASTER 

1) We want number of IOPS currently supported by the SAN. 

2) We want the number of IOPS that are committed (sum of min IOPS for each 
volume). 

We could do the following to keep track of IOPS: 

The plug-in could have a timer thread that goes off every, say, 1 minute. 

It could query the SAN for the number of nodes that make up the SAN and 
multiple this by 50,000. This is essentially the number of supported IOPS 
of the SAN. 

The next API call could be to get all of the volumes on the SAN. Iterate 
through them all and add up their min IOPS values. This is the number of 
IOPS the SAN is committed to. 

These two numbers can then be updated in the storage_pool table (a column 
for each value). 

The allocators can get these values as needed (and they would be as 
accurate as the last time the thread asked the SAN for this info). 

These two fields, the min IOPS of the volume to create, and the overcommit 
ratio of the plug-in would tell the allocator if it can select the given 
storage pool. 

What do you think? 


On Fri, Jun 14, 2013 at 11:45 AM, Mike Tutkowski < 
mike.tutkow...@solidfire.com> wrote: 

> "As I mentioned previously, I am very reluctant for any feature to come 
> into master that can exhaust resources." 
> 
> Just wanted to mention that, worst case, the SAN would fail creation of 
> the volume before allowing a new volume to break the system. 
> 
> 
> On Fri, Jun 14, 2013 at 11:35 AM, Mike Tutkowski < 
> mike.tutkow...@solidfire.com> wrote: 
> 
>> Hi John, 
>> 
>> Are you thinking we add a column on to the storage pool table, 
>> IOPS_Count, where we add and subtract committed IOPS? 
>> 
>> That is easy enough. 
>> 
>> How do you want to determine what the SAN is capable of supporting IOPS 
>> wise? Remember we're dealing with a dynamic SAN here...as you add storage 
>> nodes to the cluster, the number of IOPS increases. Do we have a thread we 
>> can use to query external devices like this SAN to update the supported 
>> number of IOPS? 
>> 
>> Also, how do you want to enforce the IOPS limit? Do we pass in an 
>> overcommit ration to the plug-in when it's created? We would need to store 
>> this in the storage_pool table, as well, I believe. 
>> 
>> We should also get Wei involved in this as his feature will need similar 
>> functionality. 
>> 
>> Also, we should do this FAST as we have only two weeks left and many of 
>> us will be out for several days for the CS Collab Conference. 
>> 
>> Thanks 
>> 
>> 
>> On Fri, Jun 14, 2013 at 10:46 AM, John Burwell wrote: 
>> 
>>> Mike, 
>>> 
>>> Querying the SAN only indicates the number of IOPS currently in use. 
>>> The allocator needs to check the number of IOPS committed which is tracked 
>>> by CloudStack. For 4.2, we should be able to query the number of IOPS 
>>> committed to a DataStore, and determine whether or not the number requested 
>>> can be fulfilled by that device. It seems to be that a 
>>> DataStore#getCommittedIOPS() : Long method would be sufficient. 
>>> DataStore's that don't support provisioned IOPS would return null. 
>>> 
>>> As I mentioned previously, I am very reluctant for any feature to come 
>>> into master that can exhaust resources. 
>>> 
>>> Thanks, 
>>> -John 
>>> 
>>> On Jun 13, 2013, at 9:27 PM, Mike Tutkowski < 
>>> mike.tutkow...@solidfire.com> wrote: 
>>> 
>>> > Yeah, I'm not sure I could come up with anything near an accurate 
>>> > assessment of how many IOPS are currently available on the SAN (or 
>>> even a 
>>> > total number that are available for volumes). Not sure if there's yet 
>>> an 
>>> > API call for that. 
>>> > 
>>> > If I did know this number (total number of IOPS supported by the SAN), 
>>> we'd 
>>> > still have to keep track of the total number of volumes we've created 
>>> from 
>>> > CS on the SAN in terms of their IOPS. Also, if an admin issues an API 
>>> call 
>>> > directly to the SAN to tweak the number of IOPS on a given volume or 
>>> set of 
>>> > volumes (not supported from CS, but supported via the SolidFire API), 
>>> our 
>>> > numbers in CS would be off. 
>>> > 
>>> > I'm thinking verifying sufficient number of

Re: [MERGE] disk_io_throttling to MASTER

2013-06-14 Thread Mike Tutkowski
That is the route I was trying to go with this, Simon.

I agree with what you're saying.

We cannot assume the SAN is dedicated to CS. That is why - if we have to do
this for 4.2 - I need to have a dedicated timer thread (or something) that
updates the total and consumed IOPS. If we just use the volumes created
from CS, it will often be wrong (if the SAN is used for other purposes).

I really don't think this is a 4.2 feature. It seems like feature
creep...and really late in the game.


On Fri, Jun 14, 2013 at 12:20 PM, Simon Weller  wrote:

> I'd like to comment on this briefly.
>
>
>
> I think an assumption is being made that the SAN is being dedicated to a
> CS instance.
>
> My person opinion that this whole IOPS calculation is getting rather
> complicated, and could probably be much simpler than this. Over
> subscription is a fact of life on virtually all storage, and is really no
> different in concept than multiple virt instances on a single piece of
> hardware. All decent SANs offer many management options for the storage
> engineers to keep track of IOPS utilization, and plan for spindle
> augmentation as required.
> Is it really the job of CS to become yet another management layer on top
> of this?
>
> - Original Message -
>
> From: "Mike Tutkowski" 
> To: dev@cloudstack.apache.org
> Cc: "John Burwell" , "Wei Zhou"  >
> Sent: Friday, June 14, 2013 1:00:26 PM
> Subject: Re: [MERGE] disk_io_throttling to MASTER
>
> 1) We want number of IOPS currently supported by the SAN.
>
> 2) We want the number of IOPS that are committed (sum of min IOPS for each
> volume).
>
> We could do the following to keep track of IOPS:
>
> The plug-in could have a timer thread that goes off every, say, 1 minute.
>
> It could query the SAN for the number of nodes that make up the SAN and
> multiple this by 50,000. This is essentially the number of supported IOPS
> of the SAN.
>
> The next API call could be to get all of the volumes on the SAN. Iterate
> through them all and add up their min IOPS values. This is the number of
> IOPS the SAN is committed to.
>
> These two numbers can then be updated in the storage_pool table (a column
> for each value).
>
> The allocators can get these values as needed (and they would be as
> accurate as the last time the thread asked the SAN for this info).
>
> These two fields, the min IOPS of the volume to create, and the overcommit
> ratio of the plug-in would tell the allocator if it can select the given
> storage pool.
>
> What do you think?
>
>
> On Fri, Jun 14, 2013 at 11:45 AM, Mike Tutkowski <
> mike.tutkow...@solidfire.com> wrote:
>
> > "As I mentioned previously, I am very reluctant for any feature to come
> > into master that can exhaust resources."
> >
> > Just wanted to mention that, worst case, the SAN would fail creation of
> > the volume before allowing a new volume to break the system.
> >
> >
> > On Fri, Jun 14, 2013 at 11:35 AM, Mike Tutkowski <
> > mike.tutkow...@solidfire.com> wrote:
> >
> >> Hi John,
> >>
> >> Are you thinking we add a column on to the storage pool table,
> >> IOPS_Count, where we add and subtract committed IOPS?
> >>
> >> That is easy enough.
> >>
> >> How do you want to determine what the SAN is capable of supporting IOPS
> >> wise? Remember we're dealing with a dynamic SAN here...as you add
> storage
> >> nodes to the cluster, the number of IOPS increases. Do we have a thread
> we
> >> can use to query external devices like this SAN to update the supported
> >> number of IOPS?
> >>
> >> Also, how do you want to enforce the IOPS limit? Do we pass in an
> >> overcommit ration to the plug-in when it's created? We would need to
> store
> >> this in the storage_pool table, as well, I believe.
> >>
> >> We should also get Wei involved in this as his feature will need similar
> >> functionality.
> >>
> >> Also, we should do this FAST as we have only two weeks left and many of
> >> us will be out for several days for the CS Collab Conference.
> >>
> >> Thanks
> >>
> >>
> >> On Fri, Jun 14, 2013 at 10:46 AM, John Burwell  >wrote:
> >>
> >>> Mike,
> >>>
> >>> Querying the SAN only indicates the number of IOPS currently in use.
> >>> The allocator needs to check the number of IOPS committed which is
> tracked
> >>> by CloudStack. For 4.2, we should be able to query the number of IOPS
> >>> committed to a DataStore, and determine whether or not the number
> requested
> >>> can be fulfilled by that device. It seems to be that a
> >>> DataStore#getCommittedIOPS() : Long method would be sufficient.
> >>> DataStore's that don't support provisioned IOPS would return null.
> >>>
> >>> As I mentioned previously, I am very reluctant for any feature to come
> >>> into master that can exhaust resources.
> >>>
> >>> Thanks,
> >>> -John
> >>>
> >>> On Jun 13, 2013, at 9:27 PM, Mike Tutkowski <
> >>> mike.tutkow...@solidfire.com> wrote:
> >>>
> >>> > Yeah, I'm not sure I could come up with anything near an accurate
> >>> > assessment of how many IOPS are currently a

Re: Bugs on Master

2013-06-14 Thread Will Stevens
Chiradeep, can you send me the format of the cloudmonkey call for the api
request 'createNetworkOffering' with 'supportedservices' of
'dhcp:virtualrouter', 'dns:virtualrouter', 'firewall:junipersrx'.  I can
not figure out the format of this call.

I have confirmed that I can reproduce the issue of not being able to select
capability dropdowns in multiple browsers on master.

Thanks,

Will


On Fri, Jun 14, 2013 at 1:56 PM, Will Stevens  wrote:

> I will try that.  I am doing some testing right now.  I am compiling and
> running just master now to validate everything.
>
> I will be in touch when I have more details...
>
> ws
>
>
> On Fri, Jun 14, 2013 at 1:20 PM, Chiradeep Vittal <
> chiradeep.vit...@citrix.com> wrote:
>
>> Are you able to use CloudMonkey? Perhaps it is a UI issue?
>>
>> On 6/14/13 9:50 AM, "Will Stevens"  wrote:
>>
>> >11 days ago I pulled the master code into my branch.  Master was at:
>> >48913679e80e50228b1bd4b3d17fe5245461626a
>> >
>> >When I pulled, I had Egress firewall rules working perfectly.  After the
>> >pull I now get the following error when trying to create Egress firewall
>> >rules:
>> >ERROR [cloud.api.ApiServer] (1784147987@qtp-213982037-11:) unhandled
>> >exception executing api command: createEgressFirewallRule
>> >java.lang.NullPointerException
>> >at
>>
>> >com.cloud.network.firewall.FirewallManagerImpl.validateFirewallRule(Firewa
>> >llManagerImpl.java:485)
>> >at
>>
>> >com.cloud.network.firewall.FirewallManagerImpl.createFirewallRule(Firewall
>> >ManagerImpl.java:191)
>> >at
>>
>> >com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorD
>> >ispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
>> >at
>>
>> >com.cloud.network.firewall.FirewallManagerImpl.createEgressFirewallRule(Fi
>> >rewallManagerImpl.java:157)
>> >at
>>
>> >org.apache.cloudstack.api.command.user.firewall.CreateEgressFirewallRuleCm
>> >d.create(CreateEgressFirewallRuleCmd.java:252)
>> >at com.cloud.api.ApiDispatcher.dispatchCreateCmd(ApiDispatcher.java:101)
>> >at com.cloud.api.ApiServer.queueCommand(ApiServer.java:471)
>> >at com.cloud.api.ApiServer.handleRequest(ApiServer.java:367)
>> >at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:304)
>> >at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
>> >at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>> >at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>> >at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>> >at
>> >org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
>> >at
>>
>> >org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216
>> >)
>> >at
>> >org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>> >at
>> >org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>> >at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>> >at
>>
>> >org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCo
>> >llection.java:230)
>> >at
>>
>> >org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:
>> >114)
>> >at
>> >org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>> >at org.mortbay.jetty.Server.handle(Server.java:326)
>> >at
>> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>> >at
>>
>> >org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnect
>> >ion.java:928)
>> >at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>> >at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>> >at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>> >at
>>
>> >org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:41
>> >0)
>> >at
>>
>> >org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:5
>> >82)
>> >
>> >---
>> >
>> >So I merged in master this morning to see if that issue was fixed.  Now I
>> >can not create a Network Service offering and select anything but Virtual
>> >Router from any of the dropdowns for capabilities such as 'Firewall',
>> >'Source NAT', etc...
>> >
>> >There are no JS errors, the dropdown just sits and thinks about it for a
>> >second and does not change away from Virtual Router.
>> >
>> >So now I can't use my service provider at all, so my development is
>> >completely stalled.
>> >
>> >Ideas???
>> >
>> >ws
>>
>>
>


Re: Bugs on Master

2013-06-14 Thread Will Stevens
BTW, I am using cloudmonkey 4.1.0...  Thx


On Fri, Jun 14, 2013 at 3:04 PM, Will Stevens  wrote:

> Chiradeep, can you send me the format of the cloudmonkey call for the api
> request 'createNetworkOffering' with 'supportedservices' of
> 'dhcp:virtualrouter', 'dns:virtualrouter', 'firewall:junipersrx'.  I can
> not figure out the format of this call.
>
> I have confirmed that I can reproduce the issue of not being able to
> select capability dropdowns in multiple browsers on master.
>
> Thanks,
>
> Will
>
>
> On Fri, Jun 14, 2013 at 1:56 PM, Will Stevens wrote:
>
>> I will try that.  I am doing some testing right now.  I am compiling and
>> running just master now to validate everything.
>>
>> I will be in touch when I have more details...
>>
>> ws
>>
>>
>> On Fri, Jun 14, 2013 at 1:20 PM, Chiradeep Vittal <
>> chiradeep.vit...@citrix.com> wrote:
>>
>>> Are you able to use CloudMonkey? Perhaps it is a UI issue?
>>>
>>> On 6/14/13 9:50 AM, "Will Stevens"  wrote:
>>>
>>> >11 days ago I pulled the master code into my branch.  Master was at:
>>> >48913679e80e50228b1bd4b3d17fe5245461626a
>>> >
>>> >When I pulled, I had Egress firewall rules working perfectly.  After the
>>> >pull I now get the following error when trying to create Egress firewall
>>> >rules:
>>> >ERROR [cloud.api.ApiServer] (1784147987@qtp-213982037-11:) unhandled
>>> >exception executing api command: createEgressFirewallRule
>>> >java.lang.NullPointerException
>>> >at
>>>
>>> >com.cloud.network.firewall.FirewallManagerImpl.validateFirewallRule(Firewa
>>> >llManagerImpl.java:485)
>>> >at
>>>
>>> >com.cloud.network.firewall.FirewallManagerImpl.createFirewallRule(Firewall
>>> >ManagerImpl.java:191)
>>> >at
>>>
>>> >com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorD
>>> >ispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
>>> >at
>>>
>>> >com.cloud.network.firewall.FirewallManagerImpl.createEgressFirewallRule(Fi
>>> >rewallManagerImpl.java:157)
>>> >at
>>>
>>> >org.apache.cloudstack.api.command.user.firewall.CreateEgressFirewallRuleCm
>>> >d.create(CreateEgressFirewallRuleCmd.java:252)
>>> >at com.cloud.api.ApiDispatcher.dispatchCreateCmd(ApiDispatcher.java:101)
>>> >at com.cloud.api.ApiServer.queueCommand(ApiServer.java:471)
>>> >at com.cloud.api.ApiServer.handleRequest(ApiServer.java:367)
>>> >at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:304)
>>> >at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
>>> >at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>>> >at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>>> >at
>>> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>>> >at
>>> >org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:401)
>>> >at
>>>
>>> >org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216
>>> >)
>>> >at
>>> >org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>>> >at
>>> >org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
>>> >at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>>> >at
>>>
>>> >org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCo
>>> >llection.java:230)
>>> >at
>>>
>>> >org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:
>>> >114)
>>> >at
>>> >org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>>> >at org.mortbay.jetty.Server.handle(Server.java:326)
>>> >at
>>> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>>> >at
>>>
>>> >org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnect
>>> >ion.java:928)
>>> >at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
>>> >at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>>> >at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>>> >at
>>>
>>> >org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:41
>>> >0)
>>> >at
>>>
>>> >org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:5
>>> >82)
>>> >
>>> >---
>>> >
>>> >So I merged in master this morning to see if that issue was fixed.  Now
>>> I
>>> >can not create a Network Service offering and select anything but
>>> Virtual
>>> >Router from any of the dropdowns for capabilities such as 'Firewall',
>>> >'Source NAT', etc...
>>> >
>>> >There are no JS errors, the dropdown just sits and thinks about it for a
>>> >second and does not change away from Virtual Router.
>>> >
>>> >So now I can't use my service provider at all, so my development is
>>> >completely stalled.
>>> >
>>> >Ideas???
>>> >
>>> >ws
>>>
>>>
>>
>


Re: [MERGE] disk_io_throttling to MASTER

2013-06-14 Thread John Burwell
Simon,

Yes, it is CloudStack's job to protect, as best it can, from oversubscribing 
resources.  I would argue that resource management is one, if not the most, 
important functions of the system.  It is no different than the 
allocation/planning performed for hosts relative to cores and memory.  We can 
still oversubscribe resources, but we have rails + knobs and dials to avoid it. 
 Without these controls in place, we could easily allow users to deploy 
workloads that overrun resources harming all tenants.

I also think that we are over thinking this issue for provisioned IOPS.  When 
the DataStore is configured, the administrator/operator simply needs to tell us 
the total number of IOPS that can be committed to it and an overcommitment 
factor.  As we allocate volumes to that DataStore, we sum up the committed IOPS 
of the existing Volumes attached to the DataStore, apply the overcommitment 
factor, and determine whether or not the requested minimum IOPS for the new 
volume can be fulfilled.  We can provide both general and vendor specific 
documentation for determining these values -- be they to consume the entire 
device or a portion of it.

Querying the device is unnecessary and deceptive.  CloudStack resource 
management is not interested in the current state of the device which could be 
anywhere from extremely heavy to extremely light at any given time.  We are 
interested in the worst case load that is anticipated for resource.  In my 
view, it is up to administrators/operators to instrument their environment to 
understand usage patterns and capacity.  We should provide information that 
will help determine what should be instrumented/monitored, but that function 
should be performed outside of CloudStack.

Thanks,
-John

On Jun 14, 2013, at 2:20 PM, Simon Weller  wrote:

> I'd like to comment on this briefly. 
> 
> 
> 
> I think an assumption is being made that the SAN is being dedicated to a CS 
> instance. 
> 
> My person opinion that this whole IOPS calculation is getting rather 
> complicated, and could probably be much simpler than this. Over subscription 
> is a fact of life on virtually all storage, and is really no different in 
> concept than multiple virt instances on a single piece of hardware. All 
> decent SANs offer many management options for the storage engineers to keep 
> track of IOPS utilization, and plan for spindle augmentation as required. 
> Is it really the job of CS to become yet another management layer on top of 
> this? 
> 
> - Original Message -
> 
> From: "Mike Tutkowski"  
> To: dev@cloudstack.apache.org 
> Cc: "John Burwell" , "Wei Zhou"  
> Sent: Friday, June 14, 2013 1:00:26 PM 
> Subject: Re: [MERGE] disk_io_throttling to MASTER 
> 
> 1) We want number of IOPS currently supported by the SAN. 
> 
> 2) We want the number of IOPS that are committed (sum of min IOPS for each 
> volume). 
> 
> We could do the following to keep track of IOPS: 
> 
> The plug-in could have a timer thread that goes off every, say, 1 minute. 
> 
> It could query the SAN for the number of nodes that make up the SAN and 
> multiple this by 50,000. This is essentially the number of supported IOPS 
> of the SAN. 
> 
> The next API call could be to get all of the volumes on the SAN. Iterate 
> through them all and add up their min IOPS values. This is the number of 
> IOPS the SAN is committed to. 
> 
> These two numbers can then be updated in the storage_pool table (a column 
> for each value). 
> 
> The allocators can get these values as needed (and they would be as 
> accurate as the last time the thread asked the SAN for this info). 
> 
> These two fields, the min IOPS of the volume to create, and the overcommit 
> ratio of the plug-in would tell the allocator if it can select the given 
> storage pool. 
> 
> What do you think? 
> 
> 
> On Fri, Jun 14, 2013 at 11:45 AM, Mike Tutkowski < 
> mike.tutkow...@solidfire.com> wrote: 
> 
>> "As I mentioned previously, I am very reluctant for any feature to come 
>> into master that can exhaust resources." 
>> 
>> Just wanted to mention that, worst case, the SAN would fail creation of 
>> the volume before allowing a new volume to break the system. 
>> 
>> 
>> On Fri, Jun 14, 2013 at 11:35 AM, Mike Tutkowski < 
>> mike.tutkow...@solidfire.com> wrote: 
>> 
>>> Hi John, 
>>> 
>>> Are you thinking we add a column on to the storage pool table, 
>>> IOPS_Count, where we add and subtract committed IOPS? 
>>> 
>>> That is easy enough. 
>>> 
>>> How do you want to determine what the SAN is capable of supporting IOPS 
>>> wise? Remember we're dealing with a dynamic SAN here...as you add storage 
>>> nodes to the cluster, the number of IOPS increases. Do we have a thread we 
>>> can use to query external devices like this SAN to update the supported 
>>> number of IOPS? 
>>> 
>>> Also, how do you want to enforce the IOPS limit? Do we pass in an 
>>> overcommit ration to the plug-in when it's created? We would need to store 
>>> this in the stora

Re: [MERGE] disk_io_throttling to MASTER

2013-06-14 Thread Mike Tutkowski
"the administrator/operator simply needs to tell us the total number of
IOPS that can be committed to it and an overcommitment factor."

Are you thinking when we create a plug-in as primary storage that we say -
up front - how many IOPS the SAN can handle?

That is not a good move, in my opinion. Our SAN is designed to start small
and grow to PBs. As the need arises for more storage, the admin purchases
additional storage nodes that join the cluster and the performance and
capacity go up.

We need to know how many IOPS total the SAN can handle and what it is
committed to currently (the sum of the number of volumes' min IOPS).

We also cannot assume the SAN is dedicated to CS.


On Fri, Jun 14, 2013 at 1:59 PM, John Burwell  wrote:

> Simon,
>
> Yes, it is CloudStack's job to protect, as best it can, from
> oversubscribing resources.  I would argue that resource management is one,
> if not the most, important functions of the system.  It is no different
> than the allocation/planning performed for hosts relative to cores and
> memory.  We can still oversubscribe resources, but we have rails + knobs
> and dials to avoid it.  Without these controls in place, we could easily
> allow users to deploy workloads that overrun resources harming all tenants.
>
> I also think that we are over thinking this issue for provisioned IOPS.
>  When the DataStore is configured, the administrator/operator simply needs
> to tell us the total number of IOPS that can be committed to it and an
> overcommitment factor.  As we allocate volumes to that DataStore, we sum up
> the committed IOPS of the existing Volumes attached to the DataStore, apply
> the overcommitment factor, and determine whether or not the requested
> minimum IOPS for the new volume can be fulfilled.  We can provide both
> general and vendor specific documentation for determining these values --
> be they to consume the entire device or a portion of it.
>
> Querying the device is unnecessary and deceptive.  CloudStack resource
> management is not interested in the current state of the device which could
> be anywhere from extremely heavy to extremely light at any given time.  We
> are interested in the worst case load that is anticipated for resource.  In
> my view, it is up to administrators/operators to instrument their
> environment to understand usage patterns and capacity.  We should provide
> information that will help determine what should be instrumented/monitored,
> but that function should be performed outside of CloudStack.
>
> Thanks,
> -John
>
> On Jun 14, 2013, at 2:20 PM, Simon Weller  wrote:
>
> > I'd like to comment on this briefly.
> >
> >
> >
> > I think an assumption is being made that the SAN is being dedicated to a
> CS instance.
> >
> > My person opinion that this whole IOPS calculation is getting rather
> complicated, and could probably be much simpler than this. Over
> subscription is a fact of life on virtually all storage, and is really no
> different in concept than multiple virt instances on a single piece of
> hardware. All decent SANs offer many management options for the storage
> engineers to keep track of IOPS utilization, and plan for spindle
> augmentation as required.
> > Is it really the job of CS to become yet another management layer on top
> of this?
> >
> > - Original Message -
> >
> > From: "Mike Tutkowski" 
> > To: dev@cloudstack.apache.org
> > Cc: "John Burwell" , "Wei Zhou" <
> ustcweiz...@gmail.com>
> > Sent: Friday, June 14, 2013 1:00:26 PM
> > Subject: Re: [MERGE] disk_io_throttling to MASTER
> >
> > 1) We want number of IOPS currently supported by the SAN.
> >
> > 2) We want the number of IOPS that are committed (sum of min IOPS for
> each
> > volume).
> >
> > We could do the following to keep track of IOPS:
> >
> > The plug-in could have a timer thread that goes off every, say, 1 minute.
> >
> > It could query the SAN for the number of nodes that make up the SAN and
> > multiple this by 50,000. This is essentially the number of supported IOPS
> > of the SAN.
> >
> > The next API call could be to get all of the volumes on the SAN. Iterate
> > through them all and add up their min IOPS values. This is the number of
> > IOPS the SAN is committed to.
> >
> > These two numbers can then be updated in the storage_pool table (a column
> > for each value).
> >
> > The allocators can get these values as needed (and they would be as
> > accurate as the last time the thread asked the SAN for this info).
> >
> > These two fields, the min IOPS of the volume to create, and the
> overcommit
> > ratio of the plug-in would tell the allocator if it can select the given
> > storage pool.
> >
> > What do you think?
> >
> >
> > On Fri, Jun 14, 2013 at 11:45 AM, Mike Tutkowski <
> > mike.tutkow...@solidfire.com> wrote:
> >
> >> "As I mentioned previously, I am very reluctant for any feature to come
> >> into master that can exhaust resources."
> >>
> >> Just wanted to mention that, worst case, the SAN would fail creation of
> >

Re: [MERGE] disk_io_throttling to MASTER

2013-06-14 Thread Mike Tutkowski
Also, as far as I remember, we just introduced overcommit ratios (for CPU
and memory) when creating/editing clusters in 4.2, so we did survive as a
product before that (albeit great) feature was introduced.


On Fri, Jun 14, 2013 at 2:06 PM, Mike Tutkowski <
mike.tutkow...@solidfire.com> wrote:

> "the administrator/operator simply needs to tell us the total number of
> IOPS that can be committed to it and an overcommitment factor."
>
> Are you thinking when we create a plug-in as primary storage that we say -
> up front - how many IOPS the SAN can handle?
>
> That is not a good move, in my opinion. Our SAN is designed to start small
> and grow to PBs. As the need arises for more storage, the admin purchases
> additional storage nodes that join the cluster and the performance and
> capacity go up.
>
> We need to know how many IOPS total the SAN can handle and what it is
> committed to currently (the sum of the number of volumes' min IOPS).
>
> We also cannot assume the SAN is dedicated to CS.
>
>
> On Fri, Jun 14, 2013 at 1:59 PM, John Burwell  wrote:
>
>> Simon,
>>
>> Yes, it is CloudStack's job to protect, as best it can, from
>> oversubscribing resources.  I would argue that resource management is one,
>> if not the most, important functions of the system.  It is no different
>> than the allocation/planning performed for hosts relative to cores and
>> memory.  We can still oversubscribe resources, but we have rails + knobs
>> and dials to avoid it.  Without these controls in place, we could easily
>> allow users to deploy workloads that overrun resources harming all tenants.
>>
>> I also think that we are over thinking this issue for provisioned IOPS.
>>  When the DataStore is configured, the administrator/operator simply needs
>> to tell us the total number of IOPS that can be committed to it and an
>> overcommitment factor.  As we allocate volumes to that DataStore, we sum up
>> the committed IOPS of the existing Volumes attached to the DataStore, apply
>> the overcommitment factor, and determine whether or not the requested
>> minimum IOPS for the new volume can be fulfilled.  We can provide both
>> general and vendor specific documentation for determining these values --
>> be they to consume the entire device or a portion of it.
>>
>> Querying the device is unnecessary and deceptive.  CloudStack resource
>> management is not interested in the current state of the device which could
>> be anywhere from extremely heavy to extremely light at any given time.  We
>> are interested in the worst case load that is anticipated for resource.  In
>> my view, it is up to administrators/operators to instrument their
>> environment to understand usage patterns and capacity.  We should provide
>> information that will help determine what should be instrumented/monitored,
>> but that function should be performed outside of CloudStack.
>>
>> Thanks,
>> -John
>>
>> On Jun 14, 2013, at 2:20 PM, Simon Weller  wrote:
>>
>> > I'd like to comment on this briefly.
>> >
>> >
>> >
>> > I think an assumption is being made that the SAN is being dedicated to
>> a CS instance.
>> >
>> > My person opinion that this whole IOPS calculation is getting rather
>> complicated, and could probably be much simpler than this. Over
>> subscription is a fact of life on virtually all storage, and is really no
>> different in concept than multiple virt instances on a single piece of
>> hardware. All decent SANs offer many management options for the storage
>> engineers to keep track of IOPS utilization, and plan for spindle
>> augmentation as required.
>> > Is it really the job of CS to become yet another management layer on
>> top of this?
>> >
>> > - Original Message -
>> >
>> > From: "Mike Tutkowski" 
>> > To: dev@cloudstack.apache.org
>> > Cc: "John Burwell" , "Wei Zhou" <
>> ustcweiz...@gmail.com>
>> > Sent: Friday, June 14, 2013 1:00:26 PM
>> > Subject: Re: [MERGE] disk_io_throttling to MASTER
>> >
>> > 1) We want number of IOPS currently supported by the SAN.
>> >
>> > 2) We want the number of IOPS that are committed (sum of min IOPS for
>> each
>> > volume).
>> >
>> > We could do the following to keep track of IOPS:
>> >
>> > The plug-in could have a timer thread that goes off every, say, 1
>> minute.
>> >
>> > It could query the SAN for the number of nodes that make up the SAN and
>> > multiple this by 50,000. This is essentially the number of supported
>> IOPS
>> > of the SAN.
>> >
>> > The next API call could be to get all of the volumes on the SAN. Iterate
>> > through them all and add up their min IOPS values. This is the number of
>> > IOPS the SAN is committed to.
>> >
>> > These two numbers can then be updated in the storage_pool table (a
>> column
>> > for each value).
>> >
>> > The allocators can get these values as needed (and they would be as
>> > accurate as the last time the thread asked the SAN for this info).
>> >
>> > These two fields, the min IOPS of the volume to create, and the
>> overcommit
>> > ratio of t

Re: [MERGE] disk_io_throttling to MASTER

2013-06-14 Thread John Burwell
Mike,

I apologize for not being clear -- this conversation has been admittedly 
disjoint.  I think we should allow the maximum IOPS and overcommitment values 
to be updated though I would recommend restricting updates to to be an 
increasing value for 4.2 (e.g. users can increase the number of total IOPS from 
200,000 to 250,000, but not decrease from 250,000 to 200,000).  While not 
ideal, given the amount of time we have left for 4.2, it will cover most cases, 
and we can address the implications of reducing resource capacity in 4.3.  This 
approach addresses both of your concerns.  First, it allows the 
administrator/operator to determine what portion of the device they wish to 
dedicate.  For example, if the device has a total capacity of 200,000 IOPS, and 
they only want CS to use 25% of the device then they set the maximum total IOPS 
to 50,000.  Second, as they grow capacity, they can update the DataStore to 
increase the number of IOPS they want to dedicate to CS' use.  I would imagine 
expansion of capacity happens infrequently enough that increasing the maximum 
IOPS value would not be a significant burden.

Thanks,
-John

On Jun 14, 2013, at 4:06 PM, Mike Tutkowski  
wrote:

> "the administrator/operator simply needs to tell us the total number of
> IOPS that can be committed to it and an overcommitment factor."
> 
> Are you thinking when we create a plug-in as primary storage that we say -
> up front - how many IOPS the SAN can handle?
> 
> That is not a good move, in my opinion. Our SAN is designed to start small
> and grow to PBs. As the need arises for more storage, the admin purchases
> additional storage nodes that join the cluster and the performance and
> capacity go up.
> 
> We need to know how many IOPS total the SAN can handle and what it is
> committed to currently (the sum of the number of volumes' min IOPS).
> 
> We also cannot assume the SAN is dedicated to CS.
> 
> 
> On Fri, Jun 14, 2013 at 1:59 PM, John Burwell  wrote:
> 
>> Simon,
>> 
>> Yes, it is CloudStack's job to protect, as best it can, from
>> oversubscribing resources.  I would argue that resource management is one,
>> if not the most, important functions of the system.  It is no different
>> than the allocation/planning performed for hosts relative to cores and
>> memory.  We can still oversubscribe resources, but we have rails + knobs
>> and dials to avoid it.  Without these controls in place, we could easily
>> allow users to deploy workloads that overrun resources harming all tenants.
>> 
>> I also think that we are over thinking this issue for provisioned IOPS.
>> When the DataStore is configured, the administrator/operator simply needs
>> to tell us the total number of IOPS that can be committed to it and an
>> overcommitment factor.  As we allocate volumes to that DataStore, we sum up
>> the committed IOPS of the existing Volumes attached to the DataStore, apply
>> the overcommitment factor, and determine whether or not the requested
>> minimum IOPS for the new volume can be fulfilled.  We can provide both
>> general and vendor specific documentation for determining these values --
>> be they to consume the entire device or a portion of it.
>> 
>> Querying the device is unnecessary and deceptive.  CloudStack resource
>> management is not interested in the current state of the device which could
>> be anywhere from extremely heavy to extremely light at any given time.  We
>> are interested in the worst case load that is anticipated for resource.  In
>> my view, it is up to administrators/operators to instrument their
>> environment to understand usage patterns and capacity.  We should provide
>> information that will help determine what should be instrumented/monitored,
>> but that function should be performed outside of CloudStack.
>> 
>> Thanks,
>> -John
>> 
>> On Jun 14, 2013, at 2:20 PM, Simon Weller  wrote:
>> 
>>> I'd like to comment on this briefly.
>>> 
>>> 
>>> 
>>> I think an assumption is being made that the SAN is being dedicated to a
>> CS instance.
>>> 
>>> My person opinion that this whole IOPS calculation is getting rather
>> complicated, and could probably be much simpler than this. Over
>> subscription is a fact of life on virtually all storage, and is really no
>> different in concept than multiple virt instances on a single piece of
>> hardware. All decent SANs offer many management options for the storage
>> engineers to keep track of IOPS utilization, and plan for spindle
>> augmentation as required.
>>> Is it really the job of CS to become yet another management layer on top
>> of this?
>>> 
>>> - Original Message -
>>> 
>>> From: "Mike Tutkowski" 
>>> To: dev@cloudstack.apache.org
>>> Cc: "John Burwell" , "Wei Zhou" <
>> ustcweiz...@gmail.com>
>>> Sent: Friday, June 14, 2013 1:00:26 PM
>>> Subject: Re: [MERGE] disk_io_throttling to MASTER
>>> 
>>> 1) We want number of IOPS currently supported by the SAN.
>>> 
>>> 2) We want the number of IOPS that are committed (sum of min IOPS fo

Re: [MERGE] disk_io_throttling to MASTER

2013-06-14 Thread Mike Tutkowski
I am OK with that approach, John.

So, let me review to make sure I follow you correctly:

We introduce two new parameters to the plug-in: Number of Total IOPS for
the SAN and an overcommit ratio. (On a side note, if we are just
multiplying the two numbers, why don't we have the user just input their
product?)

Are both of these new parameters to the create storage pool API command or
are they passed into the create storage pool API command through its url
parameter?

If they are new parameters, we should make two new columns in the
storage_pool table.

If they are passed in via the url parameter, they should go in the
storage_pool_details table.

For 4.2, if someone wants to change these values, they must update the DB
manually.

Every time a volume is created or deleted for the SolidFire plug-in, the
Current IOPS value (sum of all volumes' Min IOPS that are associated with
the plug-in) is updated.

The allocator can use these fields to determine if it can fit in a new
volume.

Does it look like my understanding is OK?


On Fri, Jun 14, 2013 at 2:14 PM, John Burwell  wrote:

> Mike,
>
> I apologize for not being clear -- this conversation has been admittedly
> disjoint.  I think we should allow the maximum IOPS and overcommitment
> values to be updated though I would recommend restricting updates to to be
> an increasing value for 4.2 (e.g. users can increase the number of total
> IOPS from 200,000 to 250,000, but not decrease from 250,000 to 200,000).
>  While not ideal, given the amount of time we have left for 4.2, it will
> cover most cases, and we can address the implications of reducing resource
> capacity in 4.3.  This approach addresses both of your concerns.  First, it
> allows the administrator/operator to determine what portion of the device
> they wish to dedicate.  For example, if the device has a total capacity of
> 200,000 IOPS, and they only want CS to use 25% of the device then they set
> the maximum total IOPS to 50,000.  Second, as they grow capacity, they can
> update the DataStore to increase the number of IOPS they want to dedicate
> to CS' use.  I would imagine expansion of capacity happens infrequently
> enough that increasing the maximum IOPS value would not be a significant
> burden.
>
> Thanks,
> -John
>
> On Jun 14, 2013, at 4:06 PM, Mike Tutkowski 
> wrote:
>
> > "the administrator/operator simply needs to tell us the total number of
> > IOPS that can be committed to it and an overcommitment factor."
> >
> > Are you thinking when we create a plug-in as primary storage that we say
> -
> > up front - how many IOPS the SAN can handle?
> >
> > That is not a good move, in my opinion. Our SAN is designed to start
> small
> > and grow to PBs. As the need arises for more storage, the admin purchases
> > additional storage nodes that join the cluster and the performance and
> > capacity go up.
> >
> > We need to know how many IOPS total the SAN can handle and what it is
> > committed to currently (the sum of the number of volumes' min IOPS).
> >
> > We also cannot assume the SAN is dedicated to CS.
> >
> >
> > On Fri, Jun 14, 2013 at 1:59 PM, John Burwell 
> wrote:
> >
> >> Simon,
> >>
> >> Yes, it is CloudStack's job to protect, as best it can, from
> >> oversubscribing resources.  I would argue that resource management is
> one,
> >> if not the most, important functions of the system.  It is no different
> >> than the allocation/planning performed for hosts relative to cores and
> >> memory.  We can still oversubscribe resources, but we have rails + knobs
> >> and dials to avoid it.  Without these controls in place, we could easily
> >> allow users to deploy workloads that overrun resources harming all
> tenants.
> >>
> >> I also think that we are over thinking this issue for provisioned IOPS.
> >> When the DataStore is configured, the administrator/operator simply
> needs
> >> to tell us the total number of IOPS that can be committed to it and an
> >> overcommitment factor.  As we allocate volumes to that DataStore, we
> sum up
> >> the committed IOPS of the existing Volumes attached to the DataStore,
> apply
> >> the overcommitment factor, and determine whether or not the requested
> >> minimum IOPS for the new volume can be fulfilled.  We can provide both
> >> general and vendor specific documentation for determining these values
> --
> >> be they to consume the entire device or a portion of it.
> >>
> >> Querying the device is unnecessary and deceptive.  CloudStack resource
> >> management is not interested in the current state of the device which
> could
> >> be anywhere from extremely heavy to extremely light at any given time.
>  We
> >> are interested in the worst case load that is anticipated for resource.
>  In
> >> my view, it is up to administrators/operators to instrument their
> >> environment to understand usage patterns and capacity.  We should
> provide
> >> information that will help determine what should be
> instrumented/monitored,
> >> but that function should be

Re: [MERGE] disk_io_throttling to MASTER

2013-06-14 Thread Simon Weller

John, 


I'm not arguing that CloudStack's job isn't to provide resource management. The 
challenge here is we're talking about managing a resource that is extremely 
complex in a very 'one size fits all' manner. For example, lets say you have a 
SAN that supports storage tiering, and can dynamically move data shards to 
different tiers of disks where the IOP vs capacity varies. So your data shard 
starts on an array of fast, lower capacity disks, and based on various criteria 
gets relocated to slower, higher capacity disks. In this scenario , your IOP 
max capacity has just changed for some subset (or all) of the data tied to this 
primary storage object based on some usage profile. Likewise, you may have some 
high use shards that never get demoted to larger disks, so you run out of your 
primary tier storage capacity. 
My point is, It's hard to account for these scenarios in an absolute world. I'm 
just concerned that we're getting ourselves tied up trying to paint all storage 
as being the same, when in fact every product and project whether commercial or 
open source has a different set of features and objectives. 


- Si 
- Original Message -

From: "John Burwell"  
To: dev@cloudstack.apache.org 
Sent: Friday, June 14, 2013 2:59:47 PM 
Subject: Re: [MERGE] disk_io_throttling to MASTER 

Simon, 

Yes, it is CloudStack's job to protect, as best it can, from oversubscribing 
resources. I would argue that resource management is one, if not the most, 
important functions of the system. It is no different than the 
allocation/planning performed for hosts relative to cores and memory. We can 
still oversubscribe resources, but we have rails + knobs and dials to avoid it. 
Without these controls in place, we could easily allow users to deploy 
workloads that overrun resources harming all tenants. 



I also think that we are over thinking this issue for provisioned IOPS. When 
the DataStore is configured, the administrator/operator simply needs to tell us 
the total number of IOPS that can be committed to it and an overcommitment 
factor. As we allocate volumes to that DataStore, we sum up the committed IOPS 
of the existing Volumes attached to the DataStore, apply the overcommitment 
factor, and determine whether or not the requested minimum IOPS for the new 
volume can be fulfilled. We can provide both general and vendor specific 
documentation for determining these values -- be they to consume the entire 
device or a portion of it. 

Querying the device is unnecessary and deceptive. CloudStack resource 
management is not interested in the current state of the device which could be 
anywhere from extremely heavy to extremely light at any given time. We are 
interested in the worst case load that is anticipated for resource. In my view, 
it is up to administrators/operators to instrument their environment to 
understand usage patterns and capacity. We should provide information that will 
help determine what should be instrumented/monitored, but that function should 
be performed outside of CloudStack. 



Thanks, 
-John 

On Jun 14, 2013, at 2:20 PM, Simon Weller  wrote: 

> I'd like to comment on this briefly. 
> 
> 
> 
> I think an assumption is being made that the SAN is being dedicated to a CS 
> instance. 
> 
> My person opinion that this whole IOPS calculation is getting rather 
> complicated, and could probably be much simpler than this. Over subscription 
> is a fact of life on virtually all storage, and is really no different in 
> concept than multiple virt instances on a single piece of hardware. All 
> decent SANs offer many management options for the storage engineers to keep 
> track of IOPS utilization, and plan for spindle augmentation as required. 
> Is it really the job of CS to become yet another management layer on top of 
> this? 
> 
> - Original Message - 
> 
> From: "Mike Tutkowski"  
> To: dev@cloudstack.apache.org 
> Cc: "John Burwell" , "Wei Zhou"  
> Sent: Friday, June 14, 2013 1:00:26 PM 
> Subject: Re: [MERGE] disk_io_throttling to MASTER 
> 
> 1) We want number of IOPS currently supported by the SAN. 
> 
> 2) We want the number of IOPS that are committed (sum of min IOPS for each 
> volume). 
> 
> We could do the following to keep track of IOPS: 
> 
> The plug-in could have a timer thread that goes off every, say, 1 minute. 
> 
> It could query the SAN for the number of nodes that make up the SAN and 
> multiple this by 50,000. This is essentially the number of supported IOPS 
> of the SAN. 
> 
> The next API call could be to get all of the volumes on the SAN. Iterate 
> through them all and add up their min IOPS values. This is the number of 
> IOPS the SAN is committed to. 
> 
> These two numbers can then be updated in the storage_pool table (a column 
> for each value). 
> 
> The allocators can get these values as needed (and they would be as 
> accurate as the last time the thread asked the SAN for this info). 
> 
> These two fields, the min IOPS of the v

Re: [MERGE] disk_io_throttling to MASTER

2013-06-14 Thread John Burwell
Simon,

I completely agree with you regarding SAN complexities and their associated 
performance guarantees.  To that end, I am encouraging that we start out with a 
very simple approach -- let the operator tell us how many IOPS we can commit to 
the device rather than trying to somehow devise it from a management API.  For 
4.2, my primary concern is ensuring that we never oversubscribe the device 
which is a much easier problem than optimizing allocation to get the maximum 
IOPS from it.  Oversubscription is a system stability issue against which we 
must always guard.  In terms of optimization, I propose that we attack that 
problem in a later release once we have strong protections against 
oversubscription.

In terms of homogenizing storage (or hypervisors or network devices), it is a 
side effect of the integration and resource management provided by CloudStack.  
There is an inherent trade-off made between flexibility and optimization when 
adopting platforms such as CloudStack.  My personal goal is to minimize that 
trade-off as much as possible which, in my experience, is best achieved by 
keeping things as simple as possible.

Thanks,
-John

On Jun 14, 2013, at 4:39 PM, Simon Weller  wrote:

> 
> John, 
> 
> 
> I'm not arguing that CloudStack's job isn't to provide resource management. 
> The challenge here is we're talking about managing a resource that is 
> extremely complex in a very 'one size fits all' manner. For example, lets say 
> you have a SAN that supports storage tiering, and can dynamically move data 
> shards to different tiers of disks where the IOP vs capacity varies. So your 
> data shard starts on an array of fast, lower capacity disks, and based on 
> various criteria gets relocated to slower, higher capacity disks. In this 
> scenario , your IOP max capacity has just changed for some subset (or all) of 
> the data tied to this primary storage object based on some usage profile. 
> Likewise, you may have some high use shards that never get demoted to larger 
> disks, so you run out of your primary tier storage capacity. 
> My point is, It's hard to account for these scenarios in an absolute world. 
> I'm just concerned that we're getting ourselves tied up trying to paint all 
> storage as being the same, when in fact every product and project whether 
> commercial or open source has a different set of features and objectives. 
> 
> 
> - Si 
> - Original Message -
> 
> From: "John Burwell"  
> To: dev@cloudstack.apache.org 
> Sent: Friday, June 14, 2013 2:59:47 PM 
> Subject: Re: [MERGE] disk_io_throttling to MASTER 
> 
> Simon, 
> 
> Yes, it is CloudStack's job to protect, as best it can, from oversubscribing 
> resources. I would argue that resource management is one, if not the most, 
> important functions of the system. It is no different than the 
> allocation/planning performed for hosts relative to cores and memory. We can 
> still oversubscribe resources, but we have rails + knobs and dials to avoid 
> it. Without these controls in place, we could easily allow users to deploy 
> workloads that overrun resources harming all tenants. 
> 
> 
> 
> I also think that we are over thinking this issue for provisioned IOPS. When 
> the DataStore is configured, the administrator/operator simply needs to tell 
> us the total number of IOPS that can be committed to it and an overcommitment 
> factor. As we allocate volumes to that DataStore, we sum up the committed 
> IOPS of the existing Volumes attached to the DataStore, apply the 
> overcommitment factor, and determine whether or not the requested minimum 
> IOPS for the new volume can be fulfilled. We can provide both general and 
> vendor specific documentation for determining these values -- be they to 
> consume the entire device or a portion of it. 
> 
> Querying the device is unnecessary and deceptive. CloudStack resource 
> management is not interested in the current state of the device which could 
> be anywhere from extremely heavy to extremely light at any given time. We are 
> interested in the worst case load that is anticipated for resource. In my 
> view, it is up to administrators/operators to instrument their environment to 
> understand usage patterns and capacity. We should provide information that 
> will help determine what should be instrumented/monitored, but that function 
> should be performed outside of CloudStack. 
> 
> 
> 
> Thanks, 
> -John 
> 
> On Jun 14, 2013, at 2:20 PM, Simon Weller  wrote: 
> 
>> I'd like to comment on this briefly. 
>> 
>> 
>> 
>> I think an assumption is being made that the SAN is being dedicated to a CS 
>> instance. 
>> 
>> My person opinion that this whole IOPS calculation is getting rather 
>> complicated, and could probably be much simpler than this. Over subscription 
>> is a fact of life on virtually all storage, and is really no different in 
>> concept than multiple virt instances on a single piece of hardware. All 
>> decent SANs offer many management options 

Re: SRX Integration Issues.

2013-06-14 Thread Sean Truman
SOLVED: My password had a $ in it.. which has to be escaped.. I added more
logging to the SRX source to track it down.

v/r
Sean


On Fri, Jun 14, 2013 at 1:00 PM, Sean Truman  wrote:

> Looking through the source their isn't much logging, plus it's all over
> SSL so I cannot see the traffic being passed using tcpdump.
>
> Sean
>
> On Jun 14, 2013, at 12:54 PM, Sheng Yang  wrote:
>
> > It looks like a string issue of Java itself. What exactly failed on
> > test.xml?
> >
> > --Sheng
> >
> >
> > On Fri, Jun 14, 2013 at 9:55 AM, Sean Truman  wrote:
> >
> >> I am using untagged VLAN on my public side. It's failing on the test.xml
> >> looking for trust group!
> >>
> >> Sean
> >>
> >> On Jun 14, 2013, at 11:51 AM, Jayapal Reddy Uradi <
> >> jayapalreddy.ur...@citrix.com> wrote:
> >>
> >>> Hi,
> >>>
> >>> I am not sure about the error but please see the below example
> >> configuration and correct your configuration.
> >>>
> >>>
> >>> Example confirmation:
> >>>
>  Public Interface: fe-0/0/4.52
>  Private Interface: fe-0/0/1
> >>>
> >>> fe-0/0/1 - private interface
> >>> fe-0/0/4.52 - public interface where my public network vlan id is 52.
> >>>
> >>> Example commands:
> >>> set interfaces fe-0/0/1 description "Private network"
> >>> set interfaces fe-0/0/1 vlan-tagging
> >>>
> >>> set interfaces fe-0/0/4 unit 52 vlan-id 52
> >>> set interfaces fe-0/0/4 unit 52 family inet filter input untrust
> >>>
> >>> Thanks,
> >>> Jayapal
> >>>
> >>> On 14-Jun-2013, at 9:42 PM, Sean Truman 
> >>> wrote:
> >>>
>  All,
> 
>  I am trying to add an SRX 100 to Cloud Stack and keep getting a
> "Illegal
>  Group Reference"
> 
>  Here is how I am trying to add the config.
>  IP Address: 10.0.2.1
>  Username: root
>  Password: password
>  Type: Juniper SRX Firewall
>  Public Interface: fe-0/0/0.0
>  Private Interface: fe-0/0/1.0
>  Usage interface:
>  Number of Retries: 2
>  Timeout: 300
>  Public network: untrust
>  Private network: trust
>  Capacity: 10
> 
> 
> 
>  Here is my SRX configuration.
> 
>  http://pastebin.com/nTVEM92p
> 
> 
>  Here is the only logs I get from management-server.log
> 
>  http://pastebin.com/pWB0Kbtu
> 
>  Any help would be greatly appreciated.
> 
>  v/r
>  Sean
> >>
>


Re: SRX Integration Issues.

2013-06-14 Thread Sheng Yang
I meant, this looks like mgmt server Java error rather than SRX error.

This is from your log:

   1. 2013-06-14 09:26:29,327 WARN  [cloud.api.ApiDispatcher]
   (Job-Executor-37:job-65) class com.cloud.api.ServerApiException : Illegal
   group reference


See also:
http://stackoverflow.com/questions/11913709/why-does-replaceall-fail-with-illegal-group-reference
http://cephas.net/blog/2006/02/09/javalangillegalargumentexception-illegal-group-reference-replaceall-and-dollar-signs/
http://webtrouble.blogspot.com/2009/04/java-illegal-group-reference.html

Maybe some substitute functions got wrong in the code due to the string got
illegal characters.

e.g. replaceXmlValue() in JuniperSrxResource.java used replaceAll()
function that may result in "Illegal group reference exception."
private String replaceXmlValue(String xml, String marker, String value)
{
marker = "\\s*%" + marker + "%\\s*";

if (value == null) {
value = "";
}

return xml.replaceAll(marker, value);
}

--Sheng


On Fri, Jun 14, 2013 at 11:00 AM, Sean Truman  wrote:

> Looking through the source their isn't much logging, plus it's all over
> SSL so I cannot see the traffic being passed using tcpdump.
>
> Sean
>
> On Jun 14, 2013, at 12:54 PM, Sheng Yang  wrote:
>
> > It looks like a string issue of Java itself. What exactly failed on
> > test.xml?
> >
> > --Sheng
> >
> >
> > On Fri, Jun 14, 2013 at 9:55 AM, Sean Truman  wrote:
> >
> >> I am using untagged VLAN on my public side. It's failing on the test.xml
> >> looking for trust group!
> >>
> >> Sean
> >>
> >> On Jun 14, 2013, at 11:51 AM, Jayapal Reddy Uradi <
> >> jayapalreddy.ur...@citrix.com> wrote:
> >>
> >>> Hi,
> >>>
> >>> I am not sure about the error but please see the below example
> >> configuration and correct your configuration.
> >>>
> >>>
> >>> Example confirmation:
> >>>
>  Public Interface: fe-0/0/4.52
>  Private Interface: fe-0/0/1
> >>>
> >>> fe-0/0/1 - private interface
> >>> fe-0/0/4.52 - public interface where my public network vlan id is 52.
> >>>
> >>> Example commands:
> >>> set interfaces fe-0/0/1 description "Private network"
> >>> set interfaces fe-0/0/1 vlan-tagging
> >>>
> >>> set interfaces fe-0/0/4 unit 52 vlan-id 52
> >>> set interfaces fe-0/0/4 unit 52 family inet filter input untrust
> >>>
> >>> Thanks,
> >>> Jayapal
> >>>
> >>> On 14-Jun-2013, at 9:42 PM, Sean Truman 
> >>> wrote:
> >>>
>  All,
> 
>  I am trying to add an SRX 100 to Cloud Stack and keep getting a
> "Illegal
>  Group Reference"
> 
>  Here is how I am trying to add the config.
>  IP Address: 10.0.2.1
>  Username: root
>  Password: password
>  Type: Juniper SRX Firewall
>  Public Interface: fe-0/0/0.0
>  Private Interface: fe-0/0/1.0
>  Usage interface:
>  Number of Retries: 2
>  Timeout: 300
>  Public network: untrust
>  Private network: trust
>  Capacity: 10
> 
> 
> 
>  Here is my SRX configuration.
> 
>  http://pastebin.com/nTVEM92p
> 
> 
>  Here is the only logs I get from management-server.log
> 
>  http://pastebin.com/pWB0Kbtu
> 
>  Any help would be greatly appreciated.
> 
>  v/r
>  Sean
> >>
>


Re: SRX Integration Issues.

2013-06-14 Thread Sheng Yang
Oh yes, that explain the thing...

--Sheng


On Fri, Jun 14, 2013 at 1:56 PM, Sean Truman  wrote:

> SOLVED: My password had a $ in it.. which has to be escaped.. I added more
> logging to the SRX source to track it down.
>
> v/r
> Sean
>
>
> On Fri, Jun 14, 2013 at 1:00 PM, Sean Truman  wrote:
>
> > Looking through the source their isn't much logging, plus it's all over
> > SSL so I cannot see the traffic being passed using tcpdump.
> >
> > Sean
> >
> > On Jun 14, 2013, at 12:54 PM, Sheng Yang  wrote:
> >
> > > It looks like a string issue of Java itself. What exactly failed on
> > > test.xml?
> > >
> > > --Sheng
> > >
> > >
> > > On Fri, Jun 14, 2013 at 9:55 AM, Sean Truman 
> wrote:
> > >
> > >> I am using untagged VLAN on my public side. It's failing on the
> test.xml
> > >> looking for trust group!
> > >>
> > >> Sean
> > >>
> > >> On Jun 14, 2013, at 11:51 AM, Jayapal Reddy Uradi <
> > >> jayapalreddy.ur...@citrix.com> wrote:
> > >>
> > >>> Hi,
> > >>>
> > >>> I am not sure about the error but please see the below example
> > >> configuration and correct your configuration.
> > >>>
> > >>>
> > >>> Example confirmation:
> > >>>
> >  Public Interface: fe-0/0/4.52
> >  Private Interface: fe-0/0/1
> > >>>
> > >>> fe-0/0/1 - private interface
> > >>> fe-0/0/4.52 - public interface where my public network vlan id is 52.
> > >>>
> > >>> Example commands:
> > >>> set interfaces fe-0/0/1 description "Private network"
> > >>> set interfaces fe-0/0/1 vlan-tagging
> > >>>
> > >>> set interfaces fe-0/0/4 unit 52 vlan-id 52
> > >>> set interfaces fe-0/0/4 unit 52 family inet filter input untrust
> > >>>
> > >>> Thanks,
> > >>> Jayapal
> > >>>
> > >>> On 14-Jun-2013, at 9:42 PM, Sean Truman 
> > >>> wrote:
> > >>>
> >  All,
> > 
> >  I am trying to add an SRX 100 to Cloud Stack and keep getting a
> > "Illegal
> >  Group Reference"
> > 
> >  Here is how I am trying to add the config.
> >  IP Address: 10.0.2.1
> >  Username: root
> >  Password: password
> >  Type: Juniper SRX Firewall
> >  Public Interface: fe-0/0/0.0
> >  Private Interface: fe-0/0/1.0
> >  Usage interface:
> >  Number of Retries: 2
> >  Timeout: 300
> >  Public network: untrust
> >  Private network: trust
> >  Capacity: 10
> > 
> > 
> > 
> >  Here is my SRX configuration.
> > 
> >  http://pastebin.com/nTVEM92p
> > 
> > 
> >  Here is the only logs I get from management-server.log
> > 
> >  http://pastebin.com/pWB0Kbtu
> > 
> >  Any help would be greatly appreciated.
> > 
> >  v/r
> >  Sean
> > >>
> >
>


Re: SRX Integration Issues.

2013-06-14 Thread Sean Truman
I am going to enter a bug for this and submit a patch..

v/r
Sean


On Fri, Jun 14, 2013 at 4:03 PM, Sheng Yang  wrote:

> Oh yes, that explain the thing...
>
> --Sheng
>
>
> On Fri, Jun 14, 2013 at 1:56 PM, Sean Truman  wrote:
>
> > SOLVED: My password had a $ in it.. which has to be escaped.. I added
> more
> > logging to the SRX source to track it down.
> >
> > v/r
> > Sean
> >
> >
> > On Fri, Jun 14, 2013 at 1:00 PM, Sean Truman  wrote:
> >
> > > Looking through the source their isn't much logging, plus it's all over
> > > SSL so I cannot see the traffic being passed using tcpdump.
> > >
> > > Sean
> > >
> > > On Jun 14, 2013, at 12:54 PM, Sheng Yang  wrote:
> > >
> > > > It looks like a string issue of Java itself. What exactly failed on
> > > > test.xml?
> > > >
> > > > --Sheng
> > > >
> > > >
> > > > On Fri, Jun 14, 2013 at 9:55 AM, Sean Truman 
> > wrote:
> > > >
> > > >> I am using untagged VLAN on my public side. It's failing on the
> > test.xml
> > > >> looking for trust group!
> > > >>
> > > >> Sean
> > > >>
> > > >> On Jun 14, 2013, at 11:51 AM, Jayapal Reddy Uradi <
> > > >> jayapalreddy.ur...@citrix.com> wrote:
> > > >>
> > > >>> Hi,
> > > >>>
> > > >>> I am not sure about the error but please see the below example
> > > >> configuration and correct your configuration.
> > > >>>
> > > >>>
> > > >>> Example confirmation:
> > > >>>
> > >  Public Interface: fe-0/0/4.52
> > >  Private Interface: fe-0/0/1
> > > >>>
> > > >>> fe-0/0/1 - private interface
> > > >>> fe-0/0/4.52 - public interface where my public network vlan id is
> 52.
> > > >>>
> > > >>> Example commands:
> > > >>> set interfaces fe-0/0/1 description "Private network"
> > > >>> set interfaces fe-0/0/1 vlan-tagging
> > > >>>
> > > >>> set interfaces fe-0/0/4 unit 52 vlan-id 52
> > > >>> set interfaces fe-0/0/4 unit 52 family inet filter input untrust
> > > >>>
> > > >>> Thanks,
> > > >>> Jayapal
> > > >>>
> > > >>> On 14-Jun-2013, at 9:42 PM, Sean Truman 
> > > >>> wrote:
> > > >>>
> > >  All,
> > > 
> > >  I am trying to add an SRX 100 to Cloud Stack and keep getting a
> > > "Illegal
> > >  Group Reference"
> > > 
> > >  Here is how I am trying to add the config.
> > >  IP Address: 10.0.2.1
> > >  Username: root
> > >  Password: password
> > >  Type: Juniper SRX Firewall
> > >  Public Interface: fe-0/0/0.0
> > >  Private Interface: fe-0/0/1.0
> > >  Usage interface:
> > >  Number of Retries: 2
> > >  Timeout: 300
> > >  Public network: untrust
> > >  Private network: trust
> > >  Capacity: 10
> > > 
> > > 
> > > 
> > >  Here is my SRX configuration.
> > > 
> > >  http://pastebin.com/nTVEM92p
> > > 
> > > 
> > >  Here is the only logs I get from management-server.log
> > > 
> > >  http://pastebin.com/pWB0Kbtu
> > > 
> > >  Any help would be greatly appreciated.
> > > 
> > >  v/r
> > >  Sean
> > > >>
> > >
> >
>


Re: Bugs on Master

2013-06-14 Thread Chiradeep Vittal
Did you build the nonoss build? You have to add the SRX provider using
addNetworkServiceProvider api, enable it and then the drop down for
network offering should work.


On 6/14/13 12:09 PM, "Will Stevens"  wrote:

>BTW, I am using cloudmonkey 4.1.0...  Thx
>
>
>On Fri, Jun 14, 2013 at 3:04 PM, Will Stevens 
>wrote:
>
>> Chiradeep, can you send me the format of the cloudmonkey call for the
>>api
>> request 'createNetworkOffering' with 'supportedservices' of
>> 'dhcp:virtualrouter', 'dns:virtualrouter', 'firewall:junipersrx'.  I can
>> not figure out the format of this call.
>>
>> I have confirmed that I can reproduce the issue of not being able to
>> select capability dropdowns in multiple browsers on master.
>>
>> Thanks,
>>
>> Will
>>
>>
>> On Fri, Jun 14, 2013 at 1:56 PM, Will Stevens
>>wrote:
>>
>>> I will try that.  I am doing some testing right now.  I am compiling
>>>and
>>> running just master now to validate everything.
>>>
>>> I will be in touch when I have more details...
>>>
>>> ws
>>>
>>>
>>> On Fri, Jun 14, 2013 at 1:20 PM, Chiradeep Vittal <
>>> chiradeep.vit...@citrix.com> wrote:
>>>
 Are you able to use CloudMonkey? Perhaps it is a UI issue?

 On 6/14/13 9:50 AM, "Will Stevens"  wrote:

 >11 days ago I pulled the master code into my branch.  Master was at:
 >48913679e80e50228b1bd4b3d17fe5245461626a
 >
 >When I pulled, I had Egress firewall rules working perfectly.  After
the
 >pull I now get the following error when trying to create Egress
firewall
 >rules:
 >ERROR [cloud.api.ApiServer] (1784147987@qtp-213982037-11:) unhandled
 >exception executing api command: createEgressFirewallRule
 >java.lang.NullPointerException
 >at

 
>com.cloud.network.firewall.FirewallManagerImpl.validateFirewallRule(Fi
>rewa
 >llManagerImpl.java:485)
 >at

 
>com.cloud.network.firewall.FirewallManagerImpl.createFirewallRule(Fire
>wall
 >ManagerImpl.java:191)
 >at

 
>com.cloud.utils.component.ComponentInstantiationPostProcessor$Intercep
>torD
 >ispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
 >at

 
>com.cloud.network.firewall.FirewallManagerImpl.createEgressFirewallRul
>e(Fi
 >rewallManagerImpl.java:157)
 >at

 
>org.apache.cloudstack.api.command.user.firewall.CreateEgressFirewallRu
>leCm
 >d.create(CreateEgressFirewallRuleCmd.java:252)
 >at 
com.cloud.api.ApiDispatcher.dispatchCreateCmd(ApiDispatcher.java:101)
 >at com.cloud.api.ApiServer.queueCommand(ApiServer.java:471)
 >at com.cloud.api.ApiServer.handleRequest(ApiServer.java:367)
 >at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:304)
 >at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
 >at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
 >at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
 >at
 org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
 >at
 
>org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:40
>1)
 >at

 
>org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java
>:216
 >)
 >at
 
>org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:18
>2)
 >at
 
>org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:76
>6)
 >at 
org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
 >at

 
>org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandl
>erCo
 >llection.java:230)
 >at

 
>org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.j
>ava:
 >114)
 >at
 
>org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:15
>2)
 >at org.mortbay.jetty.Server.handle(Server.java:326)
 >at
 
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
 >at

 
>org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpCon
>nect
 >ion.java:928)
 >at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
 >at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
 >at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
 >at

 
>org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.jav
>a:41
 >0)
 >at

 
>org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.ja
>va:5
 >82)
 >
 >---
 >
 >So I merged in master this morning to see if that issue was fixed.
Now
 I
 >can not create a Network Service offering and select anything but
 Virtual
 >Router from any of the dropdowns for capabilities such as 'Firewall',
 >'Source NAT', etc...
 >
 >There are no JS errors, the dropdown just sits and thinks about it
for a
>

Re: Bugs on Master

2013-06-14 Thread Sheng Yang
Could you check if following commit fixed your problem? It's checked in 3
hours ago.

commit 4b2eb18cfc82093640b2cb6c47c0378e69b9f8a2
Author: Jessica Wang 
Date:   Fri Jun 14 14:17:50 2013 -0700

CLOUDSTACK-2981: UI - create network offering - fix a bug that provider
dropdown always bounced back to the first enabled option. It should only
bounce back to the first enabled option when the selected option is
disabled.

--Sheng


On Fri, Jun 14, 2013 at 4:39 PM, Chiradeep Vittal <
chiradeep.vit...@citrix.com> wrote:

> Did you build the nonoss build? You have to add the SRX provider using
> addNetworkServiceProvider api, enable it and then the drop down for
> network offering should work.
>
>
> On 6/14/13 12:09 PM, "Will Stevens"  wrote:
>
> >BTW, I am using cloudmonkey 4.1.0...  Thx
> >
> >
> >On Fri, Jun 14, 2013 at 3:04 PM, Will Stevens 
> >wrote:
> >
> >> Chiradeep, can you send me the format of the cloudmonkey call for the
> >>api
> >> request 'createNetworkOffering' with 'supportedservices' of
> >> 'dhcp:virtualrouter', 'dns:virtualrouter', 'firewall:junipersrx'.  I can
> >> not figure out the format of this call.
> >>
> >> I have confirmed that I can reproduce the issue of not being able to
> >> select capability dropdowns in multiple browsers on master.
> >>
> >> Thanks,
> >>
> >> Will
> >>
> >>
> >> On Fri, Jun 14, 2013 at 1:56 PM, Will Stevens
> >>wrote:
> >>
> >>> I will try that.  I am doing some testing right now.  I am compiling
> >>>and
> >>> running just master now to validate everything.
> >>>
> >>> I will be in touch when I have more details...
> >>>
> >>> ws
> >>>
> >>>
> >>> On Fri, Jun 14, 2013 at 1:20 PM, Chiradeep Vittal <
> >>> chiradeep.vit...@citrix.com> wrote:
> >>>
>  Are you able to use CloudMonkey? Perhaps it is a UI issue?
> 
>  On 6/14/13 9:50 AM, "Will Stevens"  wrote:
> 
>  >11 days ago I pulled the master code into my branch.  Master was at:
>  >48913679e80e50228b1bd4b3d17fe5245461626a
>  >
>  >When I pulled, I had Egress firewall rules working perfectly.  After
> the
>  >pull I now get the following error when trying to create Egress
> firewall
>  >rules:
>  >ERROR [cloud.api.ApiServer] (1784147987@qtp-213982037-11:) unhandled
>  >exception executing api command: createEgressFirewallRule
>  >java.lang.NullPointerException
>  >at
> 
> 
> >com.cloud.network.firewall.FirewallManagerImpl.validateFirewallRule(Fi
> >rewa
>  >llManagerImpl.java:485)
>  >at
> 
> 
> >com.cloud.network.firewall.FirewallManagerImpl.createFirewallRule(Fire
> >wall
>  >ManagerImpl.java:191)
>  >at
> 
> 
> >com.cloud.utils.component.ComponentInstantiationPostProcessor$Intercep
> >torD
>  >ispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
>  >at
> 
> 
> >com.cloud.network.firewall.FirewallManagerImpl.createEgressFirewallRul
> >e(Fi
>  >rewallManagerImpl.java:157)
>  >at
> 
> 
> >org.apache.cloudstack.api.command.user.firewall.CreateEgressFirewallRu
> >leCm
>  >d.create(CreateEgressFirewallRuleCmd.java:252)
>  >at
> com.cloud.api.ApiDispatcher.dispatchCreateCmd(ApiDispatcher.java:101)
>  >at com.cloud.api.ApiServer.queueCommand(ApiServer.java:471)
>  >at com.cloud.api.ApiServer.handleRequest(ApiServer.java:367)
>  >at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:304)
>  >at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
>  >at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>  >at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>  >at
>  org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>  >at
> 
> >org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:40
> >1)
>  >at
> 
> 
> >org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java
> >:216
>  >)
>  >at
> 
> >org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:18
> >2)
>  >at
> 
> >org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:76
> >6)
>  >at
> org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>  >at
> 
> 
> >org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandl
> >erCo
>  >llection.java:230)
>  >at
> 
> 
> >org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.j
> >ava:
>  >114)
>  >at
> 
> >org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:15
> >2)
>  >at org.mortbay.jetty.Server.handle(Server.java:326)
>  >at
> 
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>  >at
> 
> 
> >org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpCon
> >nect
>  >ion.java:928)
>  >at org.mortbay.jetty.HttpParser.parseNext

Re: Bugs on Master

2013-06-14 Thread Will Stevens
Yes, I am building nonoss.  I actually have written my own network service
provider plugin, so I was just using junipersrx as an example. I will
actually be using my own.

My problem is that the API docs for createNetworkOffering (
http://cloudstack.apache.org/docs/api/apidocs-4.1/root_admin/createNetworkOffering.html)
do not have any documentation for how '*supportedservices*' is supposed to
be formatted when it is passed via the API.  I believe it should be an
array of objects which have things like 'name', 'provider', etc, but there
is no documentation for how that should be formatted when it is passed via
the API.  It appears there is a documentation gap here.

Sheng, I just saw your note.  I will repull to see if it fixes the problem.


Thanks...


On Fri, Jun 14, 2013 at 7:39 PM, Chiradeep Vittal <
chiradeep.vit...@citrix.com> wrote:

> Did you build the nonoss build? You have to add the SRX provider using
> addNetworkServiceProvider api, enable it and then the drop down for
> network offering should work.
>
>
> On 6/14/13 12:09 PM, "Will Stevens"  wrote:
>
> >BTW, I am using cloudmonkey 4.1.0...  Thx
> >
> >
> >On Fri, Jun 14, 2013 at 3:04 PM, Will Stevens 
> >wrote:
> >
> >> Chiradeep, can you send me the format of the cloudmonkey call for the
> >>api
> >> request 'createNetworkOffering' with 'supportedservices' of
> >> 'dhcp:virtualrouter', 'dns:virtualrouter', 'firewall:junipersrx'.  I can
> >> not figure out the format of this call.
> >>
> >> I have confirmed that I can reproduce the issue of not being able to
> >> select capability dropdowns in multiple browsers on master.
> >>
> >> Thanks,
> >>
> >> Will
> >>
> >>
> >> On Fri, Jun 14, 2013 at 1:56 PM, Will Stevens
> >>wrote:
> >>
> >>> I will try that.  I am doing some testing right now.  I am compiling
> >>>and
> >>> running just master now to validate everything.
> >>>
> >>> I will be in touch when I have more details...
> >>>
> >>> ws
> >>>
> >>>
> >>> On Fri, Jun 14, 2013 at 1:20 PM, Chiradeep Vittal <
> >>> chiradeep.vit...@citrix.com> wrote:
> >>>
>  Are you able to use CloudMonkey? Perhaps it is a UI issue?
> 
>  On 6/14/13 9:50 AM, "Will Stevens"  wrote:
> 
>  >11 days ago I pulled the master code into my branch.  Master was at:
>  >48913679e80e50228b1bd4b3d17fe5245461626a
>  >
>  >When I pulled, I had Egress firewall rules working perfectly.  After
> the
>  >pull I now get the following error when trying to create Egress
> firewall
>  >rules:
>  >ERROR [cloud.api.ApiServer] (1784147987@qtp-213982037-11:) unhandled
>  >exception executing api command: createEgressFirewallRule
>  >java.lang.NullPointerException
>  >at
> 
> 
> >com.cloud.network.firewall.FirewallManagerImpl.validateFirewallRule(Fi
> >rewa
>  >llManagerImpl.java:485)
>  >at
> 
> 
> >com.cloud.network.firewall.FirewallManagerImpl.createFirewallRule(Fire
> >wall
>  >ManagerImpl.java:191)
>  >at
> 
> 
> >com.cloud.utils.component.ComponentInstantiationPostProcessor$Intercep
> >torD
>  >ispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
>  >at
> 
> 
> >com.cloud.network.firewall.FirewallManagerImpl.createEgressFirewallRul
> >e(Fi
>  >rewallManagerImpl.java:157)
>  >at
> 
> 
> >org.apache.cloudstack.api.command.user.firewall.CreateEgressFirewallRu
> >leCm
>  >d.create(CreateEgressFirewallRuleCmd.java:252)
>  >at
> com.cloud.api.ApiDispatcher.dispatchCreateCmd(ApiDispatcher.java:101)
>  >at com.cloud.api.ApiServer.queueCommand(ApiServer.java:471)
>  >at com.cloud.api.ApiServer.handleRequest(ApiServer.java:367)
>  >at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:304)
>  >at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
>  >at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>  >at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>  >at
>  org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>  >at
> 
> >org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:40
> >1)
>  >at
> 
> 
> >org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java
> >:216
>  >)
>  >at
> 
> >org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:18
> >2)
>  >at
> 
> >org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:76
> >6)
>  >at
> org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>  >at
> 
> 
> >org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandl
> >erCo
>  >llection.java:230)
>  >at
> 
> 
> >org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.j
> >ava:
>  >114)
>  >at
> 
> >org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:15
> >2)
>  >at o

Re: Bugs on Master

2013-06-14 Thread Will Stevens
Thanks Sheng, that fixed the problem with the UI.


On Fri, Jun 14, 2013 at 8:06 PM, Will Stevens  wrote:

> Yes, I am building nonoss.  I actually have written my own network service
> provider plugin, so I was just using junipersrx as an example. I will
> actually be using my own.
>
> My problem is that the API docs for createNetworkOffering (
> http://cloudstack.apache.org/docs/api/apidocs-4.1/root_admin/createNetworkOffering.html)
> do not have any documentation for how '*supportedservices*' is supposed
> to be formatted when it is passed via the API.  I believe it should be an
> array of objects which have things like 'name', 'provider', etc, but there
> is no documentation for how that should be formatted when it is passed via
> the API.  It appears there is a documentation gap here.
>
> Sheng, I just saw your note.  I will repull to see if it fixes the
> problem.
>
> Thanks...
>
>
> On Fri, Jun 14, 2013 at 7:39 PM, Chiradeep Vittal <
> chiradeep.vit...@citrix.com> wrote:
>
>> Did you build the nonoss build? You have to add the SRX provider using
>> addNetworkServiceProvider api, enable it and then the drop down for
>> network offering should work.
>>
>>
>> On 6/14/13 12:09 PM, "Will Stevens"  wrote:
>>
>> >BTW, I am using cloudmonkey 4.1.0...  Thx
>> >
>> >
>> >On Fri, Jun 14, 2013 at 3:04 PM, Will Stevens 
>> >wrote:
>> >
>> >> Chiradeep, can you send me the format of the cloudmonkey call for the
>> >>api
>> >> request 'createNetworkOffering' with 'supportedservices' of
>> >> 'dhcp:virtualrouter', 'dns:virtualrouter', 'firewall:junipersrx'.  I
>> can
>> >> not figure out the format of this call.
>> >>
>> >> I have confirmed that I can reproduce the issue of not being able to
>> >> select capability dropdowns in multiple browsers on master.
>> >>
>> >> Thanks,
>> >>
>> >> Will
>> >>
>> >>
>> >> On Fri, Jun 14, 2013 at 1:56 PM, Will Stevens
>> >>wrote:
>> >>
>> >>> I will try that.  I am doing some testing right now.  I am compiling
>> >>>and
>> >>> running just master now to validate everything.
>> >>>
>> >>> I will be in touch when I have more details...
>> >>>
>> >>> ws
>> >>>
>> >>>
>> >>> On Fri, Jun 14, 2013 at 1:20 PM, Chiradeep Vittal <
>> >>> chiradeep.vit...@citrix.com> wrote:
>> >>>
>>  Are you able to use CloudMonkey? Perhaps it is a UI issue?
>> 
>>  On 6/14/13 9:50 AM, "Will Stevens"  wrote:
>> 
>>  >11 days ago I pulled the master code into my branch.  Master was at:
>>  >48913679e80e50228b1bd4b3d17fe5245461626a
>>  >
>>  >When I pulled, I had Egress firewall rules working perfectly.  After
>> the
>>  >pull I now get the following error when trying to create Egress
>> firewall
>>  >rules:
>>  >ERROR [cloud.api.ApiServer] (1784147987@qtp-213982037-11:)
>> unhandled
>>  >exception executing api command: createEgressFirewallRule
>>  >java.lang.NullPointerException
>>  >at
>> 
>> 
>>
>> >com.cloud.network.firewall.FirewallManagerImpl.validateFirewallRule(Fi
>> >rewa
>>  >llManagerImpl.java:485)
>>  >at
>> 
>> 
>>
>> >com.cloud.network.firewall.FirewallManagerImpl.createFirewallRule(Fire
>> >wall
>>  >ManagerImpl.java:191)
>>  >at
>> 
>> 
>>
>> >com.cloud.utils.component.ComponentInstantiationPostProcessor$Intercep
>> >torD
>>  >ispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
>>  >at
>> 
>> 
>>
>> >com.cloud.network.firewall.FirewallManagerImpl.createEgressFirewallRul
>> >e(Fi
>>  >rewallManagerImpl.java:157)
>>  >at
>> 
>> 
>>
>> >org.apache.cloudstack.api.command.user.firewall.CreateEgressFirewallRu
>> >leCm
>>  >d.create(CreateEgressFirewallRuleCmd.java:252)
>>  >at
>> com.cloud.api.ApiDispatcher.dispatchCreateCmd(ApiDispatcher.java:101)
>>  >at com.cloud.api.ApiServer.queueCommand(ApiServer.java:471)
>>  >at com.cloud.api.ApiServer.handleRequest(ApiServer.java:367)
>>  >at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:304)
>>  >at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
>>  >at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>>  >at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>>  >at
>> 
>> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>>  >at
>> 
>>
>> >org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:40
>> >1)
>>  >at
>> 
>> 
>>
>> >org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java
>> >:216
>>  >)
>>  >at
>> 
>>
>> >org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:18
>> >2)
>>  >at
>> 
>>
>> >org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:76
>> >6)
>>  >at
>> org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
>>  >at
>> 
>> 
>>
>> >org.mortbay.jetty.handler.ContextHandlerCollection.han

Re: Bugs on Master

2013-06-14 Thread Chiradeep Vittal
Now that the UI works, just use Firebug to figure out the API.
And file a doc bug on the API docs.
(and submit a fix for the said bug :))

On 6/14/13 5:43 PM, "Will Stevens"  wrote:

>Thanks Sheng, that fixed the problem with the UI.
>
>
>On Fri, Jun 14, 2013 at 8:06 PM, Will Stevens 
>wrote:
>
>> Yes, I am building nonoss.  I actually have written my own network
>>service
>> provider plugin, so I was just using junipersrx as an example. I will
>> actually be using my own.
>>
>> My problem is that the API docs for createNetworkOffering (
>> 
>>http://cloudstack.apache.org/docs/api/apidocs-4.1/root_admin/createNetwor
>>kOffering.html)
>> do not have any documentation for how '*supportedservices*' is supposed
>> to be formatted when it is passed via the API.  I believe it should be
>>an
>> array of objects which have things like 'name', 'provider', etc, but
>>there
>> is no documentation for how that should be formatted when it is passed
>>via
>> the API.  It appears there is a documentation gap here.
>>
>> Sheng, I just saw your note.  I will repull to see if it fixes the
>> problem.
>>
>> Thanks...
>>
>>
>> On Fri, Jun 14, 2013 at 7:39 PM, Chiradeep Vittal <
>> chiradeep.vit...@citrix.com> wrote:
>>
>>> Did you build the nonoss build? You have to add the SRX provider using
>>> addNetworkServiceProvider api, enable it and then the drop down for
>>> network offering should work.
>>>
>>>
>>> On 6/14/13 12:09 PM, "Will Stevens"  wrote:
>>>
>>> >BTW, I am using cloudmonkey 4.1.0...  Thx
>>> >
>>> >
>>> >On Fri, Jun 14, 2013 at 3:04 PM, Will Stevens 
>>> >wrote:
>>> >
>>> >> Chiradeep, can you send me the format of the cloudmonkey call for
>>>the
>>> >>api
>>> >> request 'createNetworkOffering' with 'supportedservices' of
>>> >> 'dhcp:virtualrouter', 'dns:virtualrouter', 'firewall:junipersrx'.  I
>>> can
>>> >> not figure out the format of this call.
>>> >>
>>> >> I have confirmed that I can reproduce the issue of not being able to
>>> >> select capability dropdowns in multiple browsers on master.
>>> >>
>>> >> Thanks,
>>> >>
>>> >> Will
>>> >>
>>> >>
>>> >> On Fri, Jun 14, 2013 at 1:56 PM, Will Stevens
>>> >>wrote:
>>> >>
>>> >>> I will try that.  I am doing some testing right now.  I am
>>>compiling
>>> >>>and
>>> >>> running just master now to validate everything.
>>> >>>
>>> >>> I will be in touch when I have more details...
>>> >>>
>>> >>> ws
>>> >>>
>>> >>>
>>> >>> On Fri, Jun 14, 2013 at 1:20 PM, Chiradeep Vittal <
>>> >>> chiradeep.vit...@citrix.com> wrote:
>>> >>>
>>>  Are you able to use CloudMonkey? Perhaps it is a UI issue?
>>> 
>>>  On 6/14/13 9:50 AM, "Will Stevens"  wrote:
>>> 
>>>  >11 days ago I pulled the master code into my branch.  Master was
>>>at:
>>>  >48913679e80e50228b1bd4b3d17fe5245461626a
>>>  >
>>>  >When I pulled, I had Egress firewall rules working perfectly.
>>>After
>>> the
>>>  >pull I now get the following error when trying to create Egress
>>> firewall
>>>  >rules:
>>>  >ERROR [cloud.api.ApiServer] (1784147987@qtp-213982037-11:)
>>> unhandled
>>>  >exception executing api command: createEgressFirewallRule
>>>  >java.lang.NullPointerException
>>>  >at
>>> 
>>> 
>>>
>>> 
com.cloud.network.firewall.FirewallManagerImpl.validateFirewallRule
(Fi
>>> >rewa
>>>  >llManagerImpl.java:485)
>>>  >at
>>> 
>>> 
>>>
>>> 
com.cloud.network.firewall.FirewallManagerImpl.createFirewallRule(F
ire
>>> >wall
>>>  >ManagerImpl.java:191)
>>>  >at
>>> 
>>> 
>>>
>>> 
com.cloud.utils.component.ComponentInstantiationPostProcessor$Inter
cep
>>> >torD
>>>  >ispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
>>>  >at
>>> 
>>> 
>>>
>>> 
com.cloud.network.firewall.FirewallManagerImpl.createEgressFirewall
Rul
>>> >e(Fi
>>>  >rewallManagerImpl.java:157)
>>>  >at
>>> 
>>> 
>>>
>>> 
org.apache.cloudstack.api.command.user.firewall.CreateEgressFirewal
lRu
>>> >leCm
>>>  >d.create(CreateEgressFirewallRuleCmd.java:252)
>>>  >at
>>> 
>>>com.cloud.api.ApiDispatcher.dispatchCreateCmd(ApiDispatcher.java:101
>>>)
>>>  >at com.cloud.api.ApiServer.queueCommand(ApiServer.java:471)
>>>  >at com.cloud.api.ApiServer.handleRequest(ApiServer.java:367)
>>>  >at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:304)
>>>  >at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
>>>  >at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>>>  >at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>>>  >at
>>> 
>>> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>>>  >at
>>> 
>>>
>>> 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java
:40
>>> >1)
>>>  >at
>>> 
>>> 
>>>
>>> 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityH

[MERGE]object_store branch into master [Second round]

2013-06-14 Thread Edison Su
Hi all, 
 The second round, call for merge object_store branch into master, is 
coming!
  The issues fixed:
   1. All the major issues addressed by John are fixed:
1.1 A cache storage replacement algorithm is added: 
StorageCacheReplacementAlgorithmLRU, based on the reference count and least 
recently used. 
1.2 A new S3 transport is added, can upload > 5G template into 
S3 directly.
1.3 Retry, if S3 upload failed.
1.4 some comments from https://reviews.apache.org/r/11277/, 
mostly, the coding style, are addressed and clean up some unused code.
 2. DB upgrade path from 4.1 to 4.2
 3. Bug fix

The size of the patch is even bigger now, around 10 LOC, you can find 
about the patch from https://reviews.apache.org/r/11277/diff/2/. 
 Comments/feedback are welcome. Thanks.

> -Original Message-
> From: Edison Su [mailto:edison...@citrix.com]
> Sent: Friday, May 17, 2013 1:11 AM
> To: dev@cloudstack.apache.org
> Subject: [MERGE]object_store branch into master
> 
> Hi all,
>  Min and I worked on object_store branch during the last one and half
> month. We made a lot of refactor on the storage code, mostly related to
> secondary storage, but also on the general storage framework. The following
> goals are made:
> 
> 1.   An unified storage framework. Both secondary storages(nfs/s3/swift
> etc) and primary storages will share the same plugin model, the same
> interface. Add any other new storages into cloudstack will much easier and
> straightforward.
> 
> 2.   The storage interface  between mgt server and resource is unified,
> currently there are only 5 commands send out by mgt server:
> copycommand/createobjectcommand/deletecommand/attachcommand/de
> ttachcommand, and each storage vendor can decode/encode all the
> entities(volume/snapshot/storage pool/ template etc) by its own.
> 
> 3.   NFS secondary storage is not explicitly depended on by other
> components. For example, when registering template into S3, template will
> be write into S3 directly, instead of storing into nfs secondary storage, then
> push to S3. If s3 is used as secondary storage, then nfs storage will be used 
> as
> cache storage, but from other components point of view, cache storage is
> invisible. So, it's possible to make nfs storage as optional if s3 is used for
> certain hypervisors.
> The detailed FS is at
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Storage+Backup
> +Object+Store+Plugin+Framework
> The test we did:
> 
> 1.   We modified marvin to use new storage api
> 
> 2.   Test_volume and test_vm_life_cycle, test_template under smoke test
> folder are executed against xenserver/kvm/vmware and devcloud, some of
> them are failed, it's partly due to bugs introduced by our code, partly master
> branch itself has issue(e.g. resizevolume doesn't work). We want to fix these
> issues after merging into master.
> 
> The basic follow does work: create user vm, attach/detach volume, register
> template, create template from volume/snapshot, take snapshot, create
> volume from snapshot.
>   It's a huge change, around 60k LOC patch, to review the code, you can try:
> git diff master..object_store, will show all the diff.
>   Comments/feedback are welcome. Thanks.
> 



Re: Bugs on Master

2013-06-14 Thread Will Stevens
I will check and see how the API is handled now that the UI is fixed.

Thanks,

will


On Fri, Jun 14, 2013 at 9:34 PM, Chiradeep Vittal <
chiradeep.vit...@citrix.com> wrote:

> Now that the UI works, just use Firebug to figure out the API.
> And file a doc bug on the API docs.
> (and submit a fix for the said bug :))
>
> On 6/14/13 5:43 PM, "Will Stevens"  wrote:
>
> >Thanks Sheng, that fixed the problem with the UI.
> >
> >
> >On Fri, Jun 14, 2013 at 8:06 PM, Will Stevens 
> >wrote:
> >
> >> Yes, I am building nonoss.  I actually have written my own network
> >>service
> >> provider plugin, so I was just using junipersrx as an example. I will
> >> actually be using my own.
> >>
> >> My problem is that the API docs for createNetworkOffering (
> >>
> >>
> http://cloudstack.apache.org/docs/api/apidocs-4.1/root_admin/createNetwor
> >>kOffering.html)
> >> do not have any documentation for how '*supportedservices*' is supposed
> >> to be formatted when it is passed via the API.  I believe it should be
> >>an
> >> array of objects which have things like 'name', 'provider', etc, but
> >>there
> >> is no documentation for how that should be formatted when it is passed
> >>via
> >> the API.  It appears there is a documentation gap here.
> >>
> >> Sheng, I just saw your note.  I will repull to see if it fixes the
> >> problem.
> >>
> >> Thanks...
> >>
> >>
> >> On Fri, Jun 14, 2013 at 7:39 PM, Chiradeep Vittal <
> >> chiradeep.vit...@citrix.com> wrote:
> >>
> >>> Did you build the nonoss build? You have to add the SRX provider using
> >>> addNetworkServiceProvider api, enable it and then the drop down for
> >>> network offering should work.
> >>>
> >>>
> >>> On 6/14/13 12:09 PM, "Will Stevens"  wrote:
> >>>
> >>> >BTW, I am using cloudmonkey 4.1.0...  Thx
> >>> >
> >>> >
> >>> >On Fri, Jun 14, 2013 at 3:04 PM, Will Stevens 
> >>> >wrote:
> >>> >
> >>> >> Chiradeep, can you send me the format of the cloudmonkey call for
> >>>the
> >>> >>api
> >>> >> request 'createNetworkOffering' with 'supportedservices' of
> >>> >> 'dhcp:virtualrouter', 'dns:virtualrouter', 'firewall:junipersrx'.  I
> >>> can
> >>> >> not figure out the format of this call.
> >>> >>
> >>> >> I have confirmed that I can reproduce the issue of not being able to
> >>> >> select capability dropdowns in multiple browsers on master.
> >>> >>
> >>> >> Thanks,
> >>> >>
> >>> >> Will
> >>> >>
> >>> >>
> >>> >> On Fri, Jun 14, 2013 at 1:56 PM, Will Stevens
> >>> >>wrote:
> >>> >>
> >>> >>> I will try that.  I am doing some testing right now.  I am
> >>>compiling
> >>> >>>and
> >>> >>> running just master now to validate everything.
> >>> >>>
> >>> >>> I will be in touch when I have more details...
> >>> >>>
> >>> >>> ws
> >>> >>>
> >>> >>>
> >>> >>> On Fri, Jun 14, 2013 at 1:20 PM, Chiradeep Vittal <
> >>> >>> chiradeep.vit...@citrix.com> wrote:
> >>> >>>
> >>>  Are you able to use CloudMonkey? Perhaps it is a UI issue?
> >>> 
> >>>  On 6/14/13 9:50 AM, "Will Stevens"  wrote:
> >>> 
> >>>  >11 days ago I pulled the master code into my branch.  Master was
> >>>at:
> >>>  >48913679e80e50228b1bd4b3d17fe5245461626a
> >>>  >
> >>>  >When I pulled, I had Egress firewall rules working perfectly.
> >>>After
> >>> the
> >>>  >pull I now get the following error when trying to create Egress
> >>> firewall
> >>>  >rules:
> >>>  >ERROR [cloud.api.ApiServer] (1784147987@qtp-213982037-11:)
> >>> unhandled
> >>>  >exception executing api command: createEgressFirewallRule
> >>>  >java.lang.NullPointerException
> >>>  >at
> >>> 
> >>> 
> >>>
> >>>
> com.cloud.network.firewall.FirewallManagerImpl.validateFirewallRule
> (Fi
> >>> >rewa
> >>>  >llManagerImpl.java:485)
> >>>  >at
> >>> 
> >>> 
> >>>
> >>>
> com.cloud.network.firewall.FirewallManagerImpl.createFirewallRule(F
> ire
> >>> >wall
> >>>  >ManagerImpl.java:191)
> >>>  >at
> >>> 
> >>> 
> >>>
> >>>
> com.cloud.utils.component.ComponentInstantiationPostProcessor$Inter
> cep
> >>> >torD
> >>>  >ispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
> >>>  >at
> >>> 
> >>> 
> >>>
> >>>
> com.cloud.network.firewall.FirewallManagerImpl.createEgressFirewall
> Rul
> >>> >e(Fi
> >>>  >rewallManagerImpl.java:157)
> >>>  >at
> >>> 
> >>> 
> >>>
> >>>
> org.apache.cloudstack.api.command.user.firewall.CreateEgressFirewal
> lRu
> >>> >leCm
> >>>  >d.create(CreateEgressFirewallRuleCmd.java:252)
> >>>  >at
> >>>
> >>>com.cloud.api.ApiDispatcher.dispatchCreateCmd(ApiDispatcher.java:101
> >>>)
> >>>  >at com.cloud.api.ApiServer.queueCommand(ApiServer.java:471)
> >>>  >at com.cloud.api.ApiServer.handleRequest(ApiServer.java:367)
> >>>  >at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:304)
> >>>  >at com.cloud.api.ApiServlet.doGet(ApiServlet.java:66)
> >

Re: Automation analysis improvement

2013-06-14 Thread Prasanna Santhanam
Indeed - expunge, storage, account cleanups are set to 60s on all
automated test environments I've setup.

The cleanup process works as below:

setUpClass() -> creates user account, creates user resources
testXxx() -> test steps and verify within the account
tearDownClass() -> cleanup resources acquired in setUpClass()

The way this is done is to collect a list of resource objects in
setUpClass#_cleanup (like [account, vm, offering]). In tearDownClass
we simply process that list calling delete on each item. Usually, just
the account is enough since the account GC thread will take care of
the rest.

Rayees - if you've identified these tests then please submit a patch
if they flout the process I've explained above. I've fixed some
before, but there's probably more.

Thanks,

-- 
Prasanna.,

On Fri, Jun 14, 2013 at 11:08:42AM -0700, Ahmad Emneina wrote:
> I'm +1 on this. I feel global setting (relating to expunge and
> cleanup) should be set to aggressively expunge deleted resources,
> then delete the user resources... before deleting the account. That
> way we can verify garbage collection of resources is working
> properly.
> 
> Ahmad
> 
> On Jun 14, 2013, at 10:21 AM, Chiradeep Vittal  
> wrote:
> 
> > +1
> > 
> > On 6/14/13 8:54 AM, "Rayees Namathponnan" 
> > wrote:
> > 
> >> Many of the automation test cases are not tearing down the  account
> >> properly; due to this resources are not getting released and followed
> >> test cases getting failed during VM deployment itself.
> >> 
> >> During automation run accounts are created with random number without any
> >> reference for test case (eg : test-N5QD8N), and it's hard to identify
> >> which test case not tearing down the account after complete the test.
> >> 
> >> Here my suggestion; we should create account name with test case name
> >> (eg : test- VPCOffering-N5QD8N)
> >> 
> >> Any thoughts ?
> >> 
> >> Regards,
> >> Rayees
> > 



Powered by BigRock.com



Infra Issues from the IRC meeting (Wed, Jun 12)

2013-06-14 Thread Prasanna Santhanam
Saw a few infra topics being discussed in the meeting and there was
talk of bringing it to the list so I'm taking this opportunity to
explain some things behind it.

> 17:22:44 [Animesh]: I will send out my weekly reminder today on status and 
> include Sudha's test results 
> 17:23:28 [topcloud]: one thing that concerns me is that the bvt continues to 
> be at < 100% pass rate
> 17:23:41 [topcloud]: is there anything we're doing about this?
yes - I've fixed most tests. Some have existed because of bugs in
packaging, systemvm templates that I see patches for now. 

> 17:25:20 [chipc]: topcloud: was BVT ever at 100% ?
> 17:25:32 [chipc]: (real question, not sarcasm)
It was - 100% - when the project was first proposed. But more tests
have come in since then.

> 17:26:41 [chipc]: once we get it back to 100%, I say we block all changes 
> when it drops to <100%
> 17:26:49 [topcloud]: +1
+1 - this is what I've been driving towards but haven't announced some
changes I've made in the past weeks because it's pointless to have
tests fail soon as I announce we are 100%. We shouldn't wait for one
run, but at least 10 to ensure that the bvt is indeed stable enough to
be trusted as a 'gating' system for master stability.

There's also a couple of issues here -
1. Does everyone know where the tests run?
2. Do people know how to spot the failures?
3. Do people know how to find the logs for the failures?

If the answer is no to all this, I have more documentation on my
hands.

> 17:28:07 [Animesh]: agreed bvt also shows progress towards release readines
> 17:28:07 [chipc]: topcloud: +1
> 17:28:32 [chipc]: Animesh: BVT should show that master is stable, regardless 
> of release timeframes
> 17:28:33 [chipc]: IMO that is
> 17:28:44 [chipc]: master should only see good  /tested code
Which is why the BVT runs on master at all times on
jenkins.buildacloud.org. There is also ability to run it against a
feature branch but I would rather defer that to the release manager
for now since it's tied with hardware resources and jenkins schedules.
That feature should strictly be reserved for architecture changes in
MERGE requests.

> 17:43:27 [topcloud]: sorry...to bring back this topic but is bvt running on 
> apache infra?
> 17:43:35 [chipc]: no
> 17:43:57 [topcloud]: chipc: is there any talk about bringing it into apache 
> infra?
It was brought up with the ASF infra back in January and the
suggestion was to donate hardware to the ASF to manage. So if we're
prepared to do that, great! But it certainly can't just be Citrix :)

I'd prefer individual project related test hardware and resources
to stay in the control of the project. Infrastructure is constantly
changed to allow features and enhancements to be tested so it's best
to have it in the core group. Which is why jenkins.buildacloud.org
came to existence. This is similar to how cloudbees operates or
(*gasp*) openstack-infra [1][2] operates.

Ideally, I'd like those interested in infra activities to form a
separate group for cloudstack-infra related topics. The group's focus
will be to maintain, test and add to the infrastructure of this
project. But that's in the future. Without such a group, building an
IaaS cloud is not much fun :)

> 17:44:17 [topcloud]: i can't imagine apache wanting bvt to only run inside 
> citrix all the time.
It doesn't run within Citrix. It runs in a DC in Fremont. There are
other environments within Citrix however that run their own tests for
their needs - eg: object_store tests, cloudplatform tests, customer
related issue tests etc.

/me beginning to think more doc work is on my way :/

> 17:46:27 [chipc]: but generally, the ASF build infra is a bit overloaded
+1000

> 17:46:51 [jzb]: topcloud: when you say "in Citrix" - it's still visible 
> outside Citrix, yes?
> 17:46:52 [chipc]: so frankly, CTXS donating an environment to run it, 
> publicly visible to everyone, is quite helpful
> 17:46:58 [chipc]: jzb: it is
We need more people to donate hardware/virtual resources for testing :) 
CTXS has been gracious to provide quite a few resources already IMO.

> 17:47:18 [chipc]: actually, I think it is...  
> 17:47:34 [topcloud]: jzb: yeah it's still visible but it really should be 
> runnable by everyone.
Not quite. It's a gating system. It runs automatically and shouldn't
be runnable by everyone at will. I'm still waiting to implement
devcloud tests based on that gerrit converstaion (which went nowhere)
we had many months back. DevCloud stuff can be run at will.

> 17:47:37 [jzb]: I'm all for building up Apache infra, but I also
> think having vendors donate publicly visible resources that are
> usable by the community is acceptable.
+1

> 17:47:53 [jzb]: in fact, we probably ought to be hitting up some of
> our ISP friends for more. 
+1 - who are our ISP friends? Would like to get help on this.

> 17:49:49 [ke4qqq]: so tsp (along with abayer and roman) are working
> on a publicly accessible jenkins instance in fremont
This is basically to dogfood all inst

Re: Summary of IRC meeting in #cloudstack-meeting, Wed Jun 12 17:08:56 2013

2013-06-14 Thread Prasanna Santhanam
On Fri, Jun 14, 2013 at 11:24:11AM +0100, Noah Slater wrote:
> While we're talking about bot etiquette... ;) If people used #info and
> #action, important takeaway points would be included at the top of the
> email. As it is, it's a bit hard to read through the logs if you just want
> to get a jist.

Yeah - we usually use them but looks like it was missed this time.
Took some time to glean out the important bits from this meeting.

-- 
Prasanna.,


Powered by BigRock.com



Re: [MERGE]object_store branch into master [Second round]

2013-06-14 Thread Prasanna Santhanam
On Sat, Jun 15, 2013 at 03:07:39AM +, Edison Su wrote:
> Hi all, 
>  The second round, call for merge object_store branch into master, is 
> coming!
>   The issues fixed:
>1. All the major issues addressed by John are fixed:
> 1.1 A cache storage replacement algorithm is added: 
> StorageCacheReplacementAlgorithmLRU, based on the reference count and least 
> recently used. 
> 1.2 A new S3 transport is added, can upload > 5G template 
> into S3 directly.
> 1.3 Retry, if S3 upload failed.
> 1.4 some comments from https://reviews.apache.org/r/11277/, 
> mostly, the coding style, are addressed and clean up some unused code.
>  2. DB upgrade path from 4.1 to 4.2
>  3. Bug fix
> 
> The size of the patch is even bigger now, around 10 LOC, you
> can find about the patch from
> https://reviews.apache.org/r/11277/diff/2/. 
>  Comments/feedback are welcome. Thanks.

Awesome work - looking forward to this. Great discussions in the past
few weeks. Some of the better topic threads we've had in the recent
past!

-- 
Prasanna.,


Powered by BigRock.com



Re: Review Request: Fixed CLOUDSTACK-3004 [script] ssvm_check remove the duplicate file from consoleproxy and include the script from secondary storage folder while packing iso

2013-06-14 Thread ASF Subversion and Git Services

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11874/#review21936
---


Commit 6d140538c5efc394fda8a4ddc7cb72832470d0b3 in branch refs/heads/master 
from Rajesh Battala
[ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=6d14053 ]

CLOUDSTACK-3004: remove duplicate ssvm-check.sh

ssvm_check.sh remove the duplicate file from consoleproxy and include the
script from secondary storage folder while packing iso

Signed-off-by: Prasanna Santhanam 


- ASF Subversion and Git Services


On June 14, 2013, 10:09 a.m., Rajesh Battala wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11874/
> ---
> 
> (Updated June 14, 2013, 10:09 a.m.)
> 
> 
> Review request for cloudstack and Chip Childers.
> 
> 
> Description
> ---
> 
> Issue: .There are two ssvm_check scripts(duplicates).
> 
> ./services/console-proxy/server/scripts/ssvm-check.sh
> ./services/secondary-storage/scripts/ssvm-check.sh
> 
> When building the code, these scripts will go to systemvm.zip, systemvm.zip 
> will be packaged into systemvm.iso. 
> 
> systemvm-descriptor.xml will define what all the scripts should package.
> As per the descriptor xml,  the ssvm-check script under console-proxy is 
> getting into systemvm.zip.
> 
> I had verified the ssvm-check script with the fix under console-proxy. The 
> systemvm.zip is getting update properly and making into systemvm.iso.
> And ssvm is getting the right script now.
> 
> Changes made in script under 
> ./services/secondary-storage/scripts/ssvm-check.sh is not getting into 
> systemvm.iso
> 
> Fixed:
> Modified systemvm-descriptor.xml to pick the ssvm-check.sh form 
> ./services/secondary-storage/scripts/ssvm-check.sh
> removed the duplicate file which is creating confusion 
> (./services/console-proxy/server/scripts/ssvm-check.sh)
> 
> 
> This addresses bug CLOUDSTACK-3004.
> 
> 
> Diffs
> -
> 
>   services/console-proxy/server/scripts/ssvm-check.sh 7b83c98 
>   services/console-proxy/server/systemvm-descriptor.xml e34026b 
> 
> Diff: https://reviews.apache.org/r/11874/diff/
> 
> 
> Testing
> ---
> 
> Tested by generating the systemvm.zip , the ssvm-check file is getting copied 
> into the zip from the ./services/secondary-storage/scripts/ssvm-check.sh
> 
> 
> Thanks,
> 
> Rajesh Battala
> 
>



Re: Review Request: Fixed CLOUDSTACK-3004 [script] ssvm_check remove the duplicate file from consoleproxy and include the script from secondary storage folder while packing iso

2013-06-14 Thread Prasanna Santhanam

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11874/#review21937
---

Ship it!


Ship It!

- Prasanna Santhanam


On June 14, 2013, 10:09 a.m., Rajesh Battala wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11874/
> ---
> 
> (Updated June 14, 2013, 10:09 a.m.)
> 
> 
> Review request for cloudstack and Chip Childers.
> 
> 
> Description
> ---
> 
> Issue: .There are two ssvm_check scripts(duplicates).
> 
> ./services/console-proxy/server/scripts/ssvm-check.sh
> ./services/secondary-storage/scripts/ssvm-check.sh
> 
> When building the code, these scripts will go to systemvm.zip, systemvm.zip 
> will be packaged into systemvm.iso. 
> 
> systemvm-descriptor.xml will define what all the scripts should package.
> As per the descriptor xml,  the ssvm-check script under console-proxy is 
> getting into systemvm.zip.
> 
> I had verified the ssvm-check script with the fix under console-proxy. The 
> systemvm.zip is getting update properly and making into systemvm.iso.
> And ssvm is getting the right script now.
> 
> Changes made in script under 
> ./services/secondary-storage/scripts/ssvm-check.sh is not getting into 
> systemvm.iso
> 
> Fixed:
> Modified systemvm-descriptor.xml to pick the ssvm-check.sh form 
> ./services/secondary-storage/scripts/ssvm-check.sh
> removed the duplicate file which is creating confusion 
> (./services/console-proxy/server/scripts/ssvm-check.sh)
> 
> 
> This addresses bug CLOUDSTACK-3004.
> 
> 
> Diffs
> -
> 
>   services/console-proxy/server/scripts/ssvm-check.sh 7b83c98 
>   services/console-proxy/server/systemvm-descriptor.xml e34026b 
> 
> Diff: https://reviews.apache.org/r/11874/diff/
> 
> 
> Testing
> ---
> 
> Tested by generating the systemvm.zip , the ssvm-check file is getting copied 
> into the zip from the ./services/secondary-storage/scripts/ssvm-check.sh
> 
> 
> Thanks,
> 
> Rajesh Battala
> 
>



Re: Review Request: Fixed CLOUDSTACK-3004 [script] ssvm_check remove the duplicate file from consoleproxy and include the script from secondary storage folder while packing iso

2013-06-14 Thread Rajesh Battala


> On June 15, 2013, 5:53 a.m., ASF Subversion and Git Services wrote:
> > Commit 6d140538c5efc394fda8a4ddc7cb72832470d0b3 in branch refs/heads/master 
> > from Rajesh Battala
> > [ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=6d14053 ]
> > 
> > CLOUDSTACK-3004: remove duplicate ssvm-check.sh
> > 
> > ssvm_check.sh remove the duplicate file from consoleproxy and include the
> > script from secondary storage folder while packing iso
> > 
> > Signed-off-by: Prasanna Santhanam 
> >

Thanks


- Rajesh


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/11874/#review21936
---


On June 14, 2013, 10:09 a.m., Rajesh Battala wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/11874/
> ---
> 
> (Updated June 14, 2013, 10:09 a.m.)
> 
> 
> Review request for cloudstack and Chip Childers.
> 
> 
> Description
> ---
> 
> Issue: .There are two ssvm_check scripts(duplicates).
> 
> ./services/console-proxy/server/scripts/ssvm-check.sh
> ./services/secondary-storage/scripts/ssvm-check.sh
> 
> When building the code, these scripts will go to systemvm.zip, systemvm.zip 
> will be packaged into systemvm.iso. 
> 
> systemvm-descriptor.xml will define what all the scripts should package.
> As per the descriptor xml,  the ssvm-check script under console-proxy is 
> getting into systemvm.zip.
> 
> I had verified the ssvm-check script with the fix under console-proxy. The 
> systemvm.zip is getting update properly and making into systemvm.iso.
> And ssvm is getting the right script now.
> 
> Changes made in script under 
> ./services/secondary-storage/scripts/ssvm-check.sh is not getting into 
> systemvm.iso
> 
> Fixed:
> Modified systemvm-descriptor.xml to pick the ssvm-check.sh form 
> ./services/secondary-storage/scripts/ssvm-check.sh
> removed the duplicate file which is creating confusion 
> (./services/console-proxy/server/scripts/ssvm-check.sh)
> 
> 
> This addresses bug CLOUDSTACK-3004.
> 
> 
> Diffs
> -
> 
>   services/console-proxy/server/scripts/ssvm-check.sh 7b83c98 
>   services/console-proxy/server/systemvm-descriptor.xml e34026b 
> 
> Diff: https://reviews.apache.org/r/11874/diff/
> 
> 
> Testing
> ---
> 
> Tested by generating the systemvm.zip , the ssvm-check file is getting copied 
> into the zip from the ./services/secondary-storage/scripts/ssvm-check.sh
> 
> 
> Thanks,
> 
> Rajesh Battala
> 
>