[cross-project-issues-dev] Fw: Hudson sluggishness

2012-02-16 Thread Winston Prakash

Resending.. (appears my earlier attempt got rejected)

Hello all,

I just joined this mailing list!

I would recommend moving to Hudson 2.2.0. The main reason for switching 
to Hudson 2.2.0 is its support for project cascading [1] [2]. However, 
Hudson 2.2.0 changes the project structure, so we need to be careful to 
back up the home directory before switching, in case Hudson 2.2.0 
doesn't live up to expectations.


I don't think Hudson 2.2.0 would solve the sluggishness problem. The 
sluggishness is not mainly due to the Hudson application; it also depends on 
the servlet container's ability to scale. I'm working with Matt, 
looking into possible ways to improve the response time. I will write a 
detailed message on that.


Also, as mentioned by Matt, please do not use the Hudson context URL to 
download huge directories. We should keep the Hudson application purely for 
building and for viewing status and results. Downloading a few artifacts is 
OK. We could have solutions something like:


- Bypass the Hudson servlet and create a sub-context in Apache (Hudson is 
proxied by Apache) that points to the workspaces, so downloads happen 
via Apache, not the Hudson servlet.
- Use a post-build action in your job to push (publish) your project 
artifacts to a location of your choice. I think the SCP plugin may be useful.
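The first option could look something like the following in the Apache configuration that fronts Hudson (a sketch only; the paths, port, and workspace location are assumptions, using Apache 2.2-era directives):

```apache
# Hypothetical: serve workspace downloads directly from Apache,
# bypassing the proxied Hudson servlet. All paths are illustrative.
Alias /workspaces /opt/hudson/workspace
<Directory /opt/hudson/workspace>
    Options +Indexes
    Order allow,deny
    Allow from all
</Directory>

# Exclude the sub-context from the proxy, then pass everything
# else through to the Hudson servlet container as before.
ProxyPass /workspaces !
ProxyPass / http://localhost:8080/
ProxyPassReverse / http://localhost:8080/
```

With something like this in place, a large download hits Apache's static file handling instead of tying up a Hudson request thread.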


I'm also analyzing the heap dump (2 GB!), using Eclipse Memory 
Analyzer, to find the memory leaks that are slowly killing Hudson.


Hudson 3.0.0 is not ready (not even close). Due to IP cleaning, it is 
going through huge changes, mainly because of external library dependencies. 
We don't expect it to be ready before the June-July time frame.



[1] http://wiki.hudson-ci.org/display/HUDSON/Project+cascading
[2] 
http://hudsoncentral.wordpress.com/2011/10/28/cascading-projects-released-as-beta/ 
(it's no longer BETA though)


- Winston
___
cross-project-issues-dev mailing list
cross-project-issues-dev@eclipse.org
https://dev.eclipse.org/mailman/listinfo/cross-project-issues-dev


Re: [cross-project-issues-dev] Hudson testing next week

2012-02-17 Thread Winston Prakash
Actually, I need only one restart, some time this weekend. I want to 
capture memory dumps from the live instance over a few days to analyze 
the progressive memory leak. I need the restart because in the 
current instance the memory is already saturated.


My initial analysis tells me the memory leaks happen when builds are 
delegated to slaves. The main reason I want to capture the memory dump from 
the production instance is that it has several slaves and lots of 
builds are done on them. If we set up the sandbox, we would need to 
find several slaves. I thought it would be easy to use the production master 
& slaves, and the production memory dump would give a real scenario.


- Winston

On 2/17/12 6:28 AM, David M Williams wrote:

If you have any questions please feel free to ask.

Well ... since you asked us to ask ... :)

Why not use the "sandbox" for this? I know this might take a little more
initial work; back-level the sandbox, make sure the same plugins are installed,
copy the production configs over to the sandbox (saving whatever is in the
sandbox config first, naturally). And then if the problem is not reproducible
there, then yes, we might have to resort to using the production server ...
but it seems the sandbox would be the first choice to test on?

I certainly have no objections to improving Hudson, and to using the production
server to do so if needed, but it seems that overall, long term, there might
be lots of occasions where a "test server" would be needed to track down a
bug, so a solution that didn't depend on the production server to
track it down would seem preferable. Maybe that's already been tried and the
environments are just too different? But I figured it was worth asking.
Using the sandbox might be easier for you and the Hudson team long term, since
you could start/stop/experiment a little more freely without worrying about
disrupting ongoing production work?

I'd understand if it was decided it was not possible, that the sandbox
environment is just too different, but I thought the obvious question should
be asked explicitly, and I hope these comments are helpful.






From:   "Webmaster (Matt Ward)"
To: cross-project-issues-dev@eclipse.org,
Date:   02/17/2012 08:59 AM
Subject:[cross-project-issues-dev] Hudson testing next week
Sent by:cross-project-issues-dev-boun...@eclipse.org



Hi Folks,

The Hudson team has found evidence of a memory leak and has asked whether,
next week (after SR2), it would be possible to restart our Hudson
instance a few times in order to track its progression.  My plan is to
announce the restarts here (as we do for any planned restart).

Hopefully this will help improve everyone's Hudson experience.

If you have any questions please feel free to ask.

-Matt.


Re: [cross-project-issues-dev] Hudson testing next week

2012-02-17 Thread Winston Prakash

Just once. I promise :-)

We have another instance, where we build the Hudson plugins: 
http://hudson-ci.org/hudson/. I could get a dump from there, but it 
doesn't have any slaves set up yet.


I'll file a bug to track the progress.

- Winston

On 2/17/12 10:02 AM, David M Williams wrote:

Thought would be easy to use the production master
&  slaves and the production memory dump would give real scenario.

Sure. Easier. And by all means, I don't mind ... if it's just once. I would
just hate to see our production server progressively become the Hudson
project's "test server" ... they should have their own test server :)
(complete with slaves, tough jobs, variable loads, variable file systems,
unit tests, functional tests, etc.) ... but this one case makes a lot of
sense, since it is an urgent problem and debugging the live production
server sounds like an easy and prudent thing to do.

I hope it helps ... the "load", the exact jobs run, etc. will likely
differ quite a bit from day to day and week to week, depending on the phase
of the dev. cycle, so I hope you "capture" what you need.

Is there a bug open for this, so the interested can track progress and
results? (I searched, but didn't see anything obvious.)

Good luck! Sincerely.

Thanks so much for helping!






Re: [cross-project-issues-dev] Hudson testing next week

2012-02-17 Thread Winston Prakash

Denis,

Thanks. I will use it for my future experiments. I've filed a bug to track 
the memory leak study:


https://bugs.eclipse.org/bugs/show_bug.cgi?id=371921

- Winston

On 2/17/12 10:21 AM, Denis Roy wrote:

On 02/17/2012 01:02 PM, David M Williams wrote:

Thought would be easy to use the production master
&  slaves and the production memory dump would give real scenario.

Sure. Easier. And by all means, I don't mind ... if just once. I would just
hate
to see our production server progressively become the Hudson project's
"test server" ... they should have
their own test server :)

As Hudson is an Eclipse project, webmasters are willing to share the
keys to our current sandbox [1] with the Hudson team for their
testing/debugging.  Our sandbox also has one active slave, but the
entire environment is very idle.

[1] https://hudson.eclipse.org/sandbox/


Re: [cross-project-issues-dev] Hudson testing next week

2012-02-17 Thread Winston Prakash



On 2/17/12 1:43 PM, Matthias Sohn wrote:

2012/2/17 Denis Roy <denis@eclipse.org>:

On 02/17/2012 01:02 PM, David M Williams wrote:
>> Thought would be easy to use the production master
>> & slaves and the production memory dump would give real scenario.
> Sure. Easier. And by all means, I don't mind ... if just once. I
would just
> hate
> to see our production server progressively become the Hudson
project's
> "test server" ... they should have
> their own test server :)
As Hudson is an Eclipse project, webmasters are willing to share the
keys to our current sandbox [1] with the Hudson team for their
testing/debugging.  Our sandbox also has one active slave, but the
entire environment is very idle.

[1] https://hudson.eclipse.org/sandbox/


We could easily put some load on the sandbox if we move our Gerrit
verification jobs for the JGit and EGit projects back from the main Hudson
to the sandbox Hudson. These jobs start whenever a new or updated
change is uploaded to Gerrit for code review.

The jobs which were migrated to the main Hudson last week
don't work properly yet anyway. Could you adjust the gerrit-trigger
plugin configuration of the sandbox Hudson to point at the shiny new
Gerrit server and upload its public key to Gerrit? Then I will revive
the verification build jobs and we'll create some load on the sandbox
Hudson so that the Hudson team has some real traffic to monitor.


I think this is a good idea. Last week I noticed that the Gerrit plugin 
was misbehaving: it was firing builds of jobs which were not involved 
in any Gerrit review. This would give me an opportunity to look into 
the issues with the Gerrit plugin, fix them, and then move the plugin to 
the production Hudson.


- Winston


--
Matthias




Re: [cross-project-issues-dev] tycho-gmp.gmf.tooling builds consumes enormous amount of memory

2012-02-22 Thread Winston Prakash

Jason,

I saw your TODO note in the code about the reference.  I'm not worried 
about that.  I'm wondering about the amount of memory it takes to keep 
the parsed artifact info.  Only builds of this job take this much 
memory. Could this be because of the maven-javadoc-plugin, as mentioned by 
Mickael?


When it comes to memory leaks, Stapler and XStream are the worst enemies 
I have to fight :-)


- Winston

On 2/21/12 9:08 PM, Jason Dillon wrote:

IIRC the build state is only in a soft ref when loading from disk, but there is 
a hard ref when a new build is run, so it's retained until the server reboots. 
The project work at Sonatype was terminated before this and many other issues 
could be resolved. There needed to be some additional API added to the xref 
bits to allow a reference to build state, once built, to be transitioned from a 
hard ref to a soft ref.

--jason


On Tuesday, February 21, 2012 at 8:51 PM, Winston Prakash wrote:


Hi Jason D./Stewart,

As per a request from the Eclipse Foundation, I'm analyzing the performance of 
their Hudson instance. I noticed that tycho-gmp.gmf.tooling 
(https://hudson.eclipse.org/hudson/view/Tycho%20+%20Maven/job/tycho-gmp.gmf.tooling)
 builds are occupying about 120 MB of memory. What is special about this job? The 
memory is specifically consumed by the maven3 plugin data model. When I try to look at 
the maven3 structure UI 
(https://hudson.eclipse.org/hudson/view/Tycho%20+%20Maven/job/tycho-gmp.gmf.tooling/136/maven/?)
 in Hudson, it spins forever. See the tree in the attached image. Any idea what is 
causing this huge memory consumption? BTW, only builds of this job seem to be 
occupying this much memory. Other jobs' builds consume much less memory (< 10 MB).

A memory leak in Stapler prevents GC from collecting these huge build-related 
objects, and eventually Hudson dies, running out of memory.

- Winston








Re: [cross-project-issues-dev] tycho-gmp.gmf.tooling builds consumes enormous amount of memory

2012-02-22 Thread Winston Prakash

Denis,

Right now I'm not talking about the memory leak, but about memory consumption.

Hudson has a tendency to keep the info about the entire artifacts of a 
build. For example, if a job contains JUnit test results, then this info 
is kept in memory. Currently Hudson is holding about 360 jobs with 
6,500 builds and 510,000 JUnit case results in memory. The average memory 
consumed by each JUnit test case is about 200kb. So the net memory 
consumption by JUnit results should be about 100 MB; however, the memory 
profiler reports 250 MB of memory occupied by test cases.
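As a side note, the figures as quoted don't quite multiply out: 510,000 cases at 200 KB each would be on the order of 100 GB, whereas roughly 200 bytes per case is what matches the stated ~100 MB total. A quick back-of-the-envelope check (purely illustrative):

```python
# Sanity-check the JUnit memory estimate quoted above.
cases = 510_000

# At 200 KB per case the total would be ~100 GB, far above the quoted figure.
total_gb_at_200_kb = cases * 200 * 1024 / 1e9
print(f"at 200 KB/case: ~{total_gb_at_200_kb:.0f} GB")

# At ~200 bytes per case the total lands near the quoted ~100 MB.
total_mb_at_200_b = cases * 200 / 1e6
print(f"at 200 B/case:  ~{total_mb_at_200_b:.0f} MB")
```

Either way, the profiler's 250 MB reading suggests the real per-case overhead is higher than the naive estimate.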


I noticed that the job hudson-test-harness (which is maintained by me) 
has debug turned on. Because of this, the JUnit test standard 
outputs contain about 15-20 MB of debug results, and this entire result was 
in memory. Just 10 such JUnit case results occupy about 150-200 MB of 
memory. Now I have turned off the debug. However, the old JUnit test results 
are still in memory, so the next time Hudson restarts it should consume 
less memory (about 150 MB less). I noticed some other jobs also have 
large JUnit standard output. Reducing that output should reduce the overall 
Hudson memory consumption. I will try to list those jobs. The ideal 
solution would be to fix Hudson to load JUnit test results lazily, 
but that is a long-term solution.


Similarly, the job "tycho-gmp.gmf.tooling" is producing a huge amount of 
Maven artifacts, which are also kept in memory (about 128 MB). I'm 
not sure exactly what it is; it could be the Javadoc, as pointed out by 
Mickael. I'm trying to find out from the Tycho team whether we could reduce 
those Maven artifacts in the build (temporarily at least), until we find a 
proper solution in Hudson itself.


Of course we need to fix the memory leaks, but the quick solution for now is 
to reduce these huge memory footprints in an easy way.


- Winston

On 2/22/12 10:32 AM, Denis Roy wrote:

Winston,

Hudson is a very important part of our infrastructure.  If you've found
a plugin (or a specific build job) that is obviously leaking memory
(thus making our Hudson instance slow), I suggest we disable that plugin
and/or job immediately and open a bug to request that it be fixed.

In this case, is the problem with a specific plugin, or is it within the
job?  I can't tell from the words that are written below.

Thanks,


Denis



On 02/22/2012 11:37 AM, Winston Prakash wrote:

Jason,

I saw your TODO note in the code about the reference.  I'm not worried
about that.  I'm wondering about the amount of memory it takes to keep
the parsed artifacts info.  Only builds of this job takes this much
memory. Could this be because of maven-Javadoc-plugin as mentioned by
Mickael.

When it comes to memory leak, Stapler and XStream are the worst
enemies I have to fight with :-)

- Winston

On 2/21/12 9:08 PM, Jason Dillon wrote:

IIRC the build state is only in a soft ref when loading from disk,
but there is a hard-ref when a new build is run, so its retained
until the server reboots. The project work at sonatype was terminated
before this and many other issues could be resolved. There needed to
be some additional api added to the xref bits to allow a reference to
build state once built to be transitioned from a hard to soft ref.

--jason


On Tuesday, February 21, 2012 at 8:51 PM, Winston Prakash wrote:


Hi Jason D./Stewart,

As per request from Eclipse foundation, I'm analyzing performance of
their Hudson instance. I noticed that tycho-gmp.gmf.tooling
(https://hudson.eclipse.org/hudson/view/Tycho%20+%20Maven/job/tycho-gmp.gmf.tooling)
builds are occupying about 120 MB of memory. What is special about
this job. The memory is specifically consumed by maven3 plugin data
model. When I try to look at the maven3 structure UI
(https://hudson.eclipse.org/hudson/view/Tycho%20+%20Maven/job/tycho-gmp.gmf.tooling/136/maven/?)
in Hudson it spins for ever. See the tree in the attached image. Any
idea what is causing this huge memory consumption? BTW, only builds
of this job seem to occupying this much memory. Other job builds
consume much less memory (<   10 MB).

A memory leak in Stapler causes GC from collecting these huge build
related objects and eventually Hudson dies running out of memory.

- Winston






Re: [cross-project-issues-dev] tycho-gmp.gmf.tooling builds consumes enormous amount of memory

2012-02-22 Thread Winston Prakash



On 2/22/12 12:55 PM, Matthias Sohn wrote:
2012/2/22 Winston Prakash <winston.prak...@gmail.com>:


Denis,

Right now I'm not talking about memory leak, but memory consumption.

Hudson has a tendency to keep the info about entire artifacts of a
build. For example, if a job contains JUnit test results then this
info is kept in the memory. Currently Hudson is holding about 360
jobs with 6500 builds and 510,000 JUnit Case results in the
memory. Average memory consumed by each JUnit Test case is about
200kb. So net memory consumption by JUnit results should be about
100 MB, however memory profiler reports 250 MB of memory occupied
by Test cases.

Why is Hudson keeping all that in memory?


It looks like the original authors designed Hudson that way. Maybe at 
that time they did not envision Hudson being so successful and 
becoming an enterprise tool :-)



It looks like it should instead use an LRU cache
with a fixed max cache size.

Yes, that would be the right architecture for a large enterprise 
installation. I will look into that.
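An LRU cache of build records with a fixed maximum size, as Matthias suggests, could be sketched like this (illustrative Python, not Hudson's actual Java internals; all names here are made up):

```python
from collections import OrderedDict

class BuildCache:
    """Keep at most max_size build records in memory; evict the least
    recently used record and reload it from disk on the next access."""

    def __init__(self, max_size, load_from_disk):
        self.max_size = max_size
        self.load_from_disk = load_from_disk  # callable: build_id -> record
        self._cache = OrderedDict()

    def get(self, build_id):
        if build_id in self._cache:
            self._cache.move_to_end(build_id)  # mark as recently used
        else:
            self._cache[build_id] = self.load_from_disk(build_id)
            if len(self._cache) > self.max_size:
                self._cache.popitem(last=False)  # evict the LRU entry
        return self._cache[build_id]
```

A real fix would also need the hard references Jason mentioned to be dropped once a build finishes; otherwise eviction can never reclaim those records.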



I noticed that the job hudson-test-harness (which is maintained by
me) has the  debug turned on. Because of this the Junit Test
standard outputs has about 15-20 MB of debug results and this
entire result was in the memory. Just 10 JUnit Case results occupy
about 150-200 MB of memory. Now I turned off the debug. However,
old Junit test results are still in the memory. So next time when
Hudson restarts it should consume less memory (about 150 MB less).
I noticed some other jobs too have large JUnit standard output.
Reducing those output should reduce over all Hudson memory
consumption. I will try to list those jobs. The ideal solution
would be to fix in Hudson to load JUnit Test results  lazily, but
that is a long term solution.

Similarly, the job "tycho-gmp.gmf.tooling" is producing huge
amount of maven artifacts which are also kept in the memory (about
128 MB). I'm not sure exactly what it is, could be JavaDoc as
pointed out by Mickael. I'm trying to find from tycho team if we
could reduce those maven artifacts in the build (temporarily at
least), until we find proper solution in Hudson itself.

Does it keep all the Maven artifacts in memory, or some metadata about 
these artifacts?


I think it keeps only the metadata. But something weird is going on; I'm 
trying to figure that out with the Maven folks.


Of course we need to fix the memory leaks, however the quick
solution is to reduce these huge memory footprints for now in a
easy way.


Could we throw more memory at Hudson to work around these problems until
Hudson becomes smarter about caching data?

Currently 2.5 GB of memory is set aside for the Hudson JVM. If we are 
able to get rid of these huge memory consumers, I guess that amount 
should be OK.
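Throwing more memory at it would just be a launch-flag change (a hypothetical sketch; the actual launch command, heap size, and paths used at eclipse.org are not known here):

```shell
# Hypothetical: raise the heap ceiling beyond the current 2.5 GB and
# capture a dump automatically if the JVM still runs out of memory.
# All values and paths are illustrative.
java -Xmx4g -Xms1g \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/var/log/hudson \
     -jar hudson.war
```

As Winston notes, though, this only buys time; the large consumers still need to shrink.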


- Winston

--
Matthias




Re: [cross-project-issues-dev] tycho-gmp.gmf.tooling builds consumes enormous amount of memory

2012-02-23 Thread Winston Prakash



On 2/23/12 2:06 AM, Mickael Istria wrote:

Hi Denis, Winston, all,

Right.  So I propose that the tycho-gmp.gmf.tooling
job be moved over to our Sandbox so that its memory consumption does 
not impact our Hudson instance.  Any objections?

How much trouble does this job cause to other jobs? Is it solely 
responsible for Hudson being slow? Would removing it from Hudson save the 
world?
Certainly not. As I mentioned earlier, I'm trying to knock down the large 
memory consumers. The overall memory occupied by this job is a lot less 
than the memory occupied by hudson-test-harness, which I fixed already. I'm 
just looking to see if I can do something in this job as well to reduce 
memory, but that alone is not enough.


If yes, then I think it is fair to move it TEMPORARILY to the sandbox 
while the issue is not fixed. But I'd be OK with that only if someone 
can promise the GMF Tooling team that the issue (which is in the GMF 
build? in the GMF job? in Maven? in Hudson?) will be fixed in a short 
delay; otherwise the CI job for GMF Tooling will be forever in a 
sandbox, and that would have a bad effect on the project. Also, be sure 
that if the GMF Tooling job is moved to the sandbox, the same problem will 
happen on the sandbox...
If no, then it seems that the job does not cause real trouble, so 
let's keep it as it is and keep on enjoying life while the Hudson folks 
improve the memory consumption.


Is there any benefit in blaming this job and moving it to the sandbox?

No, it is not the cause of the trouble; I don't recommend moving it. It 
helps in a small way if we can fix whatever in this job takes up memory, 
as I did in hudson-test-harness. But that is not going to make the 
whole issue go away.


Winston,
Correct me if I'm wrong, but since the build happens in a forked 
process, it does not consume memory in the Hudson JVM by itself. Then the 
problem is clearly in Hudson or one of its plugins, isn't it? So the 
Maven/Tycho build by itself is not the culprit. If the memory is 
consumed by the Hudson JVM, then the issue is in Hudson.
Is there something we could change in the configuration of the job to make 
it consume less memory?
About the tests: GMF Tooling has 431 tests; I am not really sure. Some 
other projects probably have more tests on this Hudson instance and 
don't have the same problem. But maybe the content of the test reports is 
bigger in GMF Tooling (more execution traces maybe?), causing this huge 
memory consumption.

Any help is appreciated. You can get me on Skype: mickael.istria


After the build is done, Hudson keeps some of the build artifacts, such 
as JUnit or Maven metadata info, in memory. Since Hudson keeps all the 
builds of all jobs, this bloats the memory. The immediate solution is to 
reduce the footprint of these build artifacts, if we can. The long-term 
solution is to fix Hudson so that it does not keep all the builds in memory.


- Winston


--
Mickael Istria
Eclipse developer at JBoss, by Red Hat





Re: [cross-project-issues-dev] tycho-gmp.gmf.tooling builds consumes enormous amount of memory

2012-02-23 Thread Winston Prakash

Denis,

Let us not move the job. The fix I put in hudson-test-harness (reducing 
the footprint of the JUnit tests) should save a lot more than moving this 
job would. I'll request a restart during this weekend and do some more heap 
analysis early next week.


- Winston

On 2/23/12 7:26 AM, Denis Roy wrote:

On 02/23/2012 09:59 AM, Mickael Istria wrote:
When it is done, I'd be glad to have feedback on the overall 
performance benefit of moving just one job. It's just saving 100 MB on 
a 2 GB JVM. If nothing changes, then I'll nag to get the job back on 
hudson.eclipse.org.


The background for all this is bug 367238.  It is called "improve 
Hudson stability and performance" and it is marked "critical".  I've 
put a number of bugs as dependencies of that bug.  As I've mentioned, 
closing only one of the blockers will not fix the issue, but when a 
job consumes 10x the amount of RAM of other jobs, it is clearly 
misbehaving.


Let's resume this discussion on the appropriate bugs.  Thanks.

Denis




Re: [cross-project-issues-dev] tycho-gmp.gmf.tooling builds consumes enormous amount of memory

2012-02-23 Thread Winston Prakash



On 2/23/12 9:04 AM, Nicolas Bros wrote:


Exactly what does keeping JUnit test results in memory mean? 

I assume it only keeps the JUnit test report XML files chosen in 
"Publish JUnit test result report" in the job configuration. These can 
get big when the tests output a lot of text (stack traces, log messages, 
etc.).


Yes, that is correct. To know exactly, look at the XML file Hudson creates 
at /builds//junitResult.xml. If this file is large, then 
it is going to consume more memory.


- Winston


On Thu, Feb 23, 2012 at 5:52 PM, Ed Willink wrote:


Hi


After the build is done, Hudson keeps some of the build
artifacts such as JUNit or maven metadata info. Since hudson
keeps all the builds of all joba, this bloats up the memory.
The immediate solution is reduce the foot print of these build
artifacts, if we can. Long term term solution is fix Hudson
not to keep all the builds in memory

Exactly what does keeping JUnit test results in memory mean? For
the MDT/OCL JUnit tests, the derived TestCase classes have fields
that can reference useful and sometimes large models. These used
to be a major source of 'leakage' and sometimes prevented the tests
from running in 512M. After making sure that all fields were nulled in
tearDown(), they run in 32M. GMF Tooling uses MDT/OCL, so they may
be encountering a similar form of model lock-in.

   Regards

   Ed Willink


-- 
Nicolas Bros

R&D
tel: 06 75 09 19 88
nb...@mia-software.com
nbros@gmail.com
Mia-Software, 410 clos de la Courtine
93160 Noisy-le-Grand
http://www.mia-software.com
.: model driven agility :.







Re: [cross-project-issues-dev] tycho-gmp.gmf.tooling builds consumes enormous amount of memory

2012-02-23 Thread Winston Prakash



On 2/23/12 10:29 AM, Denis Roy wrote:

Winston,

The job is hogging memory -- it needs to go. 


The problem is, I can't say conclusively whether the memory consumption I 
saw is transient or permanent. It is based on one memory dump I took. I need 
to analyze more dumps; I have 4 so far.


I just looked at the job artifacts of tycho-gmp.gmf.tooling. I see a 
large file (145 MB):


hudsonbuild@hudson:/opt/public/jobs/tycho-gmp.gmf.tooling/builds/135> ls 
-la maven*.xml


-rw-r--r--  1 hudsonbuild callisto-dev 145185863 2012-02-21 13:49 
maven-build-23fc8cf8-9fe3-4a66-8e73-2612c606debf.xml


I suspect it could be because the maven3 job configuration is set to use a 
"private maven repository". I'll uncheck that, do a build, and see if 
that helps.


If it's the jenkins-maven3 plugin that is the culprit, then it needs 
to go too.  ASAP.


It is not the jenkins-maven3 plugin. It is our own, built by Sonatype. We 
have this set up in our installation at Oracle and we have no problem. I 
don't think it is a maven3 plugin problem either.




Is it?

Memory leaks on production, mission-critical systems will be 
tolerated no more.  I am angry now  :)

Understood. Bear with me; I'm spending my entire week exclusively on 
analyzing the performance issue. I need some more time to study it.


As per my findings, Hudson (and Jenkins too, for that matter) has a 
fundamental architectural problem of holding all the builds of all jobs in 
memory. This is OK for a small installation, but not for enterprise-level 
installations like Eclipse's. Hudson quickly became popular and is now used 
in enterprise environments, but it never evolved into an enterprise-level 
tool. Now that Hudson is a top-level Eclipse project, we at the Eclipse 
Foundation need to take pride in evolving Hudson into an enterprise tool. 
Being a lead for the Hudson project, I'm taking every step to do that. 
But I need time to work on it.


The correct solution is to fix Hudson and make it scale well in an 
enterprise environment. Unfortunately that cannot be done overnight; it 
may take weeks, even months, because Hudson is a huge tool. For the past 
several months my whole time has been spent cleaning up years of 
accumulated crud and making the code base IP clean, so that it can be 
released from Eclipse. Once that is done, I can attack the 
architectural problem.


The next best thing we can do for the time being is to keep the footprint 
of these builds smaller, so that Hudson does not run out of memory. I'm 
working on finding the potential areas of improvement among the builds.




BTW: I am very grateful for your help and insight so far.


No problem. At Oracle we have a larger installation of Hudson 2.1.2; I 
talked to that team and they don't see any issues. I'm comparing the 
configurations of the Eclipse and Oracle installations to see what the 
actual differences are.


- Winston




On 02/23/2012 11:16 AM, Winston Prakash wrote:

Denis,

Let us not move the job. The fix I put into hudson-test-harness 
(reducing the footprint of the JUnit tests) should save a lot more 
memory than moving this job would. I request a restart during this 
weekend so I can do some more heap analysis early next week.


- Winston

On 2/23/12 7:26 AM, Denis Roy wrote:

On 02/23/2012 09:59 AM, Mickael Istria wrote:
When it is done, I'd be glad to have feedback on the overall 
performance benefit of moving just one job. It's just saving 100 MB 
on a 2 GB JVM. If nothing changes, then I'll push to get the job 
back on hudson.eclipse.org.


The background for all this is bug 367238.  It is called "improve 
Hudson stability and performance" and it is marked "critical".  I've 
put a number of bugs as dependencies to that bug.  As I've 
mentioned, closing only one of the blockers will not fix the issue, 
but when a job consumes 10x the amount of RAM of other jobs, it is 
clearly misbehaving.


Let's resume this discussion on the appropriate bugs.  Thanks.

Denis





[cross-project-issues-dev] Hudson home on a slow NFS mount

2012-02-23 Thread Winston Prakash

I see that the Hudson jobs folder,

 jobs -> /opt/public/jobs

is an NFS mount from

wilma:/opt/public  1975741280 1248514496  626864960  67% /opt/public

This NFS mount seems to be really slow. To give an idea, I ran this 
simple command to survey the various build artifacts:


find `find . -name "builds" -print`  -name "*.xml" -ls

This command shouldn't take more than a few minutes to complete, but it 
has been running for more than 45 minutes and I still don't have the 
result. I'm afraid that having Hudson build on this NFS mount is going 
to degrade performance further. To boost performance we need to use a 
local disk for the entire Hudson home.
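To make that comparison concrete, a rough timing harness can be run against both the NFS mount and a local directory. This is a sketch; the paths in the usage comments are examples and should be replaced with your own mount points.

```shell
# scan_xml_count: a metadata-heavy directory scan of the kind Hudson
# performs constantly; prints how many XML files it touched.
scan_xml_count() {
    find "$1" -maxdepth 3 -name "*.xml" 2>/dev/null | wc -l | tr -d ' '
}

# Compare wall-clock time on the suspect NFS mount vs. a local disk, e.g.:
#   time scan_xml_count /opt/public/jobs   # NFS mount
#   time scan_xml_count /tmp               # local disk
```

If the NFS run is orders of magnitude slower for a comparable file count, that supports moving the Hudson home to local disk.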


I remember Matt mentioned it is there so that committers can get access to 
the artifacts etc. If someone can elaborate on the exact reason, we can 
find an alternate way to build on the local disk while still providing a 
way to get the artifacts from the public NFS mount, maybe using some 
plugin (the SCP plugin?) to copy (or publish) the required artifacts after 
the build is done.


- Winston




[cross-project-issues-dev] jobs having more than 15 old builds

2012-02-23 Thread Winston Prakash

  
  
To keep the Eclipse Hudson instance running smoothly, I have one
more request: limit the number of old builds to about 15. Listed below
are the jobs which currently keep more than 15 builds. It would really
help if you set the number of old builds to keep to 15 in your job
configuration.




amalgam-1.2.0  256
amp-integration  59
amp-nightly  37
cbi-scout-3.7  21
cbi-scout_rt-3.7.0-nightly  99
cbi-wtp-inc-xquery-conformance  16
egit  19
egit-github.gerrit  17
egit.gerrit  294
egit.test  27
emf-base-head  26
emf-compare-master  21
emf-core-head  36
emf-emfclient-maintenance  33
emf-emfstore-integration  151
emf-xcore-head  31
epp-mpc-e3.6  61
epp-mpc-e3.7  43
epp-mpc-e4  270
epp-mpc-release  29
epp-repository-build-helios  22
gef-nightly-tycho  24
gef4-nightly-tycho  20
gemini-management  118
gyrex-integration  26
indigo.epp-package-build  20
indigo.epp-repository-build  35
indigo.runAggregator  22
jetty-8  73
jgit.gerrit  100
juno.epp-package-build  21
juno.epp-repository-build  23
juno.runReports  34
m2t-acceleo-master  21
mdt-uml2-master  17
MWE-Language-nightly-HEAD  55
mylyn-nightly  25
mylyn-release  144
ptp-master-nightly  50
ptp-master-release  33
ptp-photran-nightly  35
ptp-photran-release  46
rap-incubator-1.5  60
rap-runtime  25
rap-runtime-old  33
rap-tooling  32
rmf-nightly  21
sapphire-0.4.x  20
sapphire-0.5.x  20
skalli  191
swtbot-e34  31
tm-master-nightly  17
tycho-gmp.gmf.runtime  20
tycho-gmp.gmf.tooling  20
tycho-gmp.gmf.tooling.maintenance  18
tycho-its-linux-nightly  30
tycho-its-win-nightly  178
tycho-mat-nightly  234
tycho-nightly  33
tycho-sitedocs-nightly  24
tycho.extras-nightly  28
udc.nightly  35
uomo  249
virgo.ide.snapshot  315
wtp-javaee-config-0.1.x  21
Xpand-nightly-HEAD  28
Xtext-nightly-HEAD  55
Xtext-test  35
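For reference, the "number of old builds to keep" setting corresponds to a log-rotator entry in the job's config.xml. The sketch below shows what it looks like with the suggested value of 15; the element names are from the Hudson/Jenkins config format and the exact serialization may vary by version.

```xml
<logRotator>
  <!-- -1 means "no limit by age"; only the build count is capped -->
  <daysToKeep>-1</daysToKeep>
  <numToKeep>15</numToKeep>
</logRotator>
```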

- Winston


  



Re: [cross-project-issues-dev] tycho-gmp.gmf.tooling builds consumes enormous amount of memory

2012-02-24 Thread Winston Prakash

Agreed. No arguments there.

Off topic: I wonder whether there should be a third option, a custom 
location for the maven repository. Say I have 10 jobs; if I set a private 
repository, all 10 jobs download artifacts separately and occupy 10x the 
space. However, I don't want my jobs to share the global local m2 
repository, for the reasons you mentioned. But if we had something like a 
custom location, all 10 jobs could share that location, reducing the space 
occupied by the m2 repository by a factor of 10. Do you think having such 
an option makes sense?
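Until such an option exists, a shared custom location can be approximated by pointing Maven at an explicit local-repository path. A sketch, where the path is an example: either per invocation with `mvn -Dmaven.repo.local=/opt/build/shared-m2-repo ...`, or via a job-specific settings file:

```xml
<!-- settings.xml fragment: all jobs invoking mvn with "-s settings.xml"
     resolve into one shared repository instead of ten private copies.
     The path is an example. -->
<settings>
  <localRepository>/opt/build/shared-m2-repo</localRepository>
</settings>
```

Note this reintroduces some of the shared-corruption risk discussed below, limited to the jobs that opt in.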


- Winston

On 2/24/12 5:28 AM, Matthias Sohn wrote:

2012/2/24 Mykola Nikishov mailto:m...@mn.com.ua>>

On 02/24/2012 09:29 AM, Matthias Sohn wrote:

> I believe that many projects (maybe all ?) use private maven
repository
> inside job workspace
> for the following reasons:

[...]

>   * in general build jobs run into big trouble if some artifacts in a
>     maven repository shared across build jobs are corrupted, since build
>     engineers can't fix this on their own: they don't have direct write
>     access to the shared maven repository, so they can't delete the
>     corrupt artifacts. With a job-local, private maven repository this
>     can easily be fixed by wiping the job's workspace, which throws away
>     the private maven repository.

It's not about corruption only. A private repository provides better
isolation for dependent projects in terms of a) direct dependencies and
b) versions of plugins:

a) For instance, running 'mvn install' for JGit will not affect EGit
(which depends on JGit) in any way.

b) For instance, if some project changes its maven-javadoc-plugin
version to something 'latest and greatest', all other projects that use
the same plugin without explicitly locking down its version would pick
up the new version automagically. The result? Your build was good a
couple of days ago but now it's broken without any apparent reason.


for maven dependencies we lock down all the versions we use

--
Matthias




[cross-project-issues-dev] Hudson build best practice

2012-02-24 Thread Winston Prakash
While investigating the Hudson performance issues, I noticed that some 
jobs take 2-3 hours to build and are built several times a day. When I 
glanced at the change sets, some of the changes were too insignificant to 
warrant a full-fledged build. I don't know much about what these jobs 
build, so I can't comment on what they should do.


We had a very similar situation with the Hudson code base builds. The core 
build took about 2.5 hours to finish. I wasn't happy about it; I wanted my 
continuous-build feedback within 5 minutes. So we restructured the code 
base into smaller modules and used the upstream/downstream paradigm to 
create a pipeline of builds. Now none of the jobs in the pipeline takes 
more than 10 minutes to build. Of course, building the entire pipeline 
still takes about 2-3 hours. However, since it is a branched pipeline, 
failure feedback from the branches arrives much faster, and the downstream 
builds that would have been wasted never run.
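The chaining itself is plain Hudson configuration: each upstream job lists its downstream jobs under "Build other projects". In a job's config.xml that corresponds to a build-trigger publisher, sketched below; the job names are illustrative and the exact serialization may differ between versions.

```xml
<publishers>
  <hudson.tasks.BuildTrigger>
    <!-- downstream stages of the pipeline, triggered on success -->
    <childProjects>hudson-core-tests, hudson-war</childProjects>
    <threshold>
      <name>SUCCESS</name>
    </threshold>
  </hudson.tasks.BuildTrigger>
</publishers>
```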


Susan and I will be at EclipseCon 2012 (Reston, Virginia), presenting 
"Best practices for using Hudson as part of your Agile strategy", based 
on our experience of restructuring the Hudson code base for a better CI 
experience. Hope to see some of you there.


PS: Sorry for the plug; I thought it might help some of you get a 
better CI experience :-)


- Winston


Re: [cross-project-issues-dev] Hudson job keeps rebuilding

2012-02-27 Thread Winston Prakash

David,

I already announced the solution (pasted here again); I'm waiting for 
the webmaster to implement it.


>

We have two options

- Change the label of the node "hudson-slave1" to "hudson-slave1" (currently 
its label is "hudson-slave2"). Then change the jobs
which are tied to the label "hudson-slave2" to "hudson-slave1".

- Leave the label of the node "hudson-slave1" as "hudson-slave2", but change 
all the jobs which are tied to the label "hudson-slave1" to
"hudson-slave2".

I think option 1 is easier because only three jobs are tied to the label 
"hudson-slave2", while more than 35 jobs are tied to the label
"hudson-slave1".

<-


Also, since this requires a restart, it would help if the job owners fix 
their jobs with the other two suggestions I made earlier:


- Specify in the job to retain only about 15-20 old builds
- Reduce the amount of standard output (debug trace etc) in JUnit tests

With that done, along with the NFS change, I expect a better performing 
Hudson.


- Winston


On 2/27/12 9:27 AM, David M Williams wrote:

If you are interested to know the reason (lengthy read though) here is

the note I shared with Matt and Denis.

Thanks for sharing the information. It seems very significant (little that
I understand it). I would encourage you to open/track such issues in
bugzilla, as it makes it easier to track progress and provides a better
long-term record of what's happened, what was changed, etc.

Though communication here is good too, and sounds like at some point you'll
need to announce "the solution" and who has to change what.

Greatest thanks,





From:   Winston Prakash
To: Cross project issues,
Date:   02/26/2012 03:06 PM
Subject:Re: [cross-project-issues-dev] Hudson job keeps rebuilding
Sent by:cross-project-issues-dev-boun...@eclipse.org



I was also baffled by this repeated building. Initially we thought it was
because of the Gerrit plugin. After a couple of days of poring through
everything, I finally found the reason. I discussed the reason and a
possible solution with Matt and Denis. Hopefully we can fix this next
week.

If you are interested to know the reason (lengthy read though) here is the
note I shared with Matt and Denis.

When you tie a job to a slave, the following is written in the job's
configuration file:

<assignedNode>hudson-slave1</assignedNode>

When a build finishes, a build.xml file is created under the job's builds
directory. This file has

<builtOn>hudson-slave1</builtOn>

When the Git plugin does a poll, it first checks which node the job is tied
to (in this case "hudson-slave1") and then checks which node the last build
happened on (in this case again "hudson-slave1"). If the "builtOn" node
is not the same as the "assignedNode", the Git poll triggers a build.

Strangely, the Git plugin still triggers the build even though both are the
same. I wrote a small Groovy script to verify that both are the same: the
object (@5301ca0c) and the display name match.
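A sketch of the kind of check that can be run from the Hudson script console is below. The job name is hypothetical, and the model API calls (getItem, getAssignedLabel, getLastBuild, getBuiltOn, Label.contains) are assumed from Hudson core.

```groovy
// Script-console sketch: compare the label a job is tied to with the
// node its last build actually ran on (hypothetical job name).
def job = hudson.model.Hudson.instance.getItem("some-repeating-job")
def assigned = job.assignedLabel          // what the job is tied to
def builtOn  = job.lastBuild?.builtOn     // where the last build ran
println "assigned label: ${assigned} / ${assigned?.displayName}"
println "built on node : ${builtOn} / ${builtOn?.displayName}"
// Label.contains(Node) answers whether the poll should treat them as equal
println "label contains node? " + assigned?.contains(builtOn)
```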




I noticed one odd thing. The label I printed out above says
"hudson-slave1", which supposedly belongs to the node "hudson-slave1".
Strangely, the node "hudson-slave1" has two labels, "build2" and
"hudson-slave2". There is no such label as "hudson-slave1".




However, if I look at the nodes tied to the label "hudson-slave1", it shows
the corresponding node as "hudson-slave1". This is really odd, because
there is no such label.




Coincidentally, all the jobs which are going crazy and building repeatedly
are tied to the mysterious label "hudson-slave1", which in fact doesn't
exist.

I changed the tied label of one of the jobs to "hudson-slave2", which is
the label of the node "hudson-slave1", and it stopped building repeatedly.

It appears the labels of the nodes are messed up. If we clean up this mess,
all those jobs going crazy will get their sanity back. We have two options
to clean up the mess:

- Change the label of the node "hudson-slave1" to
"hudson-slave1" (currently its label is "hudson-slave2"). Then change the
jobs which are tied to the label "hudson-slave2" to "hudson-slave1".

- Leave the label of the node "hudson-slave1" as "hudson-slave2", but
change all the jobs which are tied to the label "hudson-slave1" to
"hudson-slave2".

I think option 1 is easier because only three jobs are tied to the label
"hudson-slave2", while more than 35 jobs are tied to the label
"hudson-slave1".

BTW, this also requires a Hudson restart, and the good news is we can put
the Gerrit plugin back, because it seems it has nothing to do with the
repeated builds.

- Winston

On 2/26/12 9:35 AM, Doug Schaefer wrote:
   I'm seeing the same thing. The build

Re: [cross-project-issues-dev] Hudson job keeps rebuilding

2012-02-28 Thread Winston Prakash
The webmaster (Matt) removed the label "hudson-slave2", and since then my 
jobs which were building repeatedly have stopped doing so. Are others 
seeing the same?


I have filed a bug to keep track of the issue

372755 <https://bugs.eclipse.org/bugs/show_bug.cgi?id=372755> - Hudson 
confuses node name and label name which cause git poll to trigger build 
repeatedly


- Winston


Re: [cross-project-issues-dev] Hudson Jetty conversion

2012-02-28 Thread Winston Prakash



On 2/28/12 11:42 AM, Webmaster(Matt Ward) wrote:

Hi Folks,

 I know it's short notice but as it's quiet for Hudson this week I'm 
going to move us off the Winstone servlet and onto Jetty.  I've got 
everything ready to go so at 4pm I'll shut Hudson down and then 
startup the Jetty instance.  I expect we'll be offline for about 10 
minutes.


Nice. The good news is that for Hudson 3.0.0 we won't have to do this, 
because we have replaced Winstone with Jetty as the bundled server.


I'm interested to know, from the Jetty folks, whether there is any 
performance difference between the bundled Jetty server and a standalone 
server.


- Winston



-Matt.





Re: [cross-project-issues-dev] Hudson Jetty conversion

2012-02-28 Thread Winston Prakash
Let me clarify what I meant by "bundled" (I think the correct word is 
embedded) and "standalone". You can use hudson.war in two ways.


Run the war as a regular executable jar using java -jar hudson.war. In 
this case it uses the Jetty jars bundled within the war and starts an 
embedded Jetty, which in turn serves the hudson.war itself:


// Assumes the Jetty 7/8 API (org.eclipse.jetty.server.Server,
// org.eclipse.jetty.webapp.WebAppContext); Main is the war's bootstrap class.
Server server = new Server(8080);
ProtectionDomain protectionDomain = Main.class.getProtectionDomain();
URL warUrl = protectionDomain.getCodeSource().getLocation();
WebAppContext context = new WebAppContext();
context.setWar(warUrl.toExternalForm());
server.setHandler(context);
server.start();

The other way (which Matt is doing now) is to drop the war file into the 
"webapps" folder of a normal Jetty distribution.


I'm wondering whether there is any performance difference between the two.

- Winston

On 2/28/12 12:16 PM, Jesse McConnell wrote:

not sure what is meant by 'bundled' vs 'standalone' really

maybe osgi vs normal distribution? or embedded vs normal distribution?

in those cases there shouldn't be any real default difference, osgi
has its natural bit of classloader muckity muck muck but in terms of
processing of a request from start to finish...not that I know of.

more info on what is meant by 'bundled' vs 'standalone' would help
clarify that though

cheers,
jesse

--
jesse mcconnell
jesse.mcconn...@gmail.com



On Tue, Feb 28, 2012 at 14:08, Winston Prakash
  wrote:


On 2/28/12 11:42 AM, Webmaster(Matt Ward) wrote:

Hi Folks,

  I know it's short notice but as it's quiet for Hudson this week I'm going
to move us off the Winstone servlet and onto Jetty.  I've got everything
ready to go so at 4pm I'll shut Hudson down and then startup the Jetty
instance.  I expect we'll be offline for about 10 minutes.


Nice. Good news is, for Hudson 3.0.0 we don't have to do this, because we
replaced Winstone with Jetty as a bundled server.

I'm interested to know, from Jetty folks, if there is a difference between
Jetty bundled server and Standlone server in terms of performance.

- Winston


-Matt.





[cross-project-issues-dev] gerrit.eclipse.org?

2012-05-01 Thread Winston Prakash
Is it possible to map https://git.eclipse.org:29418 to a virtual host,
http://gerrit.eclipse.org? That would make cloning from a Gerrit-enabled
repository easier for us (we wouldn't have to remember the port number, etc.).

- Winston


Re: [cross-project-issues-dev] gerrit.eclipse.org?

2012-05-01 Thread Winston Prakash
Ah, you are right. I forgot it is the SSH port.
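Since it is the SSH port, the client side can already hide it with a host alias. A sketch of a possible ~/.ssh/config entry; the alias name and repository path are examples:

```
# ~/.ssh/config: let "gerrit.eclipse.org" stand in for git.eclipse.org:29418
Host gerrit.eclipse.org
    HostName git.eclipse.org
    Port 29418

# then: git clone ssh://committer_id@gerrit.eclipse.org/jgit/jgit.git
```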

- Winston

On 5/1/12 2:11 PM, Matthias Sohn wrote:
> 2012/5/1 Winston Prakash  <mailto:winston.prak...@gmail.com>>
>
> Is it possible to  map https://git.eclipse.org:29418 to a virtual host
> http://gerrit.eclipse.org. That would  make cloning from a Gerrit
> enabled repository easier for us (don't have to remember the port
> no etc).
>
>
> 29418 is the ssh port for Gerrit, so there is no point to try https://
> on this port.
> Gerrit's Jetty server is listening at https://git.eclipse.org/r/.
> If you are using the Mylyn Gerrit connector [1] you even don't need to
> remember
> this URL as the Gerrit connector has this URL already preconfigured.
>  
> [1] http://www.eclipse.org/reviews/gerrit/
>
> -- 
> Matthias
>
>


Re: [cross-project-issues-dev] Fwd: Build failed in Hudson: mdt-etrice-nightly #259

2012-05-02 Thread Winston Prakash
When I look at the log file, it seems that somehow the remote class loader
has attempted to load duplicate classes. How is the affected slave
restarted? Is it in the following sequence?

- Disconnect the slave from the master
- Physically restart the machine
- Reconnect the slave to the master

- Winston

On 5/2/12 6:46 AM, Denis Roy wrote:
> FATAL: cannot assign instance of hudson.EnvVars
>
> That error is an indication that we either need to restart the
> affected slave, or the master, or both.  I believe Matt has restarted
> slave6; please try again.
>
> Thanks
>
> On 05/02/2012 06:20 AM, Henrik Rentz-Reichert wrote:
>> Hi all,
>>
>> is this exception special for slave2?
>>
>> I haven't changed our build config for quite a while. We also had
>> successful builds since we switched to Gerrit recently.
>>
>> Can anybody help?
>>
>> Thanks,
>> Henrik
>>
>>  Original-Nachricht 
>> Betreff: Build failed in Hudson: mdt-etrice-nightly #259
>> Datum:   Wed, 2 May 2012 06:11:13 -0400 (EDT)
>> Von: hudsonbu...@eclipse.org
>> An:  h...@protos.de
>>
>>
>>
>> See 
>>
>> --
>> Started by an SCM change
>> Building remotely on hudson-slave2
>> Checkout:mdt-etrice-nightly / 
>>  - 
>> hudson.remoting.Channel@2d344d4d:hudson-slave2
>> Using strategy: Default
>> Last Built Revision: Revision 1b0c3ef0efd3aff268c480d58a90350a142f3d34 
>> (origin/master)
>> FATAL: cannot assign instance of hudson.EnvVars to field 
>> hudson.plugins.git.GitSCM$3.val$environment of type hudson.EnvVars in 
>> instance of hudson.plugins.git.GitSCM$3
>> java.lang.ClassCastException: cannot assign instance of hudson.EnvVars to 
>> field hudson.plugins.git.GitSCM$3.val$environment of type hudson.EnvVars in 
>> instance of hudson.plugins.git.GitSCM$3
>>  at 
>> java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2039)
>>  at 
>> java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1212)
>>  at 
>> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1952)
>>  at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1870)
>>  at 
>> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
>>  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1328)
>>  at 
>> java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1946)
>>  at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1870)
>>  at 
>> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1752)
>>  at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1328)
>>  at java.io.ObjectInputStream.readObject(ObjectInputStream.java:350)
>>  at hudson.remoting.UserRequest.deserialize(UserRequest.java:178)
>>  at hudson.remoting.UserRequest.perform(UserRequest.java:98)
>>  at hudson.remoting.UserRequest.perform(UserRequest.java:48)
>>  at hudson.remoting.Request$2.run(Request.java:283)
>>  at 
>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>>  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>  at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>  at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>  at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>  at java.lang.Thread.run(Thread.java:662)
>>
>>
>
>
>


Re: [cross-project-issues-dev] Fwd: Build failed in Hudson: mdt-etrice-nightly #259

2012-05-02 Thread Winston Prakash


On 5/2/12 7:42 AM, Denis Roy wrote:
> We disconnect and reconnect (but do not restart the physical machine)
> and that usually solves it.

Yes, that should work. Next time, could you make sure the Java process
that runs the slave agent on the slave machine is properly shut down,
i.e. that the process was correctly killed by the shutdown initiated from
the master?

- Winston


[cross-project-issues-dev] Job Cascading or Inheritance in Hudson 2.2.x

2012-09-25 Thread Winston Prakash
Since the Eclipse Hudson instance has been upgraded to 2.2.1, I thought 
it would be worth mentioning a helpful feature we introduced in Hudson 
2.2.0. It is especially useful if you have several jobs with similar 
configurations and maintaining them is painful.


In Hudson 2.2.0, we introduced a concept called "Job Cascading or 
Inheritance": the ability to inherit job properties from a parent job 
in a cascading fashion down the inheritance tree. The main points of 
this feature are:


 * Any job of the same type can be used as a parent job for inheritance.
 * Cyclic inheritance is prohibited.
 * All job properties of a child job are, by default, inherited from
   the parent.
 * Users can override job properties by changing them on the child
   job's configuration page.
 * When overridden, a property is highlighted to show that it has been
   overridden.
 * Users can revert an overridden property to the parent's value using
   a simple revert button on the left-hand side.

You can read more about this feature here:

http://wiki.hudson-ci.org/display/HUDSON/Job+cascading+or+Inheritance
http://hudsoncentral.wordpress.com/2011/10/28/cascading-projects-released-as-beta/
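The inheritance rules above can be sketched roughly as follows. This is 
a minimal illustrative model only, not Hudson's actual implementation; 
all class and method names here are hypothetical:

```python
class CascadingJob:
    """Toy model of Hudson 2.2.0 job cascading (names are hypothetical)."""

    def __init__(self, name, parent=None):
        # Cyclic inheritance is prohibited: reject a parent chain that
        # already contains a job with this name.
        ancestor = parent
        while ancestor is not None:
            if ancestor.name == name:
                raise ValueError("cyclic inheritance is prohibited")
            ancestor = ancestor.parent
        self.name = name
        self.parent = parent
        self._overrides = {}  # properties changed on this job's config page

    def set_property(self, key, value):
        """Override a property locally (marks it as overridden)."""
        self._overrides[key] = value

    def revert(self, key):
        """Revert an overridden property to the parent's value."""
        self._overrides.pop(key, None)

    def is_overridden(self, key):
        return key in self._overrides

    def get_property(self, key):
        """By default, properties cascade down from the parent chain."""
        if key in self._overrides:
            return self._overrides[key]
        if self.parent is not None:
            return self.parent.get_property(key)
        return None
```

For example, a child created with `CascadingJob("child", parent=base)` 
sees every property of `base` until it overrides one, and reverting the 
override restores the inherited value.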

- Winston
___
cross-project-issues-dev mailing list
cross-project-issues-dev@eclipse.org
https://dev.eclipse.org/mailman/listinfo/cross-project-issues-dev


Re: [cross-project-issues-dev] Unnecessary build load on hudson.eclipse.org

2012-12-18 Thread Winston Prakash
We saw the same thing a couple of months ago. We were not able to 
determine what was causing the git plugin to go into this state, so I 
added some diagnostic log output to the git plugin. Do we see anything 
related to this in the server log file?

- Winston


On Dec 18, 2012, at 5:38 AM, Denis Roy  wrote:

> I've noticed the same thing about Papyrus and Xtend / Xtext.  They seem to 
> always be building.
> 
> And I thought we had moved from Continuous Integration to Continually 
> Integrating.
> 
> I'll see if there is any tweaking we can do on our side.
> 
> Denis
> 
> 
> 
> 
> On 12/18/2012 07:40 AM, Markus Tiede wrote:
>> Hello,
>> 
>> I noticed that there is unnecessary build / job load on 
>> hudson.eclipse.org being triggered by an invalid / wrongly determined 
>> SCM change in the jobs "swtbot-e36" and "swtbot-e37" [1].
>> 
>> Those jobs seem to have run roughly 18 x times during the last five 
>> months, as they check for an SCM change every minute (which seems to 
>> trigger a build) even though there is none.
>> 
>> @SWTBot-Team: Could you please verify and resolve this issue?
>> @Webmasters: Is there any way of sanity checking the execution frequency of 
>> hudson jobs (aka hudson build time / job execution 
>> "time-/-amount-/-quota"-report ;-) )?
>> 
>> With best regards,
>> MarkusT
>> 
>> [1] https://hudson.eclipse.org/hudson/view/SWTBot/
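As an aside, the failure mode described above (a change check that 
wrongly reports a change, polled once a minute) can be sketched as 
follows. The detector here is a hypothetical stand-in, not the git 
plugin's actual code:

```python
def should_build(detect_change):
    """SCM polling: trigger a build whenever the poll reports a change."""
    return detect_change()

# A wrongly determined change (the detector always answers "changed"),
# combined with a one-minute poll interval, fires a build on every poll.
faulty_detector = lambda: True  # hypothetical stand-in for the broken check
polls_per_day = 24 * 60
builds = sum(should_build(faulty_detector) for _ in range(polls_per_day))
# builds == 1440: one spurious build per minute, all day long
```

This is why a single misbehaving change detector can keep a job 
"always building" and load the shared server continuously.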
___
cross-project-issues-dev mailing list
cross-project-issues-dev@eclipse.org
https://dev.eclipse.org/mailman/listinfo/cross-project-issues-dev


Re: [cross-project-issues-dev] [HIPP] Visibility of Hudson configuration for anonymous users

2014-01-09 Thread Winston Prakash
I would suggest upgrading the HIPP instances to Hudson 3.1.* and 
enabling Team Management authorization instead of Project Matrix 
Authorization. This has a few advantages:


- Sys admins can still be Eclipse IT
- Team admins (Eclipse project leads) can be created to manage team jobs
- Team admins can set job permissions for other team members
- Team admins can selectively set which jobs are public and which are 
private


More details about Hudson Team Management are available here:
http://hudsoncentral.wordpress.com/2013/09/02/using-hudson-teams-part-1-the-basics/
http://hudsoncentral.wordpress.com/2013/09/04/using-hudson-teams-part-2-mapping-roles-to-teams/

Though a team admin can set the "Job Configure" permission, the "Job 
View" permission cannot be set as of 3.1.1. But that can easily be 
implemented for 3.1.2, which is expected to be released within a month.


- Winston



Anyway, yes, Hudson always lets you activate per-project (job) 
permissions and thus change permissions for a given user on a per-job 
basis.




In my experience, Hudson permissions are additive, not subtractive. So 
if we globally grant anonymous users permission to view a job 
configuration, that permission cannot be removed inside an individual 
job's configuration.


For example, all the Hudson servers allow anonymous users to view jobs 
globally, and it is not possible for an individual job to remove this 
permission, even if you uncheck anonymous read access in the job's 
permissions matrix.
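The additive behaviour described above can be illustrated with a small 
sketch (a hypothetical model, not Hudson's actual authorization code): 
the effective permission set is the union of global and per-job grants, 
so a per-job matrix can only add rights, never take away one that was 
granted globally.

```python
def effective_permissions(global_grants, job_grants, user):
    """Effective permissions are the UNION of global and per-job grants.

    Because the model is additive, leaving a box unchecked in the job's
    permissions matrix cannot revoke a permission that was already
    granted at the global level.
    """
    return set(global_grants.get(user, ())) | set(job_grants.get(user, ()))

# Anonymous read is granted globally; the job's matrix leaves it unchecked.
global_grants = {"anonymous": {"Job.Read"}}
job_grants = {"anonymous": set()}
# "Job.Read" is still effective for anonymous, despite the unchecked box.
```

A job can still *add* rights (e.g. grant a team member "Job.Configure" 
locally); it just cannot subtract globally granted ones.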



Thanh
___
cross-project-issues-dev mailing list
cross-project-issues-dev@eclipse.org
https://dev.eclipse.org/mailman/listinfo/cross-project-issues-dev




Re: [cross-project-issues-dev] [HIPP] Visibility of Hudson configuration for anonymous users

2014-01-13 Thread Winston Prakash

On 1/13/14, 1:22 AM, Mikaël Barbero wrote:

+1 obviously !!

Denis, Thanh,
Do you need a bug about the migration to Hudson 3.1.*?

Winston,
Do you also need a bug on Hudson for the new job configuration view permission?


Yes please.

Thanks,

Winston


Best regards,
Mikael


Le 9 janv. 2014 à 18:55, Denis Roy  a écrit :


On 01/09/2014 11:43 AM, Winston Prakash wrote:

I would suggest upgrading the HIPP instances to Hudson 3.1.* and enabling Team 
Management authorization instead of Project Matrix Authorization. This has a few 
advantages:

- Sys admins can still be Eclipse IT
- Team admins (Eclipse project leads) can be created to manage team jobs
- Team admins can set job permissions for other team members
- Team admins can selectively set which jobs are public and which are 
private

+1 !


Though a team admin can set the "Job Configure" permission, the "Job View" 
permission cannot be set as of 3.1.1. But that can easily be implemented for 3.1.2, 
which is expected to be released within a month.

- Winston

Thanks, Winston.  That would be great.

Denis
___
cross-project-issues-dev mailing list
cross-project-issues-dev@eclipse.org
https://dev.eclipse.org/mailman/listinfo/cross-project-issues-dev



Re: [cross-project-issues-dev] [HIPP] Visibility of Hudson configuration for anonymous users

2014-02-28 Thread Winston Prakash

Hi Mikaël,

Your suggestion has been implemented, and Hudson 3.1.2 has been released with the fix.

(Ref: 
http://dev.eclipse.org/mhonarc/lists/cross-project-issues-dev/msg10207.html)


- Winston

Winston,
Do you also need a bug on Hudson for the new job configuration view 
permission?


Yes please.



Done. https://bugs.eclipse.org/bugs/show_bug.cgi?id=425639

Best regards,



___
cross-project-issues-dev mailing list
cross-project-issues-dev@eclipse.org
https://dev.eclipse.org/mailman/listinfo/cross-project-issues-dev

