Hello, TL;DR: I just updated Jenkins and its plugins. If things stop working correctly, please let me know.
Some details below, in case I'm not here when things start to break down...

I updated Jenkins and its plugins to the latest versions, hoping to fix the problem we've been having lately where we would only ever get a single EC2 slave. The result was an AWS EC2 plugin that started many, many EC2 slaves, but on the Jenkins side mapped all slaves to the same URL, so multiple builds ran concurrently on the same EC2 instance, which obviously resulted in many failures.

I rolled back the AWS EC2 plugin from 1.42 to 1.39 (as I had to do a few weeks ago), and things seem to be back to normal. It even works better than before I attempted the upgrade: the plugin now correctly spawns multiple slaves as required. Frankly, I don't understand what is going on, but it works again, so I'll stop touching it. I suppose I should take the time to investigate, attempt to reproduce the problem, and report it to the plugin maintainers. I currently do not have a few days to spare for that, so it'll wait...

For the record, I also had to do the following during the upgrade:

- Update the AWS permissions for the EC2 plugin: https://wiki.jenkins.io/display/JENKINS/Amazon+EC2+Plugin#AmazonEC2Plugin-Version1.41(Oct24th,2018)
- Install a plugin to ensure running builds are no longer allowed to do whatever they want (~root permissions): https://jenkins.io/doc/book/system-administration/security/build-authorization/

Cheers,

Yoann Rodière
Hibernate NoORM Team
yo...@hibernate.org

_______________________________________________
hibernate-dev mailing list
hibernate-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/hibernate-dev