Re: [ClusterLabs] Resources within a Group

2015-08-28 Thread Zhen Ren
Hi Jorge,

It sounds like colocation is what you want.

Please take a look here:
http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-resource-sets-colocation.html
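For illustration, a minimal crm shell sketch (all resource, group, volume, and
mount-point names below are made up; adjust to your setup). Members of a group
are implicitly ordered and colocated, so if the LVM resource fails to start,
the Filesystem resource after it is not attempted:

```shell
# Hypothetical crm shell configuration: activate a VG, then mount a
# filesystem from it. Group members start in listed order; a failed
# member stops further processing, which is the PAM-stack-like behaviour
# asked about below.
crm configure primitive my_vg ocf:heartbeat:LVM \
    params volgrpname="vg01" \
    op monitor interval="30s"
crm configure primitive my_fs ocf:heartbeat:Filesystem \
    params device="/dev/vg01/lv01" directory="/mnt/data" fstype="ext3" \
    op monitor interval="20s"
# If my_vg fails to start, my_fs is never started on that node.
crm configure group my_group my_vg my_fs
```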
 

 
 Hi, 
  
 I'm on SLES 11 SP4 (Pacemaker 1.1.12) and still learning all this :) 
 I'm wondering if there's a way to control the resource startup behaviour 
 within a group? 
  
 For example, I have an LVM resource (to activate a VG) and the next one: 
 the Filesystem resource (to mount it).  If the VG activation fails I see 
 errors afterwards trying to mount the filesystem.  If there's something 
 like If the first resource fails, stop further processing? (sort of 
 like one can control the stacking of PAM modules). 
  
 Thanks, 
 Jorge 
  
 ___ 
 Users mailing list: Users@clusterlabs.org 
 http://clusterlabs.org/mailman/listinfo/users 
  
 Project Home: http://www.clusterlabs.org 
 Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf 
 Bugs: http://bugs.clusterlabs.org 
  
  



--
Eric, Ren






[ClusterLabs] Antw: Re: Problem in Xen RA (SLES11 SP3)?

2015-07-27 Thread Zhen Ren
Hi Ulrich,

If you think the problem is related to a specific vendor, you had better
file a bug report, as Lars said.

For SLES 11 SP3, you can report it here: https://bugzilla.suse.com/index.cgi 
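For reference, a rough sketch of clearing such a stuck domain by hand (the
domain name "v08" is taken from the listing quoted below; substitute your own,
and note these commands assume the legacy xm toolstack shipped with SLES 11):

```shell
# Sketch only: a domain stuck in the paused (p) state after a failed start
# is not actually running and can be torn down before retrying elsewhere.
xm list v08        # State column shows --p--- while the domain is paused
xm destroy v08     # destroy the stuck domain so a clean start is possible
```

Newer Xen releases use the xl toolstack with the same subcommands.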

 
 Lars Marowsky-Bree l...@suse.com wrote on 17.07.2015 at 09:50 in message
 20150717075045.gu6...@suse.de: 
  On 2015-07-09T17:13:01, Ulrich Windl ulrich.wi...@rz.uni-regensburg.de 
  wrote: 
   
  I was watching our Xen-cluster when there were problems, and I found this: 
  Name        ID      Mem  VCPUs  State   Time(s)
  Domain-0     0  1340124         r-        560.6
  [...other domains running...]
  v08          8    16384      1  --p---      0.0
  v09          9    16384      0  --p---      0.0
   
  Jul  9 17:06:04 h01 Xen(prm_xen_v08)[12923]: INFO: Xen domain v08 will be  
  stopped (timeout: 400s) 
  Jul  9 17:06:09 h01 Xen(prm_xen_v09)[12922]: INFO: Xen domain v09 already  
  stopped. 
   
  Obviously this is not true: when the cluster tried to start the domain, it 
  never left that p-state. But to re-create the domain, I guess the cluster 
  has to destroy the existing domain. 
   
  Any insights on this? 
   
  The usual answer: please file a bug report. 
  
 So you are saying it's a bug? 
  
 Anyway, what had happened was this: Someone changed the VM configuration of  
 another VM to get more memory. Then the cluster tried to start all VMs on a  
 single node, but that node (Domain-0) did not have enough memory... Thus the  
 VMs were staying in that p-state. 
  
 What I guess is this: such a domain is not actually running (and needs to be  
 destroyed (stopped) before any attempt to start the VM elsewhere is made). 
  
 Can you confirm? 
  
 Another question is why Xen doesn't fail the start of such a VM more or less  
 immediately; it seems Xen waits indefinitely for more memory to become available. 
  
 Regards, 
 Ulrich 
  
  
  
  
  



--
Eric, Ren



