nodeset <noderange> broadcasts to all service nodes, and then each
service node checks whether any of the nodes in the noderange belongs to
it. That is why you see it go to all service nodes; they don't
necessarily do any work.
Assigning a compute node to more than one service node means using
pools. That is the only mechanism (see the sketch below).
I am not really familiar with all of nodeset, so I am not sure exactly
how much of the processing is done on every service node, or why. Maybe
one of the nodeset experts can explain more.
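
For reference, a pool is just a comma-delimited list in the noderes
servicenode attribute. A minimal sketch, using hypothetical service node
names sn1 and sn2:

    # Put compute1 into a pool of two service nodes
    chdef -t node -o compute1 servicenode=sn1,sn2

    # Confirm what ended up in noderes
    lsdef -t node -o compute1 -i servicenode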

Lissa K. Valletta
2-3/T12
Poughkeepsie, NY 12601
(tie 293) 433-3102





From:   Dave Barry <dbarry1...@gmail.com>
To:     xCAT Users Mailing list <xcat-user@lists.sourceforge.net>
Date:   07/19/2011 11:14 AM
Subject:        Re: [xcat-user] Confused regarding service node pools and
            defining specific service nodes



Hi Lissa,

Thanks for the help and clarification! I've been able to get service
nodes operating properly; I'm just trying to understand their behavior
better. Let me see if I can clarify and perhaps simplify my questions. I
know some of my earlier email seemed to ramble :-)

* Question 1: Why does running nodeset for a node seem to send that
nodeset to *all* service nodes, regardless of which service node the
node is assigned to? I would think that if a node is assigned only to
the management node, a nodeset for that node would go only to the
management node, not to Service01 as well. However, when I have, for
example, c1n01 assigned to no service nodes at all, running nodeset on
c1n01 gets a response from both the management node and Service01. I see
that c1n01 is getting its /tftpboot/pxelinux.cfg/c1n01 file written out
on Service01, even though "disjointdhcps" is set to "1". This node
should never be able to boot from Service01 as a result. Why is
Service01 writing out the configs for c1n01?

* Question 2: When *not* using service node pools, can you assign a
compute node to more than one service node? If so, what would the
noderes table look like for that? The documentation I read seems to show
assigning only one service node to a compute node.




Here is my lsdef service:



[root@mn ~]# lsdef service

Object name: service01
    arch=x86_64
    currchain=boot
    currstate=boot
    groups=service,all
    initrd=xcat/centos5.5/x86_64/initrd.img
    interface=eth0
    ip=192.168.1.10
    kcmdline=nofb utf8 ks=http://mn/install/autoinst/service01 ksdevice=eth0 noipv6
    kernel=xcat/centos5.5/x86_64/vmlinuz
    mac=00:50:56:11:11:13
    mgt=ipmi
    monserver=mn
    netboot=pxe
    nfsserver=mn
    nodetype=osi
    os=centos5.5
    postbootscripts=otherpkgs

    postscripts=updaterepos.sh,syslog,remoteshell,syncfiles,updaterepos.sh,servicenode,xcatserver,xcatclient

    power=ipmi
    primarynic=eth0
    profile=service
    provmethod=install
    servicenode=mn
    setupdhcp=1
    setupftp=1
    setupnameserver=1
    setuptftp=1
    status=booted
    statustime=07-14-2011 11:35:13
    xcatmaster=mn




Here is my lsdef for c1n01. As you can see, it is not assigned to a
service node:


[root@mn ~]# lsdef -t node -o c1n01

Object name: c1n01
    arch=x86_64
    chain=runcmd=standby
    currchain=boot
    currstate=install centos5.5-x86_64-compute
    groups=compute,all
    initrd=xcat/centos5.5/x86_64/initrd.img
    installnic=eth0
    interface=eth0
    ip=192.168.1.2
    kcmdline=nofb utf8 ks=http://mn/install/autoinst/c1n01 ksdevice=eth0 noipv6
    kernel=xcat/centos5.5/x86_64/vmlinuz
    mac=00:50:56:11:11:11
    mgt=ipmi
    netboot=pxe
    nfsserver=mn
    nodetype=osi
    ondiscover=nodediscover
    os=centos5.5
    postbootscripts=otherpkgs
    postscripts=updaterepos.sh,syslog,remoteshell,syncfiles
    power=ipmi
    primarynic=eth0
    profile=compute
    provmethod=install
    status=booted
    statustime=07-15-2011 18:32:09




However, when I do a nodeset, both the management node and Service01
respond:

[root@mn ~]# nodeset c1n01 install
c1n01: install centos5.5-x86_64-compute
c1n01: install centos5.5-x86_64-compute




I am running MySQL as the database and the latest xCAT 2.5 on both the
management node and Service01. My compute nodes, management node, and
service node all share the same flat network. As I said, everything on
my setup works as far as assigning nodes to Service01 and booting them
from Service01; I'm just trying to better understand how service node
pools and static assignment of nodes to a service node work and behave
(as mentioned in my questions above).


Thanks so much for your patience and assistance!






On Tue, Jul 19, 2011 at 7:46 AM, Lissa Valletta <lis...@us.ibm.com> wrote:

  Whether nodes are assigned to a specific service node or to a pool of
  service nodes, you always run the commands on the Management Node.
  xCAT will figure out which service node will run the command and
  automatically send it there. You do not run these commands on the
  Service Node yourself in either case. For pools, it will pick the
  first one available.
  I would suggest you try just one service node before using pools: take
  one of your nodes and assign it a single service node.
  Also, maybe we should check your service node setup (see the command
  sketch after this list):

    Run lsdef service.
    Can you ssh to service10 without being prompted for a password?
    When on service10, can you run tabdump site and reach the database
    on the Management Node?
    What database are you running?
    Also give me the output of rpm -qa | grep xCAT from both the Service
    Node and the Management Node.
    Are your nodes connected only to the Service Node, or are they also
    connected to the Management Node? From what you say, you may not be
    using the service node at all.
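
  If it helps, those checks look roughly like this from the command line
  (a sketch; substitute your actual service node name for service10):

      # On the Management Node: passwordless ssh to the service node
      ssh service10 date

      # On the service node: confirm it can reach the xCAT database on
      # the Management Node
      tabdump site

      # On both machines: list the installed xCAT packages
      rpm -qa | grep -i xcat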

  Lissa K. Valletta
  2-3/T12
  Poughkeepsie, NY 12601
  (tie 293) 433-3102





  From:   Dave Barry <dbarry1...@gmail.com>
  To:     xCAT Users Mailing list <xcat-user@lists.sourceforge.net>
  Date:   07/18/2011 08:07 PM
  Subject:        Re: [xcat-user] Confused regarding service node pools and
             defining specific service nodes



  Thanks! I am just trying to understand this part of xCAT more clearly
  so that I can make the correct decisions for my setup. So let me make
  sure I have this straight; please feel free to correct as needed:

       For any nodes that are specifically assigned a service node (as
       opposed to being assigned to a service node pool), their
       management commands (such as nodeset) should be run on the
       Service Node they are assigned to, and not on the Management
       Node. - I did notice that when I set "xcatmaster" in noderes for
       this compute node to its service node, the imgurl for where to
       pull the netboot image was statically assigned to the service
       node, even though the management node could still DHCP boot the
       node.

       When using service node pools, it becomes first-come,
       first-served: any service node will have the imgurl in its
       /tftpboot defined back to itself, so that if that service node
       ends up being the one that PXE boots the compute node, it
       essentially becomes that compute node's master. Management
       commands should be run on the management node rather than on a
       service node in a service node pool architecture. Correct?


  Questions:
       Can you define more than one statically assigned Service Node
       for a compute node? How would the imgurl defined in tftpboot be
       handled in that situation?
       I cannot seem to figure out how the "servicenode" column actually
       comes into play in configuring the various services. I even tried
       putting fake service node hostnames in there that do not exist,
       and I was still able to makedhcp, nodeset, and boot both diskless
       and diskful-install nodes from either the service node or the
       master node with, apparently, no ill effects. What does this
       column actually affect, and when does it come into play in
       failover situations?


  Sorry for the multiple questions, just trying to gather as much
  information as possible =)  I've read the service node pools
  documentation and unfortunately, unless I missed something, it doesn't
  go into as much depth as I am looking for.


  Here are my lsdef outputs. c1n01 is a diskful install, and c1n02 is a
  diskless node. I was able to successfully boot c1n02 from service01,
  which is not defined as that compute node's service node, without an
  issue. I even made sure it was booting from service01 by stopping dhcp
  on the master node. That's what is confusing to me: it would seem that
  if I have service10 and service11 defined as c1n02's service nodes
  (which, by the way, are non-existent service nodes), service01
  wouldn't care about c1n02 and as a result wouldn't create the
  tftpboot/dhcp configuration needed for that node; only service10/11
  would (if they existed). What is the actual purpose of the servicenode
  column in noderes in a service node pool setup?



  Object name: c1n01
      arch=x86_64
      chain=runcmd=standby
      currchain=boot
      currstate=install centos5.5-x86_64-compute
      groups=compute,all
      initrd=xcat/centos5.5/x86_64/initrd.img
      installnic=eth0
      interface=eth0
      ip=192.168.1.2
      kcmdline=nofb utf8 ks=http://mn/install/autoinst/c1n01 ksdevice=eth0 noipv6
      kernel=xcat/centos5.5/x86_64/vmlinuz
      mac=00:50:56:11:11:11
      mgt=ipmi
      netboot=pxe
      nfsserver=mn
      nodetype=osi
      ondiscover=nodediscover
      os=centos5.5
      postbootscripts=otherpkgs
      postscripts=updaterepos.sh,syslog,remoteshell,syncfiles
      power=ipmi
      primarynic=eth0
      profile=compute
      provmethod=install
      servicenode=service10,service11
      status=booted
      statustime=07-15-2011 18:32:09
  [root@mn ~]# lsdef c1n02

  Object name: c1n02
      arch=x86_64
      chain=runcmd=standby
      currchain=boot
      currstate=netboot centos5.5-x86_64-compute
      groups=compute,all
      initrd=xcat/netboot/centos5.5/x86_64/compute/initrd-stateless.gz
      installnic=eth0
      interface=eth0
      ip=192.168.1.3

  
      kcmdline=imgurl=http://!myipfn!/install/netboot/centos5.5/x86_64/compute/rootimg.gz XCAT=!myipfn!:3001 ifname=eth0:00:50:56:11:11:15 netdev=eth0
      kernel=xcat/netboot/centos5.5/x86_64/compute/kernel
      mac=00:50:56:11:11:15
      mgt=ipmi
      netboot=pxe
      nodetype=osi
      ondiscover=nodediscover
      os=centos5.5
      postbootscripts=otherpkgs
      postscripts=updaterepos.sh,syslog,remoteshell,syncfiles
      power=ipmi
      primarynic=eth0
      profile=compute
      provmethod=netboot
      servicenode=service10,service11
      status=booted
      statustime=07-17-2011 20:29:51





  Thanks!



  On Mon, Jul 18, 2011 at 7:31 AM, Lissa Valletta <lis...@us.ibm.com>
  wrote:
   So first, do you really want service node pools, or, as you indicate
   below, do you just want to assign a compute node to a particular
   service node? If you check this link, the two sections describe
   setting up Service Nodes and pools:



  
https://sourceforge.net/apps/mediawiki/xcat/index.php?title=Setting_Up_a_Linux_Hierarchical_Cluster#Assigning_Nodes_to_their_Service_Nodes_.28_updating_the_noderes_table.29



   If you want compute1 to just use the Management Node, you do not have
   to put anything in the noderes table for it; that is the default. Or
   just set the xcatmaster attribute to the Management Node's hostname
   as known by compute1.

   If it is not a pool, then in the noderes table:
   The servicenode attribute for a compute node should be set to the
   hostname of the service node(s) that the management node knows it by.
   The xcatmaster attribute in the noderes table should be set to the
   hostname of the service node that the compute node knows it by.
   Make sure your service node is defined in the servicenode table.
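
   For example (a sketch; sn1 is the hypothetical hostname the
   management node uses for the service node, and sn1-c is the hostname
   the compute node uses for it):

       # Assign compute1 to a single service node (non-pool setup)
       chdef -t node -o compute1 servicenode=sn1 xcatmaster=sn1-c

       # Verify the resulting noderes attributes
       lsdef -t node -o compute1 -i servicenode,xcatmaster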

   For pools, make sure you note this restriction.
   Note: the noderes table's xcatmaster, tftpserver, and nfsserver
   attributes should be blank for any node entry that has the noderes
   servicenode attribute set to a pool of service nodes.
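
   For example (a sketch; sn1 and sn2 are hypothetical pool members, and
   I believe assigning an empty value with chdef blanks an attribute):

       # Pool setup: comma-delimited servicenode list, with xcatmaster,
       # tftpserver, and nfsserver left blank
       chdef -t node -o compute2 servicenode=sn1,sn2 xcatmaster= tftpserver= nfsserver=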


   The command(s) referenced are all the xCAT commands that will
   actually be run on the service node instead of the Management Node,
   because the Service Node is the master of the compute node. Some
   examples are nodeset, nodestat, and xdsh. Some of these commands do
   some work on the Management Node (our preprocess step) before
   executing the real work on the Service Node.

   Also run lsdef compute1 and lsdef compute2; we can check the entire
   setup from that output.

   Lissa K. Valletta
   2-3/T12
   Poughkeepsie, NY 12601
   (tie 293) 433-3102




   From: Dave Barry <dbarry1...@gmail.com>
   To: xcat-user@lists.sourceforge.net
   Date: 07/17/2011 05:45 PM
   Subject: [xcat-user] Confused regarding service node pools and defining
   specific service nodes



   Hello!

   I am attempting to understand how to manually lay out specific
   service nodes that are responsible for specific compute nodes, but I
   am having a hard time doing so. I read the following paragraph:


   To define a list of service nodes that support a set of compute
   node(s), put a comma-delimited list of the service nodes in the
   servicenode attribute of the noderes table. The list will be
   processed left to right, picking the first service node on the list
   to run the command. If that service node is not available, then the
   next service node on the list will be chosen, until the command is
   successful. Errors will be logged. If no service node on the list can
   process the command, then the error will be returned. You can provide
   some load-balancing by assigning your service nodes as we do below.



   I have tried manually defining my node "compute1" to have its
   servicenode (in the noderes table) be my masternode, and then
   defining compute2 to have its servicenode be sn1. However, when I run
   "nodeset compute1 netboot", the command appears to be sent to both
   the master node and the service node. The same happens if I do
   "nodeset compute2 netboot". The /tftpboot files and /install/autoinst
   files are written out on both the masternode and the service node, as
   if xCAT is ignoring the fact that I have separated these two compute
   nodes onto different service nodes. I am successfully able to netboot
   compute1 from service01 without any problems; DHCP from service01
   will happily respond and boot this node even though service01 is not
   assigned as this node's servicenode. (Roughly what I ran is sketched
   below.)
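
   For reference, what I set was roughly the following (reconstructed,
   so the exact commands may have differed slightly):

       # compute1 served by the master node, compute2 by sn1
       chdef -t node -o compute1 servicenode=mn
       chdef -t node -o compute2 servicenode=sn1

       # then write out the netboot configuration
       nodeset compute1 netboot
       nodeset compute2 netboot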


   Am I misunderstanding how this is supposed to work?

   Also: "The list will be processed left to right, picking the first
   service node on the list to run the command. If that service node is not
   available, then
   the next service node on the list will be chosen until the command is
   successful."


   What "command" is this documentation referring to?



   Thanks!
