Re: [slurm-dev] Re: SLURM between two different networks

   So I have to open the necessary ports in the firewall to allow clients
   to reach the slurmctld server... OK, I will try.
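
   For example, on the machine running slurmctld, something along these
   lines might be enough (just a sketch assuming the default SlurmctldPort
   of 6817 and that the submit hosts sit on 1.1.1.0/24; adjust to whatever
   your slurm.conf actually sets):

     # allow the submit network to reach slurmctld (default port 6817/tcp)
     iptables -A INPUT -p tcp -s 1.1.1.0/24 --dport 6817 -j ACCEPT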

   Thanks!

   On 14/06/2017 at 14:01, Carlos Fenoy wrote:

   Hello Daniel,
   If you don't need to run interactive jobs (srun, salloc), there should
   not be any issue. You only need the client packages and the config
   files on the submit hosts. The submit hosts must be able to reach the
   slurmctld host, but they do not need to see the internal cluster.
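
   For illustration, a rough sketch of preparing a submit host (the
   package name, the paths and the "controller" hostname below are only
   placeholders; they vary by distribution and site):

     # install only the client commands (sbatch, squeue, ...); package name varies
     yum install slurm                    # or: apt-get install slurm-client
     # use exactly the same slurm.conf and munge key as the rest of the cluster
     scp controller:/etc/slurm/slurm.conf /etc/slurm/slurm.conf
     scp controller:/etc/munge/munge.key  /etc/munge/munge.key
     systemctl restart munge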

   Regards,
   Carlos
   On Wed, Jun 14, 2017 at 12:57 PM, Daniel Ruiz Molina
   <daniel.r...@caos.uab.es> wrote:


     Hi Miguel,

      No, I don't need to connect to other SLURM clusters, so I have only
      one slurmctld. What I need is to connect some computers from the
      "1.1.1.0" network to an HPC cluster in the "1.1.2.0" network. From
      those computers I only need to submit jobs that will be executed
      inside the cluster. So my question (or doubt) is what I need to
      share between the two networks to allow submitting jobs from one
      side to the other.

      On 12/06/2017 at 7:47, Miguel Gila wrote:

        Hello Daniel,

       My replies inline below.


          On 9 Jun 2017, at 18:21, Daniel Ruiz Molina
          <daniel.r...@caos.uab.es> wrote:


         Hello,

          I have this scenario and I need to know whether it would be
          possible for SLURM to work with it.

          I have some computers that will act as SLURM "submit" hosts;
          in other words, those hosts won't run batch or interactive
          jobs. Those computers are configured on a network such as
          1.1.1.0/24, for example.

        Do I understand correctly that submit host = login host here?

          On the other side of my scenario, I manage an HPC cluster
          configured on a network like 1.1.2.0/24. I can connect via
          SSH from the submit computers to the HPC cluster server with
          no problem.

          I need to connect my submit hosts to my entire HPC cluster so
          that people can submit jobs from those hosts and then have
          those jobs executed in the cluster. From the server, I will
          share the "/slurm" folder via NFS with all the compute nodes
          inside the cluster, but I could also share "/slurm" with the
          submit hosts (opening their ports in iptables). I suppose I
          need to configure all my nodes in "slurm.conf": HPC compute
          nodes and submit hosts.
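
          For example, something like this in "slurm.conf" (the node
          names and sizes below are only placeholders to show what I
          mean, not my real configuration):

            # compute nodes inside the 1.1.2.0/24 cluster network
            NodeName=node[01-10] CPUs=16 State=UNKNOWN
            PartitionName=batch Nodes=node[01-10] Default=YES State=UP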

        As far as I know, the only thing you need to physically share
        between the two slurmctlds is the StateSaveLocation, if you
        want failover. Other than that you need to:
        - make sure they all use the same Munge key, if you’re using
        Munge.
        - make sure all systems are reachable on the configured Slurm
        ports (see the quick checks sketched below).
        - make sure all systems have the same defined users.
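
        As a quick sanity check from a submit host, something along
        these lines (the hostnames below are placeholders; 6817 and
        6818 are only the default SlurmctldPort and SlurmdPort, which
        your slurm.conf may override):

          # a Munge credential created locally must decode on the remote side
          munge -n | ssh slurmctld-host unmunge
          # the Slurm ports must be reachable from the submit host
          nc -zv slurmctld-host 6817
          nc -zv compute-node 6818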

        There is no concept of a ‘routing node’ as in other scheduling
        systems; submission hosts need to be able to reach the compute
        nodes and the control daemons. If that is not an option, it is
        possible to have ssh wrappers here and there… but it becomes
        ugly very fast.

          Will SLURM run if it is configured for both networks?

       As said, yes, if you have all-to-all routing enabled on the
       configured ports.
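
        Once everything is in place, a quick way to exercise the path
        from a submit host is something like:

          scontrol ping               # does slurmctld answer?
          sbatch --wrap='hostname'    # submit a trivial batch job
          squeue -u $USER             # is it queued/running?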

         Could anybody answer me?

         Thanks.

       Hope this helps.

       Cheers,
       Miguel




   --
   Carles Fenoy

   


