Hi Reuti,

Please ignore my previous email. I am getting closer to my goal by
re-compiling the source. The compile finished successfully, except for
missing PAM (as I am not the root user, I cannot install PAM at the
moment). However, it seems to work: sge_execd is communicating with the
64bit qmaster now, and I am able to submit a 'sleep' job to the 64bit
exec host.
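
For reference, the smoke test was essentially a binary 'sleep' job (the
duration is arbitrary):

  qsub -b y /bin/sleep 60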

I will keep testing and post an update later.

I really appreciate your help and guidance!!!

Edgy

On Thu, Dec 17, 2015 at 11:20 PM, Steven Du <[email protected]> wrote:

> Hi Reuti,
>
> Thank you very much for explaining and pointing to the valuable learning
> resource.
>
> I ran into a very weird issue.
>
> My running env:
>
> One 64bit qmaster and a few 64bit sge_execd hosts running well, plus one
> 32bit sge_execd on 32bit SLES10SP4.
>
> The 32bit sge_execd works fine when it joins an all-32bit SGE
> environment. But when I start the same binary pointed at the 64bit
> qmaster, a segmentation fault happens. Even qping shows the same
> pattern: it works when collecting info from the 32bit qmaster, but fails
> when connecting to the 64bit qmaster for some reason.
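>
> For reference, the checks I ran look like this (host names and port are
> placeholders for our real values):
>
>   qping -info master32 6444 qmaster 1   # works against the 32bit qmaster
>   qping -info master64 6444 qmaster 1   # fails against the 64bit qmaster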
>
> I am so confused. :( Any idea?
>
> Thanks,
> Edgy
>
>
> On Thu, Dec 17, 2015 at 7:46 AM, Reuti <[email protected]> wrote:
>
>>
>> > On 17.12.2015 at 13:19, Steven Du <[email protected]> wrote:
>> >
>> > Hi Reuti/Joshua,
>> >
>> > I tried but failed. Here is what I wonder (I am using version 2011.11p1):
>> >
>> > 1. Do all grid members have to NFS-share the spooling dir, such as
>> > $SGE_CELL/common? Otherwise the client cannot be started. I doubted it,
>> > but once I used a different name, sge_execd could not be started.
>>
>> You would at least need a copy of it on all nodes. But this is not
>> related to different architectures.
>>
>> https://arc.liv.ac.uk/SGE/howto/nfsreduce.html
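>>
>> A minimal sketch of putting a local copy onto each node (the paths
>> assume a default layout and are only an example):
>>
>>   # on the exec host: mirror the cell's common dir from the master
>>   rsync -a master:/opt/sge/default/common/ /opt/sge/default/common/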
>>
>>
>> > 2. This is related to the 1st question. The bootstrap config needs the
>> > qmaster spooling dir, but without an NFS share sge_execd cannot access
>> > the qmaster spooling dir. So why do we need to configure the qmaster
>> > spooling dir at all?
>>
>> Not all information in the files in $SGE_ROOT/default/common is read by
>> all daemons. The sgeexecd will never access the spooling dir of the
>> qmaster; it is set there only so that the sgemaster knows where to look.
>> OTOH the binary_path setting targets both daemons.
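>>
>> For illustration, the relevant entries in $SGE_ROOT/default/common/bootstrap
>> look roughly like this (the paths are examples, not your real values):
>>
>>   binary_path        /opt/sge/bin
>>   qmaster_spool_dir  /opt/sge/default/spool/qmaster
>>   spooling_method    classic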
>>
>>
>> > 3. I can start the sge_execd daemon and it listens on a port, but it
>> > cannot communicate with the qmaster, even though I configured the
>> > qmaster port and the act_qmaster server name.
>> >
>> > 4. When I started sge_execd and ran any qstat command, I got a
>> > "Segmentation fault" for some reason. I will follow up on this later.
>>
>> You compiled this version on your own? On a platform with the same
>> architecture as the final nodes will use? Otherwise the compiler may need
>> settings so that it does not use the latest CPU features.
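>>
>> For example, with gcc something along these lines (where to put the flags
>> depends on your build setup, e.g. aimk.site; the values are only an
>> assumption):
>>
>>   # prefer generic 32bit code over the build host's newest CPU features
>>   CFLAGS="-O2 -m32 -mtune=generic"
>>   ./aimk -only-core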
>>
>> -- Reuti
>>
>>
>> > Thanks,
>> > Edgy
>> >
>> > On Tue, Dec 15, 2015 at 11:38 PM, Steven Du <[email protected]> wrote:
>> > Hi Reuti, Joshua,
>> >
>> > Thank you very much, gurus!
>> >
>> > That is really good news!!! Actually, that is exactly the situation we
>> > are facing right now. We need the SGE master on em64t, a submission host
>> > on x86, and execution hosts on both x86 and x64 (em64t).
>> >
>> > We used to share everything via NFS between the SGE master and the exec
>> > clients, except for the exec clients' spooling dirs. Since all hosts were
>> > x86, all of them, including master and clients, shared one bootstrap
>> > file. But now we are going to add some x64 boxes to the grid, which is
>> > why I need to find a way to make this work.
>> >
>> > Just out of curiosity: I could not find any documents or web pages about
>> > building a hybrid environment, so I doubted my solution even though I
>> > thought it should work. :)
>> >
>> > Thank you again! You saved us a lot of time and effort!
>> >
>> > I will work on this over the next few days and report back on what I find.
>> >
>> > Edgy
>> >
>> > On Tue, Dec 15, 2015 at 6:24 AM, Reuti <[email protected]> wrote:
>> >
>> > > On 15.12.2015 at 05:45, Steven Du <[email protected]> wrote:
>> > >
>> > > Thank you very much!
>> > >
>> > > Does this mean there is no issue with the SGE master managing x64 and
>> > > x86 clients? Is that right?
>> >
>> > Yep, you can even mix different operating systems and throw in some AIX
>> > or FreeBSD clients. To ease the creation of uniform job scripts which run
>> > independently of the machine they are executed on, one can use the
>> > environment variable $ARC and organize the binaries and/or their
>> > containing directories accordingly:
>> >
>> > #!/bin/sh
>> > # Set up any job parameters
>> > FOO=BAZ
>> > # Execute the architecture-specific binary ($ARC is set by SGE)
>> > /opt/software/$ARC/foobar
>> >
>> > Having directories /opt/software/lx-amd64 and /opt/software/lx-x86, you
>> > will always get the correct binary. I even use this to distinguish
>> > between amd64 and em64t to get the correct binaries for each type of
>> > CPU, although the job script stays the same.
>> >
>> > -- Reuti
>> >
>> >
>> > > I will try to setup experimental environment.
>> > >
>> > > Will update later once I prove.
>> > >
>> > > Thanks,
>> > > Edgy
>> > >
>> > > On Mon, Dec 14, 2015 at 11:28 PM, Joshua Baker-LePain
>> > > <[email protected]> wrote:
>> > > On Mon, 14 Dec 2015 at 4:53pm, Steven Du wrote
>> > >
>> > > I wonder if I am able to build a hybrid SGE computing environment.
>> > >
>> > > The idea is to run one SGE master on an Intel x64 host with RHEL, and
>> > > SGE execd on both 64bit and 32bit RHEL, and then be able to submit my
>> > > jobs to any exec hosts I like, e.g. some jobs go to 32bit hosts only,
>> > > others to 64bit hosts only.
>> > >
>> > > Based on my understanding, it should work.
>> > >
>> > > I did a Google search but could not find any article about it. Do you
>> > > know of one? Or if anyone has a similar running environment, please
>> > > share your thoughts! I would very much appreciate your help.
>> > >
>> > > Yep, this is not a problem.  I run such an environment.  Be sure that
>> > > users submit jobs with the proper "-l arch=" request so that they go
>> > > to the right architecture.
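>> > >
>> > > For example (the job script name is just a placeholder):
>> > >
>> > >   qsub -l arch=lx-amd64 myjob.sh   # run on a 64bit host
>> > >   qsub -l arch=lx-x86 myjob.sh     # run on a 32bit host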
>> > >
>> > > --
>> > > Joshua Baker-LePain
>> > > QB3 Shared Cluster Sysadmin
>> > > UCSF
>> > >
>> >
>> >
>> >
>>
>>
>
_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users
