Not sure it's that realistic or advisable for many, to be honest. It's
really only a temporary solution; I think it's a good general rule of
thumb, but I don't think it's an optimal one. There are always X
variable situations you can think of arising, and then typically one
more will happen than you thought of.

For example, 32/32/16/16 seems pretty straightforward, but then what
happens if for some odd reason a 16-slot server goes nuts and spirals
out at 100% CPU (not so common with Valve games, fortunately, but I've
seen it happen; map changes etc. may also be a factor), while at the
same time the 32-slot server on the other CPU is sitting empty?
Naturally, the ideal is that you don't have a process out of control
for too long, but in the interim you really want the 32 to switch over
CPUs. The more CPUs/cores/combinations you bring into the equation, the
trickier it gets.
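
To make that concrete (purely my own sketch, nothing Valve ships): with
hard affinity, about the best you can do is a watchdog that spots a
pinned process spinning flat-out and hands it back to the scheduler.
Assumes Windows and a PID you already know; the 90%/5-second thresholds
are made-up numbers:

    #include <windows.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Total CPU time (kernel + user) consumed so far, in 100ns units. */
    static ULONGLONG cpu_time_100ns(HANDLE h)
    {
        FILETIME created, exited, kernel, user;
        ULARGE_INTEGER k, u;
        GetProcessTimes(h, &created, &exited, &kernel, &user);
        k.LowPart = kernel.dwLowDateTime; k.HighPart = kernel.dwHighDateTime;
        u.LowPart = user.dwLowDateTime;   u.HighPart = user.dwHighDateTime;
        return k.QuadPart + u.QuadPart;
    }

    int main(int argc, char **argv)
    {
        DWORD pid = (DWORD)atoi(argv[1]);
        HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION |
                               PROCESS_SET_INFORMATION, FALSE, pid);
        DWORD_PTR procMask, sysMask;

        if (!h)
            return 1;
        GetProcessAffinityMask(h, &procMask, &sysMask);
        for (;;) {
            ULONGLONG before = cpu_time_100ns(h);
            Sleep(5000);
            /* 5s == 50,000,000 x 100ns; burning more than 90% of one
             * core for the whole interval looks like a runaway. */
            if (cpu_time_100ns(h) - before > 45000000ULL &&
                procMask != sysMask) {
                SetProcessAffinityMask(h, sysMask);   /* un-pin it */
                procMask = sysMask;
                printf("pid %lu un-pinned\n", (unsigned long)pid);
            }
        }
    }

The point being that the kernel's scheduler already does this kind of
rebalancing for free, which is why I'd let it handle the unforeseen
cases.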

I'm no expert on the scheduler by any stretch of the imagination; I
just think that typically there's a case for both, but ultimately the
scheduler will do a better job in _most_ circumstances, especially
unforeseen ones. That said, I have no illusions that there aren't some
situations where it's preferable to set affinity manually according to
your own rules, and that's where that solution comes into its own.


On 4/10/06, Whisper <[EMAIL PROTECTED]> wrote:
>
> Surely it is not difficult to program into your Game Server Management
> System a method for:
>
> a) determining the maximum number of HLDS/SRCDS processes allowed to
> be spawned on a physical CPU/CPU core;
>
> b) if a CPU/core already has an HLDS/SRCDS process running on it,
> manually (via your Game Server Management System) setting the process's
> affinity to use the next CPU/CPU core;
>
> c) wash, rinse, repeat (see the sketch below).
>
> Hell, I am surprised many of you are not already doing this, since not
> seeing this problem coming a mile off is naive. You would do it if for
> no other reason than this: if you have 2 x 32-player SRCDS and 2 x
> 16-player SRCDS on a Dual Xeon, for example, you don't want the two
> 32-player SRCDS processes sitting on one CPU and the two 16-player
> SRCDS processes on the other. And you all know, being humans rather
> than computers, that the obvious optimal solution is to ensure that
> each CPU gets one 32-player SRCDS and one 16-player SRCDS, and bugger
> off what Windows or SRCDS thinks about how they should be allocated.
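>
> A minimal sketch of (a)-(c), assuming Windows and that the management
> system launches the servers itself; launch_pinned and its arguments
> are my own illustrative names, not from any real Game Server
> Management System:
>
>     #include <windows.h>
>
>     static int nextCpu = 0;            /* rotates across launches */
>
>     /* Start one server suspended, pin it to the next CPU in the
>      * rotation, then let it run: steps (b) and (c).  The cap from
>      * (a) would just be a per-CPU counter, omitted here. */
>     BOOL launch_pinned(char *cmdline, int cpuCount)
>     {
>         STARTUPINFOA si = { sizeof(si) };
>         PROCESS_INFORMATION pi;
>
>         if (!CreateProcessA(NULL, cmdline, NULL, NULL, FALSE,
>                             CREATE_SUSPENDED, NULL, NULL, &si, &pi))
>             return FALSE;
>
>         SetProcessAffinityMask(pi.hProcess, (DWORD_PTR)1 << nextCpu);
>         nextCpu = (nextCpu + 1) % cpuCount;
>
>         ResumeThread(pi.hThread);
>         CloseHandle(pi.hThread);
>         CloseHandle(pi.hProcess);
>         return TRUE;
>     }
>
> Launch them alternately (32, 16, 32, 16) on a Dual Xeon and each CPU
> ends up with one 32-player and one 16-player SRCDS, exactly the layout
> described above.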
>
> I don't know, maybe I am overthinking the problem, or maybe some of
> you are overthinking it, but the solution does not seem all that
> difficult to achieve.
>
> On 4/10/06, kyle <[EMAIL PROTECTED]> wrote:
> >
> > The small number of people who code their own hacks is not what VAC
> > was intended for. It was intended to stop publicly available hacks
> > that spread like wildfire on the Internet. And even if you code your
> > own DirectX hooks, they can and will eventually be detected by VAC.
> > Either way you look at it, even if VAC stops one hacker it's a good
> > thing.
> >
> > -------Original Message-------
> >
> > From: Stuart Stegall
> > Date: 04/09/06 19:50:00
> > To: hlds@list.valvesoftware.com
> > Subject: Re: [hlds] CS1.6 Servers all bound to 1 CPU?
> >
> > Try writing your own DirectX hooks and then see how long it takes
> > before you get caught. Of course, publicly available hacks are
> > quickly detected.
> >
> > kyle wrote:
> > > VAC works very well on HL1 mods. I have tested it with 7 different
> > > accounts, and they were all banned within a month of using the
> > > XYZ, flautz, and asuswall hacks. VAC also caught 3 different types
> > > of aimbots. My last test was about a month ago: I went into my
> > > VAC-secured server with an aimbot turned on using one account, and
> > > then entered with another account with a wallhack on. Almost one
> > > month to the day later, both of those accounts were banned.
> > >
> > > VAC is working, and working well. I just wish they would ban
> > > instantly; that would stop 97% of all hacking.
> > >
> > > -------Original Message-------
> > >
> > > From: Wayne
> > > Date: 04/09/06 17:18:28
> > > To: hlds@list.valvesoftware.com
> > > Subject: RE: [hlds] CS1.6 Servers all bound to 1 CPU?
> > >
> > > I'd go as far as to say not just questionable, but utterly
> > > pointless, having VAC on 1.6. Clearly, IMO, Valve have dropped VAC
> > > updating for 1.6, and with good reason: the game is old.
> > > The only place you can have a safe game of 1.6 is at LANs where
> > > the machines are provided and you are unable to load anything onto
> > > them.
> > >
> > > -----Original Message-----
> > > From: [EMAIL PROTECTED]
> > > [mailto:[EMAIL PROTECTED] On Behalf Of David Harrison
> > > Sent: Monday, 10 April 2006 9:52 AM
> > > To: hlds@list.valvesoftware.com
> > > Subject: Re: [hlds] CS1.6 Servers all bound to 1 CPU?
> > >
> > > I wholeheartedly agree with Steve here; we're managing too many
> > > servers for setting processor affinity ourselves to be a really
> > > workable solution.
> > >
> > > I'd personally rather lose VAC and not have to worry about this
> > > issue until a fix is in place. Based on the comments I see around
> > > the traps, VAC's effectiveness in CS 1.6 is questionable anyway,
> > > so I'd rather the problem not be mine to fix.
> > >
> > > -- dave
> > >
> > > ---- Original Message ----
> > > From: "Dustin Tuft" <[EMAIL PROTECTED]>
> > > To: <hlds@list.valvesoftware.com>
> > > Sent: Monday, April 10, 2006 9:07 AM
> > > Subject: Re: [hlds] CS1.6 Servers all bound to 1 CPU?
> > >
> > >
> > >> Well, I guess you have a limited imagination. The first place I
> > >> would start is the process where an admin clicks to start a new
> > >> server or shut one down. I assume you have an existing structure
> > >> in place that manages this now? Or maybe I gave you too much
> > >> credit. If you do have a structure in place, then making it a
> > >> smarter process is easy: extend your existing function on that
> > >> given server to check the load on all CPUs and set affinity, or
> > >> even to say "not this server, go to the next server in the list".
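> > >>
> > >> A sketch of that load check, assuming the Windows PDH counters
> > >> (my choice for the example, not necessarily what anyone here
> > >> actually uses); link with pdh.lib:
> > >>
> > >>     #include <windows.h>
> > >>     #include <pdh.h>
> > >>     #include <stdio.h>
> > >>
> > >>     /* Sample "% Processor Time" for each CPU and return the
> > >>      * least-loaded one, i.e. where the next server gets pinned. */
> > >>     int least_loaded_cpu(int cpuCount)
> > >>     {
> > >>         PDH_HQUERY query;
> > >>         PDH_HCOUNTER counters[64];
> > >>         char path[64];
> > >>         int i, best = 0;
> > >>         double bestLoad = 101.0;
> > >>
> > >>         PdhOpenQuery(NULL, 0, &query);
> > >>         for (i = 0; i < cpuCount; i++) {
> > >>             sprintf(path, "\\Processor(%d)\\%% Processor Time", i);
> > >>             PdhAddCounterA(query, path, 0, &counters[i]);
> > >>         }
> > >>
> > >>         /* Rate counters need two samples a moment apart. */
> > >>         PdhCollectQueryData(query);
> > >>         Sleep(1000);
> > >>         PdhCollectQueryData(query);
> > >>
> > >>         for (i = 0; i < cpuCount; i++) {
> > >>             PDH_FMT_COUNTERVALUE v;
> > >>             PdhGetFormattedCounterValue(counters[i], PDH_FMT_DOUBLE,
> > >>                                         NULL, &v);
> > >>             if (v.doubleValue < bestLoad) {
> > >>                 bestLoad = v.doubleValue;
> > >>                 best = i;
> > >>             }
> > >>         }
> > >>         PdhCloseQuery(query);
> > >>         return best;
> > >>     }
> > >>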
> > >> I didn't say anything about recoding the scheduler for Windows. I
> > >> only expressed that I have felt the pain of multi-threaded
> > >> applications with critical events that depend on proper timing;
> > >> in most cases you have to have one threaded process monitor all
> > >> the threads to get anywhere close to correct timing, but that
> > >> again could lead to lag here, lag being the reality that you're
> > >> waiting for something before you move on...
> > >>
> > >> Dustin Tuft
> > >>
> > >>
> > >>
> > >>> From: "Steven Hartland" <[EMAIL PROTECTED]>
> > >>> Reply-To: hlds@list.valvesoftware.com
> > >>> To: <hlds@list.valvesoftware.com>
> > >>> Subject: Re: [hlds] CS1.6 Servers all bound to 1 CPU?
> > >>> Date: Sun, 9 Apr 2006 22:39:07 +0100
> > >>>
> > >>> If you know how to statically determine a dynamic process, then
> > >>> please let us all know, as I'm sure that would be a great
> > >>> breakthrough in computer science.
> > >>>
> > >>> Your basic solution of round robin would be unworkable, as there
> > >>> are too many dynamic variables. You may be running your two
> > >>> servers quite happily, but we are talking about many hundreds of
> > >>> servers across many hundreds of machines of varying spec, running
> > >>> multiple different games, and all of it can change at any second,
> > >>> triggered by an admin's click.
> > >>>
> > >>> It's no surprise that one of the most difficult tasks in an OS
> > >>> is the scheduler, and as such I don't believe reinventing the
> > >>> wheel to satisfy the temporary needs of one game is something we
> > >>> should be wasting resources on, do you?
> > >>>
> > >>>    Steve
> > >>>
> > >>> ----- Original Message -----
> > >>> From: "Dustin Tuft"
> > >>>
> > >>>> Just as a suggestion, you might try re-evaluating your process.
> > >>>> It seems clear that the way you're creating your servers does
> > >>>> not allow flexible programming to control affinity. Since
> > >>>> you're way outside what I am sure Valve would consider a
> > >>>> standard deployment, it might be faster for you to build a
> > >>>> smarter dynamic process: possibly a program that tracks the
> > >>>> different servers and sets affinity based on the current CPU
> > >>>> load. Then you could round-robin the servers as they start up,
> > >>>> taking the CPU with the lowest load. You might even think about
> > >>>> leaving CPU0 out of the mix so the Windows OS doesn't hit any
> > >>>> complications or get overloaded during peak game play.
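> > >>>>
> > >>>> Leaving CPU0 out is just mask arithmetic. A sketch, assuming
> > >>>> Windows affinity masks:
> > >>>>
> > >>>>     DWORD_PTR procMask, sysMask;
> > >>>>     GetProcessAffinityMask(GetCurrentProcess(),
> > >>>>                            &procMask, &sysMask);
> > >>>>     /* clear bit 0 so the pool of pinnable CPUs excludes CPU0 */
> > >>>>     DWORD_PTR pool = sysMask & ~(DWORD_PTR)1;
> > >>>>
> > >>>> The round-robin / lowest-load pick then only considers bits set
> > >>>> in pool.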
> > >>>>
> > >>>> I have been running a dual-CPU box since the beginning of DoD,
> > >>>> and I have always found that setting the affinity greatly
> > >>>> increases the game server's stability, even before they patched
> > >>>> the server to set affinity by itself.
> > >>>>
> > >>>> CPUs get busy, and when your buses are busy with I/O a CPU can
> > >>>> fall behind; there is nothing to correct that, not even a
> > >>>> better-built timer. If you take in the whole system, all the
> > >>>> way out to the clients' hard drives over the network, chances
> > >>>> are your server CPUs spend more time waiting to process data
> > >>>> than anything else.
> > >>>>

_______________________________________________
To unsubscribe, edit your list preferences, or view the list archives,
please visit: http://list.valvesoftware.com/mailman/listinfo/hlds
