Only a careful study of the time slot requests will give you the best
solution; for example, you may find that a significant percentage of
requests are for two or three slots.

So you set up #1 as though you had already received a few 2- and/or
3-slot requests, and the instant such requests arrive, they're routed
into those reserved slots to fill out the table.

But you have to know which types of requests are most common, the time
period when they are common, and how many are likely to arrive in a
given period of time.
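Gathering those statistics can be as simple as a histogram of request sizes. A minimal sketch, assuming requests are represented by the number of slots they ask for (the function name and input representation are mine, for illustration):

```cpp
#include <map>
#include <vector>

// Count how often each request size (number of slots asked for)
// appears in some observation period.
std::map<int, int> sizeHistogram(const std::vector<int>& requestSizes) {
    std::map<int, int> counts;
    for (int s : requestSizes) ++counts[s];
    return counts;
}
```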

I'm seeing this now as a game-playing strategy, like chess or checkers.
Each board position corresponds to a different time slot schedule table
arrangement. Each schedule can be quickly "scored" according to how
many requests are perfectly spaced (and how many are not). Then, using
a depth-first search, we find the best-scoring schedule and choose it.

Just like in a chess game, the search for the best move has to be cut
off at some point in time; you can't just keep on searching. Giving the
schedule generator certain "rules" to follow will also help limit the
number of possible schedules that have to be "scored".

A "rule" might be something like: "the largest request will always have
its first time slot placed in one of the first 5 slots of the
schedule".
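That rule can be checked cheaply before scoring a candidate at all. A sketch, under the same assumed representation (a schedule is a list of requests, each a sorted list of occupied slot indices; the function name and `window` parameter are mine):

```cpp
#include <algorithm>
#include <vector>

using Schedule = std::vector<std::vector<int>>;

// Reject any candidate schedule whose largest request does not start
// within the first `window` slots.
bool passesLargestFirstRule(const Schedule& schedule, int window) {
    if (schedule.empty()) return true;
    auto largest = std::max_element(schedule.begin(), schedule.end(),
        [](const std::vector<int>& a, const std::vector<int>& b) {
            return a.size() < b.size();
        });
    return !largest->empty() && largest->front() < window;
}
```

Any candidate that fails the rule is discarded without ever being scored, which is where the savings come from.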

This can run very quickly in a fast language like C or C++. Slower
languages like Java, Ruby, Python, etc., should not be used for this.
20 to 30 requests per second is a very fast rate, indeed!

If the slots are for 15-minute intervals, I see this working like this:
1) Requests are coming in at a varied rate, perhaps a dozen per second.
2) We continue to receive requests.
3) It is now 5 minutes before the next 15-minute interval. Incoming
requests are now routed to a buffer for later processing; the 5-minute
mark is the cut-off time that allows for the best processing.
4) Schedule tables are simulated and scored to find the best one.
Incoming requests still go to the buffer.
5) The best table schedule has been found in less than 5 minutes.
6) We take the requests from the buffer, then continue to receive
requests directly until 5 minutes before the next interval must be
output, and repeat.
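The buffer/cut-off flow in the steps above can be sketched in a few lines. This is only an illustration of the routing logic (the `Scheduler` type and its member names are mine, not part of any real system); before the cut-off, requests go straight into the working set for the next interval, and after it they are buffered for the one after that:

```cpp
#include <queue>
#include <string>

struct Scheduler {
    std::queue<std::string> working;  // requests for the next interval
    std::queue<std::string> buffer;   // requests held past the cut-off
    bool pastCutoff = false;

    // Route an incoming request based on whether we are past the cut-off.
    void receive(const std::string& req) {
        (pastCutoff ? buffer : working).push(req);
    }

    // Called when the interval's schedule has been output: drain the
    // buffer into the working set and accept direct requests again.
    void startNextInterval() {
        pastCutoff = false;
        while (!buffer.empty()) {
            working.push(buffer.front());
            buffer.pop();
        }
    }
};
```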

The best cut-off time will vary with the number of requests and the
size of the requests: the more time slots a request needs, the tougher
it is to schedule.

Using the alpha-beta algorithm with depth-first search and a hash table
will speed this up quite a bit. If a server could distribute the data
and work across several cores or computers, that would provide another
big benefit, but it would also increase the complexity.
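Since there's only one "player" here, the alpha-beta cutoff really amounts to branch-and-bound: prune any partial schedule that can't possibly beat the best complete one found so far, and use the hash as a transposition table so equivalent partial states aren't re-explored. A toy sketch on a deliberately simplified problem (fit as many requests as possible into a fixed number of slots; the `Packer` type and the problem encoding are my assumptions):

```cpp
#include <algorithm>
#include <map>
#include <utility>
#include <vector>

struct Packer {
    std::vector<int> lengths;  // slots needed by each request
    int totalSlots;            // capacity of the schedule
    int best;                  // best complete score found so far ("alpha")
    std::map<std::pair<int, int>, int> seen;  // (index, used) -> best placed

    void dfs(int i, int used, int placed) {
        best = std::max(best, placed);
        if (i == (int)lengths.size()) return;
        // Branch-and-bound cutoff: even placing every remaining request
        // cannot beat the best complete schedule found so far.
        if (placed + (int)lengths.size() - i <= best) return;
        // Transposition-table-style hash: skip states already reached
        // with an equal or better partial score.
        auto key = std::make_pair(i, used);
        auto it = seen.find(key);
        if (it != seen.end() && it->second >= placed) return;
        seen[key] = placed;
        if (used + lengths[i] <= totalSlots)
            dfs(i + 1, used + lengths[i], placed + 1);  // place request i
        dfs(i + 1, used, placed);                       // skip request i
    }

    int solve() {
        best = 0;
        seen.clear();
        dfs(0, 0, 0);
        return best;
    }
};
```

Both tricks cut the tree independently, so they combine well: the bound prunes hopeless branches and the hash prunes repeated ones.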

That's quite a problem!

Adak

