Since I'm being blamed for global warming anyway (see, globals /are/ bad -
not that I subscribe to that book-bashing theory {:v) )...
My problem just got bigger.
My program revolves around comms. The worker thread was spawned to wait for
incoming bytes. This was my first foray into event-driven comms, and
apparently it may be my last, lol.
Originally the application sent out its message and then sat in a tight loop,
repeatedly asking how many bytes were available and whether they formed a
legal response yet. I can't allow Windows messages to be pumped from the
calling code during this time, because then another button on the UI could
also try to communicate, which would be suicide when we're already in the
middle of a message / response cycle. Nor can I disable the interface during
this call, because these calls go off very regularly (for example, background
comms and foreground logging of data), so the UI would become irritating as
it keeps disabling.
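For the record, the original busy-poll looked roughly like this (a minimal
sketch, not my actual code - IsLegalResponse() and the buffer handling are
hypothetical, and the 1 second timeout is arbitrary):

```cpp
#include <windows.h>

// Hypothetical protocol checker - stands in for my real response validation.
BOOL IsLegalResponse(const BYTE *pbyData, DWORD dwLength);

// Sketch of the original polling loop, assuming hPort is an open,
// non-overlapped serial port handle.
BOOL PollForResponse(HANDLE hPort, BYTE *pbyBuffer, DWORD dwBufferSize)
{
    COMSTAT sStatus ;
    DWORD   dwErrors, dwRead, dwTotal = 0 ;
    DWORD   dwStart = ::GetTickCount();

    while (::GetTickCount() - dwStart < 1000)   // arbitrary 1 s timeout
    {
        // Ask how many bytes are waiting - this is the call being hammered.
        ::ClearCommError(hPort, &dwErrors, &sStatus);
        if (sStatus.cbInQue > 0 && dwTotal < dwBufferSize)
        {
            ::ReadFile(hPort, pbyBuffer + dwTotal,
                       min(sStatus.cbInQue, dwBufferSize - dwTotal),
                       &dwRead, NULL);
            dwTotal += dwRead ;
            if (IsLegalResponse(pbyBuffer, dwTotal))
                return TRUE ;
        }
        // No Sleep() or wait anywhere in here - hence the 100% CPU.
    }
    return FALSE ;
}
```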
The problem I was hearing about (and can see for myself) is 100% CPU being
hogged.
Clearly, hammering the comms to find out when we have a message is
inefficient and hogs the CPU. So I naively assumed this was the problem, and
changed the comms routines to provide a call that blocks the main thread
until SOMETHING has come back, to cut down the number of checking calls. That
code works as written, but it isn't helping stop the CPU hog one little bit.
A little bit of lateral investigation revealed that the CPU hog starts the
very moment I spawn the worker thread, before I even start sending messages!
This troubles me, because I really thought spawning that receive thread and
having it wait until something happened was good karma.
Here's that thread routine I spawn (for those who detest Hungarian notation,
I apologise for your weaknesses {;v) ):
UINT CommsRXThreadProc(LPVOID pvParam)
{
    COMMS_RX_THREAD_DATA_S *psThreadData ;
    DWORD                   dwNumBytes ;
    OVERLAPPED              sNotifyOverlapData ;

    psThreadData = (COMMS_RX_THREAD_DATA_S *)pvParam ;
    if (psThreadData != NULL)
    {
        // Set it up to notify us of incoming data.
        ::SetCommMask(psThreadData->hPortHandle, EV_RXCHAR);
        memset(&sNotifyOverlapData, 0, sizeof(OVERLAPPED) );
        sNotifyOverlapData.hEvent = ::CreateEvent(NULL, FALSE, FALSE, NULL);

        while (!psThreadData->bTerminate)
        {
            dwNumBytes = WaitForRX(*psThreadData, sNotifyOverlapData);
            if (dwNumBytes > 0)
                ReceiveBytes(*psThreadData, dwNumBytes);
        }

        ::CloseHandle(sNotifyOverlapData.hEvent);
        psThreadData->bTerminationComplete = TRUE ;
    }
    return 0 ;
}
Yes, there's a tight little loop in there also, but this is what WaitForRX()
does:
DWORD WaitForRX(COMMS_RX_THREAD_DATA_S& rsThreadData,
                OVERLAPPED& rsNotifyOverlapData)
{
    BOOL    bDone ;
    COMSTAT sStatus ;
    DWORD   dwEvent, dwError, dwDummy, dwNumBytes ;

    dwNumBytes = 0 ;   // Until we have definitely been notified of something
                       // being received.
    if (!::WaitCommEvent(rsThreadData.hPortHandle, &dwEvent,
                         &rsNotifyOverlapData) )
    {
        dwError = ::GetLastError();
        if (dwError == ERROR_IO_PENDING)
        {
            bDone = FALSE ;
            // Wait for completion of WaitCommEvent().
            while (!rsThreadData.bTerminate && !bDone)
            {
                bDone = ::GetOverlappedResult(rsThreadData.hPortHandle,
                                              &rsNotifyOverlapData,
                                              &dwDummy, FALSE);
            }
etc.
So I'm using official Wait...() calls to notify me when something happens
even in the worker thread, so why is it hogging the CPU? Aren't these
Wait...() calls supposed to be efficient? Or does WaitCommEvent() not follow
the same nice notification system that WaitForMultipleObjects() and its
companions do?
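For reference, here's the shape I assumed the wait would effectively take -
the thread parked on the overlapped event until it's signalled, rather than
spinning (just a sketch of my mental model, with hypothetical parameters; it
assumes the port was opened with FILE_FLAG_OVERLAPPED and that rsOverlap
holds a valid event, and the 100 ms timeout is only there so the loop can
notice the terminate flag):

```cpp
#include <windows.h>

// Sketch: wait for EV_RXCHAR by blocking on the overlapped event, then
// return how many bytes are queued. Names and termination flag are
// assumptions for illustration, not my real code.
DWORD WaitForRXBlocking(HANDLE hPort, OVERLAPPED& rsOverlap,
                        volatile BOOL& rbTerminate)
{
    DWORD dwEvent = 0, dwDummy ;

    if (!::WaitCommEvent(hPort, &dwEvent, &rsOverlap) &&
        ::GetLastError() == ERROR_IO_PENDING)
    {
        while (!rbTerminate)
        {
            // The thread sleeps here - no CPU used - until the event is
            // signalled or 100 ms passes.
            DWORD dwWait = ::WaitForSingleObject(rsOverlap.hEvent, 100);
            if (dwWait == WAIT_OBJECT_0)
            {
                ::GetOverlappedResult(hPort, &rsOverlap, &dwDummy, FALSE);
                break ;
            }
        }
    }

    COMSTAT sStatus ;
    DWORD   dwErrors ;
    ::ClearCommError(hPort, &dwErrors, &sStatus);
    return sStatus.cbInQue ;
}
```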
I feel a bit betrayed by the system {:v(
Note that fire-and-forget asynchronous comms is no use to me, as I need to
wait until either I get a valid response to what was sent, or a timeout
occurs. It's a master / slave system (Modbus).
Any advice?
--
Jason Teagle
[EMAIL PROTECTED]
_______________________________________________
msvc mailing list
[email protected]
See http://beginthread.com/mailman/listinfo/msvc_beginthread.com for
subscription changes, and list archive.