On Friday 04 February 2005 3:13 am, ben syverson wrote:

> I'm curious how the "pros" would approach an interesting system design
> problem I'm facing. I'm building a system which keeps track of users'
> movements through a collection of information (for the sake of
> argument, a Wiki). For example, if John moves from the "dinosaur" page
> to the "bird" page, the system logs it -- but only once a day per
> connection between nodes per user. That is, if Jane then travels from
> "dinosaur" to "bird," it will log it, but if "John" travels moves back
> to "dinosaur" from "bird," it won't be logged. The result is a log of
> every unique connection made by every user that day.

What are you doing with the data once you have it? Is there any reason that it 
needs to be 'live'? If not, you could simply add the username as a field in 
the logfile and post-process the logs (assuming you trust the referer field 
sufficiently). That removes all of that load from the webserver.
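
For what it's worth, a rough sketch of what that nightly pass could look 
like (this assumes a combined-format log with the username tacked on as a 
trailing field -- the field layout here is an assumption, not your actual 
setup):

  #!/usr/bin/perl
  use strict;
  use warnings;

  # Emit one line per unique user/from-page/to-page combination per day.
  my %seen;
  while (my $line = <>) {
      # request path, referer, and the trailing username field
      next unless $line =~
          m{"(?:GET|POST) (\S+) [^"]*" \d+ \S+ "([^"]*)" "[^"]*" (\S+)$};
      my ($to, $referer, $user) = ($1, $2, $3);
      next if $referer eq '-';              # no referer, no traversal
      my ($from) = $referer =~ m{^https?://[^/]+(/\S*)};
      next unless defined $from;
      next if $seen{"$user $from $to"}++;   # already logged today
      print "$user $from $to\n";            # or queue a DB insert here
  }

Run it once a day over the rotated log and push the output into MySQL in 
one batch, rather than a query per hit.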

> My initial thoughts on how to improve the system were to relieve
> mod_perl of having to serve the files, and instead write a perl script
> that would run daily to analyze the day's thttpd log files, and then
> update the database. However, certain factors (including the need to
> store user data in cookies, which have to be checked against MySQL)
> make this impossible.

Why does storing user data in cookies prevent you from logging enough to 
identify the user again later? Or are you storing something needed to 
reconstruct the trace that you can't get any other way?
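
If the cookie just carries a session id, and that id makes it into the log, 
then the nightly script can resolve it back to a user after the fact, and 
MySQL never gets touched per request. A sketch with DBI (the database, 
table, and column names are all placeholders of mine):

  use DBI;

  # Resolve a logged session id to a username offline.
  my $dbh = DBI->connect('dbi:mysql:wiki', 'wikiuser', 'secret',
                         { RaiseError => 1 });
  my $sth = $dbh->prepare(
      'SELECT username FROM sessions WHERE session_id = ?');

  sub user_for {
      my ($session_id) = @_;
      $sth->execute($session_id);
      my ($username) = $sth->fetchrow_array;
      $sth->finish;
      return $username;
  }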

-- 
"Debugging is twice as hard as writing the code in the first place.
 Therefore, if you write the code as cleverly as possible, you are,
 by definition, not smart enough to debug it."
- Brian W. Kernighan
