I can't agree that such a thing would be a good approach in a commercial desktop application environment. I'd never deploy something like that to millions of graphic designers. I want everything in a nice, tidy black box that the average Joe is incredibly unlikely to screw up. I have no idea what Google Chrome does and can't comment on it; I don't use the app.

Beyond that, I don't see how that approach solves the problem you point out. You still have concurrent access to shared data structures, and you still have to serialize that access. The only thing you gain from that design over a threaded design is an extra degree of resiliency, in that a crashed task won't take down the whole app. On the downside, you take on the extra hassle and complication of IPC.

The way I guard against a single task bringing the app down is that I religiously keep code exception-safe, check for NULLs, and use shared_ptrs, and I expect the same from my staff. Another way to guard the app is to minimize the use of mutexes: instead of blocking threads, you keep your tasks very small and focused, and you set up execution dependencies between tasks, so task B can be made not to run until task A has completed. Finally, the primary shared data structure is a SQLite in-memory store which is wrapped in C++ code that handles the dirty details of serializing transactions on the DB.
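
Here is a minimal sketch of that kind of wrapper, just to show the shape of the idea. The class and method names are invented for this message rather than taken from the actual code, and the real thing does more, but the point is that the one lock lives inside the wrapper and every transaction either commits or rolls back, so a misbehaving task can't leave the store half-updated:

#include <sqlite3.h>
#include <boost/thread/mutex.hpp>
#include <stdexcept>
#include <string>

class InMemoryStore
{
public:
    InMemoryStore() : db_(0)
    {
        if (sqlite3_open(":memory:", &db_) != SQLITE_OK)
        {
            sqlite3_close(db_);  // handle may be allocated even on failure
            throw std::runtime_error("could not open in-memory DB");
        }
    }

    ~InMemoryStore() { sqlite3_close(db_); }

    // Run a batch of SQL as one transaction.  The wrapper owns the only
    // lock that ever touches the store, so callers never see a
    // half-applied transaction and never deal with locking themselves.
    void runTransaction(const std::string &sql)
    {
        boost::mutex::scoped_lock lock(mutex_);
        exec("BEGIN");
        try
        {
            exec(sql);
            exec("COMMIT");
        }
        catch (...)
        {
            exec("ROLLBACK");
            throw;
        }
    }

private:
    void exec(const std::string &sql)
    {
        char *err = 0;
        if (sqlite3_exec(db_, sql.c_str(), 0, 0, &err) != SQLITE_OK)
        {
            std::string msg(err ? err : "sqlite error");
            sqlite3_free(err);
            throw std::runtime_error(msg);
        }
    }

    InMemoryStore(const InMemoryStore &);             // non-copyable
    InMemoryStore &operator=(const InMemoryStore &);

    sqlite3 *db_;
    boost::mutex mutex_;
};

Callers just hand runTransaction() their SQL; they never see the mutex or the BEGIN/COMMIT/ROLLBACK dance.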


Handling the limited 32-bit VM space was indeed a challenge. I had to come up with a scheme to throttle the task queue once memory consumption reached a certain level.
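
For what it's worth, that kind of throttling boils down to metering task submission against a memory budget. A stripped-down sketch of one way to do it (the names here are invented for illustration, not lifted from the actual code):

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>
#include <cstddef>

class MemoryThrottle
{
public:
    explicit MemoryThrottle(std::size_t limitBytes)
        : limit_(limitBytes), inUse_(0) {}

    // Called before a task is queued, with an estimate of the memory the
    // task will need.  Blocks the producer until earlier tasks have
    // released enough to stay under the limit.  A request bigger than the
    // whole budget is still allowed to run alone rather than deadlock.
    void acquire(std::size_t bytes)
    {
        boost::mutex::scoped_lock lock(mutex_);
        while (inUse_ != 0 && inUse_ + bytes > limit_)
            spaceFreed_.wait(lock);
        inUse_ += bytes;
    }

    // Called when a task finishes and its buffers have been freed.
    void release(std::size_t bytes)
    {
        boost::mutex::scoped_lock lock(mutex_);
        inUse_ -= bytes;
        spaceFreed_.notify_all();
    }

private:
    std::size_t limit_;
    std::size_t inUse_;
    boost::mutex mutex_;
    boost::condition_variable spaceFreed_;
};

The producer calls acquire() with a rough estimate before handing a task to the queue, and the task's completion handler calls release() once its buffers are gone, which wakes any blocked producers.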


As for the quality of staff members, that is always a challenge. All I  
can do about that is recruit and retain people who are talented and  
can write solid code.

-James


On Apr 30, 2009, at 4:37 PM, Roger Binns wrote:

> James Gregurich wrote:
>> So, you suggest I should build a commercial desktop application (for
>> processing print-industry files and presenting them in a UI) in such
>> a way that it spawns multiple processes and communicates with them
>> via the filesystem or IPC APIs?
>
> You obviously know more about your application, APIs, libraries etc.,
> but it does sound like it would actually be a very good approach.  And
> of course you can also spawn processes on other machines too should
> the need arise.  The description sounds not too different from what
> Google Chrome does.
>
>> Why would I want to go to that level of complexity in an
>> uncontrollable environment (i.e. a consumer desktop computer) when I
>> can just use NSOperation, boost::thread, and boost::mutex to build a
>> single-process solution that shares data in a normal way between  
>> tasks?
>
> Because while you are a perfect programming machine, not everyone else
> who will touch the code in the future is.  As an example, if one mutex
> call is left out or the wrong one acquired by programming accident,
> how long would it take to know about it and fix it?
>
> If you have to run in a 32-bit address space then that also limits how
> much you can do in one process.  Do you even know how large the
> maximum stack size is per thread, and will other coders never exceed
> that?  [Don't answer here - it's your application, architecture and
> team :]
>
> Roger
>

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
