Hi,

  Currently the sqlite code reads data from disk, sorts it, and returns it. I 
need to do the following, and would like to know how much work it will 
involve.

  Instead of reading from disk, I need to run a single select command against, 
say, 100 servers simultaneously in parallel (using threads). That is, run this 
query: 'select * from table where parent_name = 'name' order by name limit 10' 
on 100 machines, which will return 1000 rows in total. Then re-sort them, 
discard the other 990 rows, and return the first 10. There is no need for any 
synchronized writing, so this is essentially a database client that does its 
own re-sorting. A rough sketch of what I mean is below.
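
  The sketch is only an illustration of the scatter-gather idea: the 100 
remote servers are simulated by local database files (shard-0.db and so on), 
and the table and column names (tbl, name, parent_name) are placeholders I 
made up; the real system would replace the per-thread query with a network 
round trip to one of the 100 machines.

/* Hypothetical scatter-gather sketch. Each "server" is simulated by a
 * local SQLite database file (shard-0.db .. shard-99.db); a real version
 * would replace the per-thread work with a network call to a remote
 * machine. Build with: gcc gather.c -lsqlite3 -lpthread */
#include <pthread.h>
#include <sqlite3.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NSERVERS 100   /* number of shards queried in parallel */
#define LIMIT    10    /* rows wanted per shard and in the final result */

typedef struct {
    int  shard;             /* which database file this thread opens */
    char names[LIMIT][64];  /* up to LIMIT 'name' values returned */
    int  count;             /* how many rows this shard returned */
} Task;

/* Run the same SELECT against one shard, keeping its top-LIMIT rows. */
static void *query_shard(void *arg)
{
    Task *t = arg;
    char path[64];
    sqlite3 *db;
    sqlite3_stmt *stmt;

    snprintf(path, sizeof path, "shard-%d.db", t->shard);
    if (sqlite3_open(path, &db) != SQLITE_OK) {
        sqlite3_close(db);
        return NULL;
    }
    if (sqlite3_prepare_v2(db,
            "SELECT name FROM tbl WHERE parent_name = ? "
            "ORDER BY name LIMIT 10", -1, &stmt, NULL) == SQLITE_OK) {
        sqlite3_bind_text(stmt, 1, "name", -1, SQLITE_STATIC);
        while (sqlite3_step(stmt) == SQLITE_ROW && t->count < LIMIT) {
            const unsigned char *v = sqlite3_column_text(stmt, 0);
            snprintf(t->names[t->count++], 64, "%s",
                     v ? (const char *)v : "");
        }
        sqlite3_finalize(stmt);
    }
    sqlite3_close(db);
    return NULL;
}

/* Compare two fixed-size name buffers for qsort. */
static int cmp(const void *a, const void *b)
{
    return strcmp(a, b);
}

int main(void)
{
    static Task tasks[NSERVERS];
    static char all[NSERVERS * LIMIT][64];
    pthread_t tid[NSERVERS];
    int total = 0, i, j;

    /* Scatter: one thread per shard, all running the same query. */
    for (i = 0; i < NSERVERS; i++) {
        tasks[i].shard = i;
        pthread_create(&tid[i], NULL, query_shard, &tasks[i]);
    }
    for (i = 0; i < NSERVERS; i++)
        pthread_join(tid[i], NULL);

    /* Gather: pool the (up to) 1000 rows, re-sort, print the first 10. */
    for (i = 0; i < NSERVERS; i++)
        for (j = 0; j < tasks[i].count; j++)
            memcpy(all[total++], tasks[i].names[j], 64);
    qsort(all, total, 64, cmp);
    for (i = 0; i < total && i < LIMIT; i++)
        printf("%s\n", all[i]);
    return 0;
}

  With at most 1000 rows ever coming back, a plain qsort over the pooled 
results seems enough for the re-sort step.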

   How much effort would it take to create such a system starting from the 
current sqlite code base? How much would it cost me if I were to hire someone 
to do this? I don't know why such a system doesn't already exist. I have been 
searching the net, but couldn't find any resources on such a system. I am not 
a database expert, so I am not sure whether what I want can be accomplished 
trivially with some other straightforward SQL method.

   This constitutes a very high-end distributed database system, but only the 
search needs to span multiple databases. Once you get the row, the 
modification can be made on that specific machine very easily by the 
higher-level code; it is the multi-database search that has to be handled at 
the lower level, in C. A sketch of that modification step follows.
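
  Again simulated with local shard files and the same placeholder names (tbl, 
parent_name), and assuming the gather step recorded each row's home shard, the 
write is just an ordinary single-database statement against that one machine:

/* Update one row on the shard it was found on; a single-database write. */
#include <sqlite3.h>
#include <stdio.h>

int update_on_home_shard(int shard, const char *name, const char *new_parent)
{
    char path[64];
    sqlite3 *db;
    sqlite3_stmt *stmt;
    int rc = SQLITE_ERROR;

    snprintf(path, sizeof path, "shard-%d.db", shard);
    if (sqlite3_open(path, &db) != SQLITE_OK) {
        sqlite3_close(db);
        return 1;
    }
    if (sqlite3_prepare_v2(db,
            "UPDATE tbl SET parent_name = ? WHERE name = ?",
            -1, &stmt, NULL) == SQLITE_OK) {
        sqlite3_bind_text(stmt, 1, new_parent, -1, SQLITE_STATIC);
        sqlite3_bind_text(stmt, 2, name, -1, SQLITE_STATIC);
        rc = sqlite3_step(stmt) == SQLITE_DONE ? SQLITE_OK : SQLITE_ERROR;
        sqlite3_finalize(stmt);
    }
    sqlite3_close(db);
    return rc != SQLITE_OK;   /* 0 on success */
}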

   Thanks in advance.


--
:: Ligesh :: http://ligesh.com 
