All the pieces are open source. Re-write the code in the middle so that it
generates a new file with a new hash and reports the new hash along with
the new data. Or, if the hash is coming from the server, have a re-written
client for those "in the know" that ignores the hash entirely.
OK, we are talking completely at cross purposes. If the intermediate nodes
are not allowed to modify the data in any form, why are they there? When I
read the map-reduce description, it looks to me like the intermediate nodes
split the work into smaller pieces for the end clients to work on. This
only works if these nodes can be trusted, and they can't. If it is just a
data repository, why not just keep the data repository on the project
servers? In other words, what precisely are the intermediate nodes
supposed to be doing with the data?
There are two possibilities:
1) The intermediate nodes do not actually modify the data and are just a
data repository. If this is the case, why bother?
2) The intermediate nodes do change the data, in which case there are two
more cases:
2a) The modified data is sent back to the server for validation and
generation of a new round. End of problem, this can already be done.
2b) The intermediate node directly receives work requests from the next
level down. The intermediate node has to generate the hash for the
modified data, or the modified data does not have a hash. This is a pretty
obvious security hole: all the pieces are open source, and the
intermediate node would have to hold the key used to generate the hash.
There may be a way already to do what you are thinking about. However, the
data would never be communicated directly from one client to another. What
some projects do is have the output of round n feed round n+1. The results
from round n are reported back to the server which then generates round n+1
from that after the results from round n are validated. There is no reason
to limit round n+1 to be either the same or a different algorithm. So, you
could send out the map work to clients that report the results back to the
server which in turn validates that work and sends the reduce work out to
clients which then return the results to the server for validation.
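The round-based flow described here (round n results validated on the server, then fed into round n+1, with map and reduce both mediated by the server) can be sketched in Python. This is a minimal illustration only; `crunch`, `validate`, and the replication factor are stand-ins for the real BOINC validator/work generator, not actual BOINC components.

```python
import hashlib

def validate(results):
    """Quorum validation: accept a result only if at least two
    replicas agree (compared via digests of the returned payloads)."""
    digests = [hashlib.sha256(r).hexdigest() for r in results]
    for d in set(digests):
        if digests.count(d) >= 2:
            return results[digests.index(d)]
    return None  # no quorum; the work unit would be reissued

def run_round(tasks, crunch, replication=3):
    """Send each task to `replication` clients, validate, and return
    the accepted outputs; these feed the next round's work generator."""
    accepted = []
    for task in tasks:
        replicas = [crunch(task) for _ in range(replication)]
        result = validate(replicas)
        if result is not None:
            accepted.append(result)
    return accepted

# Round n (map) feeds round n+1 (reduce), both mediated by the server:
map_tasks = [b"chunk-1", b"chunk-2"]
map_outputs = run_round(map_tasks, lambda t: t.upper())
reduce_output = run_round([b"".join(map_outputs)], lambda t: t[:8])
```

The key property is that no client ever talks to another client: every output goes back through validation before it becomes the next round's input.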
In some cases, you may discover that you have a network transfer time to
CPU time ratio problem. We have had at least one project that failed
because it took longer to transfer the data to the clients than the clients
took actually crunching the data. BOINC is a high latency system as there
can be large chunks of internet between any host and any server. Some of
the links are still running at dialup speed.
Please note that BOINC has already had one version that was generated to
request higher credits because of a problem with benchmarks on one system.
When that problem with the benchmarks was fixed, some people kept using the
adjusted program in order to have slightly higher credit awards than they
should have.
"Just because you are paranoid, does not mean that they are not out to get
you."
"If you haven't thought of a way to cheat if you have all the secrets, you
aren't thinking hard enough."
jm7
Fernando Costa <flco...@student.dei.uc.pt>
To: <[email protected]>
Date: 11/05/2010 01:11 PM
Subject: Re: [boinc_dev] Hadoop and BOINC
Most of what you are saying sounds fine to me. If everything stays the
same, that means that MapReduce was successfully adapted to BOINC. Right
now, it's impossible to run MapReduce jobs.
That is the problem.
In the current BOINC, you can (and there's that option in the prototype),
with a few tweaks, schedule Map work units "as is", and then dynamically
create a Reduce work unit once all the Mappers have finished. Everything
goes through the server, and it is useful for certain types of jobs (the
ones in which the Map job executes a group operation, such as average or
max).
I'm not here to solve bandwidth or storage problems; that would be a
benefit derived from having to adapt BOINC to handle MapReduce jobs.
The encryption part I don't understand. If you use a strong enough hash
(SHA-256 rather than MD5, for example), the time it would take to forge
data matching it would render the attempt useless, since we have deadlines
for each work unit and the data is not constantly being used. The only
problem would be if somehow those saboteur intermediate nodes had access
to the key and created a new hash. As long as there's a signature created
by the server (upon receiving the Map hashes from the first phase, it
would decide if there was a quorum as usual), even if the original data is
changed, it would be extremely difficult to create different data that
would match the existing server-approved signature.
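A minimal sketch of the scheme just described, assuming SHA-256 for the output hash. The HMAC below is only a stand-in for a real server signature so the sketch stays dependency-free; a deployment would use a public-key scheme (RSA, Ed25519) so clients never hold the signing key.

```python
import hashlib
import hmac

SERVER_KEY = b"demo-signing-key"  # in reality a private key, never shipped

def server_sign(digest: str) -> str:
    # Server-side, after quorum on a Map output: sign its digest.
    # HMAC stands in for a public-key signature here.
    return hmac.new(SERVER_KEY, digest.encode(), hashlib.sha256).hexdigest()

def reducer_check(data: bytes, approved_digest: str) -> bool:
    # Reducer-side: recompute the hash of the data it fetched from the
    # intermediate node. Modified data cannot match the server-approved
    # digest without a SHA-256 collision.
    return hashlib.sha256(data).hexdigest() == approved_digest

map_output = b"year=1998 max=41.2"
digest = hashlib.sha256(map_output).hexdigest()
signature = server_sign(digest)  # published alongside the digest
```

The intermediate node only stores `map_output`; since it never holds `SERVER_KEY`, it cannot produce a valid signature for substituted data.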
There are several methods here to guarantee data validity and integrity,
extensively studied against sabotage in clusters, and over the Internet.
The question is how to choose those super nodes, and how to make cheating
not worthwhile. Again, there are quite a few options here, although not as
many have been studied in the context of volunteer computing and credits.
And again, I only spoke of super-peers as an option. Volpex also has its
share of problems, but David mentioned its integration with BOINC, to
allow inter-process communication.
MapReduce is such a simple concept that people often dismiss it as having
already been done, or as a step backwards from DBs or what already exists.
That's its main advantage, though. A huge number of applications can be
broken down into Map and Reduce steps. Even just having the "BOINC
supports MapReduce" stamp could generate a considerable amount of
interest, in my opinion.
Repeatedly saying that BOINC basically does this and that, and that it's
not worth trying anything else, is a mistake, and may close doors for
potentially interesting collaborations or projects.
Research may take years to reach production level; it's better to start
sooner than to wait until everyone has seen how obvious it is.
Fernando
P.S. - about concrete applications, the idea now is to see how it performs
on simple tasks. BOINC may indeed not be ready yet, and may never be, but
if the first experiments are reasonably successful, the next step would be
to adapt an existing application to it. There are a few options on the
table, but it really depends on how it will do, and if/how the
connectivity issue is solved.
----- Original Message -----
From: <[email protected]>
To: "Fernando Costa" <[email protected]>
Sent: Friday, November 05, 2010 4:01 PM
Subject: Re: [boinc_dev] Hadoop and BOINC
> By intermediate node, I meant everything between the server and the
> client.
>
>
>
> Having the server send data to intermediate repositories solves nothing
> that I can see.
>
> The requirements of the servers to generate the work are exactly the
> same.
>
> The bandwidth used is at best halved (ok, a very minor improvement -
> maybe).
>
> If the client has to decrypt the data, then so can the intermediate
> nodes.
> Again, the circle of friends can do damage in an attempt to cheat,
> unless you send each instance to a different intermediate node for
> storage and re-allocation. This removes the download bandwidth
> improvement. Remember, BOINC is open source, and people can and have
> made modified versions.
>
> The bandwidth required at the server for returning results is exactly
> what it was in the first place.
>
> The validation requirements at the server are the same as they were.
>
> The science that can be done is exactly the same, as the BOINC clients
> still cannot talk to each other during the computation.
>
> Please define the problem you are trying to solve first. It looks like
> you have a tool and you are looking for a problem to solve with it.
>
>
>
> Moving work generation off of project controlled servers would possibly
> help with bandwidth and server cost, but would generate a huge trust
> problem.
>
> Moving validation off of project controlled servers also could possibly
> help with bandwidth and server cost, but would also generate a huge trust
> problem.
>
> jm7
>
>
>
> Fernando Costa <flco...@student.dei.uc.pt>
> To: <[email protected]>
> Date: 11/05/2010 11:27 AM
> Subject: Re: [boinc_dev] Hadoop and BOINC
>
> Hi John,
>
>> One of the selling points of BOINC is that the clients only need an
>> outbound connection. Most of the nodes are singletons behind their own
>> firewall.
>
> I agree, but any changes to accommodate MapReduce would either be easy
> and rewarding to the client, or completely transparent and require no
> change on the user's part.
> It is a matter of tailoring for other types of clients as well. I'm
> hoping that, since MapReduce and Cloud Computing are such buzzwords
> right now, this could actually bring more people and projects to
> Volunteer Computing, and prove a point in terms of VC usefulness and
> real-case scenarios.
>
> I wasn't able to go to the BOINC workshop, but from what I have read of
> David's presentation, the same problem as last year is still there:
>
> "Napoleon: Volunteer computing just can't handle the kinds of jobs that
> real
> scientists run.
> Me: What precisely is different about these jobs?
> Napoleon: THEY'RE JUST DIFFERENT, THAT'S ALL"
>
>>
>> One major problem with map reduce is the requirement that the map
>> reduce nodes be trusted. That is OK as long as they are under the
>> direct control of the project, but no BOINC client can be trusted. I
>> know that it seems stupid, but in Classic SETI, one of the ways of
>> cheating was to share partially completed tasks with your friends;
>> those friends would then do the last few seconds of work, and everyone
>> got credit for doing all of the work. Another method of cheating was
>> to return garbage. If you allow super clients that can split the work,
>> what you will discover is that someone will set up a server for himself
>> and a few friends where some level of cheating occurs in the creation
>> of work, so that the tasks are "easy", so that credit can be granted
>> with no computer time spent by the end clients, and it could be
>> arranged that the garbage validated but had no relationship to the
>> inputs. Moving any part of the work generation off of the project
>> premises to computers that cannot be trusted is probably a bad idea.
>> Most BOINC projects have learned that the clients cannot be trusted.
>> There are many things that can go wrong, from intentional cheating, to
>> overly aggressive overclocking (so the FPU is unstable), to
>> overheating, to outright failure of the machine.
>
> I don't want to move any work generation to the client, only data
> (outputs, not executables) storage/management. Map outputs have to
> reach the reduce workers; that is the problem here.
> All scheduling and validation would still be done on the central
> server; the "super node" would only be used as a repository of data
> (encrypted, obfuscated, using any means necessary to guarantee the
> correctness and non-modification of the data).
>
> Guaranteeing correctness of the final output then becomes a problem of
> making sure that there is no collusion between "reducers". This problem
> already exists in BOINC: anyone can use an outside method to see if
> they are executing the same work unit and decide to return the same
> invalid result or garbage as output to the server, and it would never
> be known.
> If there were several layers of MapReduce programs running sequentially
> without server validation, then there would be bigger problems, but I'm
> considering that every step is validated, at least by sending the hash
> of the output(s).
>
>> Bit Torrent does not have this problem, as the checksum or hash can
>> always be checked to see if the file has been modified. I don't
>> believe that this can be done in the map reduce case for BOINC, as the
>> intermediate nodes would be required to modify the data. The bottom
>> line is that the intermediate nodes would have to be trusted by both
>> the project and the clients, and they are not controlled by either.
>> Also, every project that does not do good validation of some sort will
>> eventually have a problem with both users that unintentionally return
>> garbage and users that intentionally cheat to increase their credit
>> scores without doing the work.
>
> I'm not sure what you mean by intermediate nodes. MapReduce, as I would
> imagine it in BOINC (and as it works in the prototype), would be
> something like:
>
> 1) BOINC Server -----> Map workers (the BOINC scheduler sends Map tasks
> to clients as it would any other task)
> 2) Each Map worker -------> BOINC server (each BOINC client returns the
> hash of its Map output(s) to the BOINC server for validation against
> other map results, as done by BOINC already)
>
> --- All Map work units are done, Reduce phase can begin executing ---
>
> 3) BOINC Server ------> Reduce workers (the BOINC scheduler sends
> Reduce tasks to clients, as any other task, BUT also gives them the
> URL/IP/address of the Map workers that hold the required data - or
> super-peers, or a Volpex URI, or whatever)
> 4) Each Reduce worker -------> BOINC server (final outputs from the
> reduce task are returned to the BOINC server, validated as usual, and
> possibly further analyzed offline, or used for a next MapReduce step -
> not considered right now)
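The only part of the four steps above that differs from standard BOINC scheduling is step 3, where the reduce task carries the addresses of the map-output holders plus the server-approved hashes. A sketch follows; every field name, hash value, and URL is invented for illustration and does not reflect the real BOINC scheduler schema.

```python
# Hypothetical records for validated Map work units: the digest the
# server approved, and where the output is currently held.
map_results = {
    "map_0": {"hash": "ab12...", "holder": "http://198.51.100.7:31416/out0"},
    "map_1": {"hash": "cd34...", "holder": "http://203.0.113.9:31416/out1"},
}

def make_reduce_task(task_id, key_range, results):
    """Step 3: once every Map WU has a validated result, build a Reduce
    task that names the map-output holders so the reducer can fetch the
    data directly and check it against the approved hashes."""
    return {
        "id": task_id,
        "keys": key_range,
        "inputs": [{"url": r["holder"], "sha256": r["hash"]}
                   for r in results.values()],
    }

task = make_reduce_task("reduce_0", ("1990", "2009"), map_results)
```

The reducer never has to trust the holder: it refetches, rehashes, and compares against the `sha256` field the server sent.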
>
> I see quite a few trust and sabotage tolerance problems arising from
> introducing MapReduce to BOINC, but they do not seem unsolvable.
> From using spot-checking to periodic data-integrity verifications or
> assigning super nodes statically (known users to the project), there
> are quite a few well-researched options that theoretically should work.
> There are no 100% fail-proof mechanisms when dealing with volunteer
> machines: even with the most basic of applications that are currently
> supported, it is still possible to sabotage a work unit.
>
>>
>> Some of the projects already use distributed data centers. CPDN is one
>> example of this.
>>
>> Corporations tend to act allergic to things like Bit Torrent. These
>> services tend to have a very bad reputation. At least some of the
>> clients are behind corporate firewalls.
>>
>
> I don't intend to introduce BitTorrent (although it could probably
> improve transfer speed between clients). I just used BitTorrent as an
> example of a working system that has proven, with real-world examples
> and years of consolidated research on extremely large swarms (millions
> of users), that it only takes a small percentage of altruistic peers to
> make such a data transfer system work (the same happened with KaZaA, as
> well as Gnutella).
> MapReduce has, at least for now, an aura of respectability, and since
> it's related to the "Cloud", that automatically gives it an aura of
> importance as well (unlike BitTorrent, which is naturally associated
> with piracy).
>
> Some clients will probably never be able to join a MapReduce project,
> or will only join those in which the data always goes through the
> server (or one of those data centers that einst...@home also uses). It
> is a price to pay for a paradigm that is used more and more often, and,
> as with many things Google, I doubt it will disappear anytime soon.
>
> Fernando
>
>> jm7
>>
>>
>>
>> Fernando Costa <flco...@student.dei.uc.pt>
>> Sent by: BOINC Developers Mailing List <[email protected]>
>> To: <[email protected]>
>> Date: 11/05/2010 10:06 AM
>> Subject: Re: [boinc_dev] Hadoop and BOINC
>>
>> That's true, the server can be represented as a sole "reducer".
>> The objective, though, is to distribute as much as possible and reduce
>> sequential work, so if possible, it would be beneficial in some cases
>> to have a distributed group of "reducers".
>>
>>>> BOINC only needs an outbound connection to the internet, would
>>>> map-reduce change that?
>>
>> Not necessarily; other projects would still work regardless of
>> MapReduce integration (this is a requirement: existing clients or
>> projects without MR jobs would have to work as usual). Even on
>> projects using MapReduce, there are alternatives. The idea would be to
>> make this as transparent to the user as possible; we don't want to
>> raise the barrier to entry into a project, or make life harder for
>> those already using BOINC.
>> I can think of several ways to get past the firewall issue:
>>
>> - Have an option to choose the port on which to receive incoming
>> connections, in the client GUI. Much like BitTorrent does it: people
>> who want to make the most out of it have to open ports on the
>> router/firewall. Not an ideal solution, but simple enough to at least
>> be an option.
>>
>> - Publish/subscribe distributed data center, with a mechanism similar
>> to Volpex. All communication is initiated by the client that has the
>> data, which is then stored in and retrieved from a "mirror".
>>
>> - Super-peers / nodes. They could act as the data center mentioned
>> above, or simply be clients that have more hardware capability, faster
>> network throughput and higher availability, and possibly an external
>> IP address. They could be rewarded according to the amount of data
>> that was retrieved and/or sent through them. It would be a Skype- or
>> KaZaA-like arrangement; the problem here would be the amount of data.
>> There is a project under development in Cardiff, called Attic
>> (http://www.atticfs.org/), that was supposedly dealing with this exact
>> issue. I helped out a couple of years ago, so I'm not sure they're
>> still going in the same direction.
>>
>> - Go through the central server. This is what is already being done,
>> as Nicolas said. The only possible difference would be to have the
>> outputs shipped out to a "reduce" phase on clients, instead of running
>> everything on the central server. There are applications that could
>> benefit from this, as I'm sure some projects have enough bandwidth but
>> not the computing power or storage to handle larger analyses.
>>
>> BitTorrent is actually a good example of a system that works well with
>> inter-client data transfers. Something like 20% of the users are
>> responsible for 80% of the uploads, which gives some support to the
>> possibility of having a small set of trusted nodes, rewarded for their
>> extra work. In BitTorrent they are rewarded with better speeds -
>> although one can argue whether that's enough of an incentive, and
>> whether they're not simply altruistic and working for the community.
>>
>> There are other possibilities, such as moving inter-client
>> communication to UDP and using hole-punching techniques (they do not
>> work too well in the real world, I'm told); I'm sure it would be
>> possible to get around this.
>>
>> Fernando
>>
>> ----- Original Message -----
>> From: "Nicolás Alvarez" <[email protected]>
>> To: <[email protected]>
>> Cc: "Fernando Costa" <[email protected]>;
>> <[email protected]>
>> Sent: Friday, November 05, 2010 4:12 AM
>> Subject: Re: [boinc_dev] Hadoop and BOINC
>>
>>
>>> IMHO most BOINC projects are already map-reduce. The "map" steps are
>>> done by BOINC clients (mapping an input file into an output file),
>>> and the "reduce" step is done by the server later (taking the output
>>> files from BOINC clients and reducing them into the answer to life,
>>> the universe and everything).
>>>
>>> El 04/11/2010, a las 09:58, [email protected] escribió:
>>>> OK, and what do you do with the reduced data? If the next step is
>>>> to send the reduced data to another client, there is a major
>>>> problem. BOINC clients cannot, in general, talk to each other, as
>>>> each one may be behind its own firewall. If the next step is to send
>>>> the data to the same client, is there a point?
>>>>
>>>> In general, the graph looks like:
>>>>
>>>> Your Machine on your network:
>>>> BOINC Client ---- Firewall ---- Cloud ---- Project Server
>>>> My Machine on my network:                /
>>>> BOINC Client ---- Firewall ---- Cloud --/
>>>>
>>>> Most of the BOINC clients are singleton clients, as opposed to the
>>>> BOINC farms where a single individual may own several computers that
>>>> are all attached to BOINC. Even in the case where a single
>>>> individual has control over two computers, one may be in the home
>>>> office and the other may be in the work office, with separate
>>>> firewalls and separate firewall policies where the clients cannot
>>>> talk to each other.
>>>>
>>>> BOINC only needs an outbound connection to the internet; would
>>>> map-reduce change that?
>>>>
>>>> jm7
>>>>
>>>>
>>>>
>>>> Fernando Costa <flco...@student.dei.uc.pt>
>>>> To: [email protected], [email protected]
>>>> Cc: Ali Gholami <[email protected]>
>>>> Date: 11/04/2010 07:54 AM
>>>> Subject: Re: [boinc_dev] Hadoop and BOINC
>>>>
>>>> Hi,
>>>>
>>>> I'm actually working on the subject, and have a BOINC prototype that
>>>> can run MapReduce jobs. It is not a BOINC-Hadoop integration,
>>>> though; I did not use any of its code and cannot run any of its
>>>> apps. I simply made changes to the BOINC client and server to be
>>>> able to run a Map phase and then use the outputs in a Reduce
>>>> function afterwards.
>>>>
>>>> The patent part I was not aware of, but if Hadoop has received a
>>>> license for it, and is still used by many big names like IBM,
>>>> Cloudera, Yahoo and MS Bing in clusters, I don't see why it could
>>>> not be applied in an Internet environment. Map and Reduce operations
>>>> have been around for decades, the patent does not prevent their use,
>>>> and there are so many differences when moving it out of a data
>>>> center that I don't think there will be a problem.
>>>>
>>>> From the patent itself:
>>>> http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=/netahtml/PTO/srchnum.htm&r=1&f=G&l=50&s1=7,650,331.PN.&OS=PN/7,650,331&RS=PN/7,650,331
>>>>
>>>> "What is claimed is:
>>>>
>>>> 1. A system for large-scale processing of data, comprising: a
>>>> plurality of processes executing on a plurality of interconnected
>>>> processors; the plurality of processes including a master process,
>>>> for coordinating a data processing job for processing a set of input
>>>> data, and worker processes;"
>>>>
>>>> In BOINC's case, there are no interconnected processors, the master
>>>> process is the server, the tasks are not assigned by the master "per
>>>> se" (they are requested by the clients themselves), and there is no
>>>> Google File System (or Hadoop's HDFS) - they refer to it as "a
>>>> plurality of intermediate data structures are used to store the
>>>> intermediate data values".
>>>>
>>>> Anyway, talking to Google directly would probably be best, and I
>>>> don't think they would have any problem with it. If MapReduce could
>>>> effectively be applied to BOINC, and Volunteer Computing in general,
>>>> the patent should not be enough of a reason to stop us from at least
>>>> trying.
>>>>
>>>> I'm starting to run the first tests on a smaller scale, on a
>>>> cluster. There are still many issues to tackle, such as connectivity
>>>> (Volpex, super-peers, even going through the server come to mind),
>>>> but the fact that there are so many different MR applications out
>>>> there means that we can experiment with several alternatives before
>>>> dismissing it as a data-intensive paradigm for clusters only.
>>>>
>>>> Just as a quick example - a MR job to get the average of the max
>>>> temperature of each year for the past 100 years, where the input is
>>>> measurements from thousands of weather stations from around the
>>>> world.
>>>> Each Map task would gather part of the input data, parse it, and
>>>> output the max/avg for every year (which means only 100 values - 1
>>>> per year - as output for each Map). Map is already done by BOINC,
>>>> since it's embarrassingly parallel. This output would then have to
>>>> be sent to different Reduce workers, each responsible for a unique
>>>> set of keys (for example, each reduce would get the output for 2
>>>> decades, so we would have 5 Reduce tasks).
>>>> The communication between Mappers and Reducers would be minimal, and
>>>> the initial data would either be downloaded from the central server,
>>>> or be previously distributed and stored in clients - like the
>>>> stor...@home project wanted to do, in fold...@home.
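The weather example above can be sketched in plain Python; the readings are invented for illustration, and each function body stands in for what a Map or Reduce work unit would compute on a client.

```python
from collections import defaultdict

# Invented sample readings: (year, daily max temperature) per input chunk.
chunks = [
    [(1998, 33.1), (1999, 29.4), (1998, 41.2)],
    [(1999, 35.0), (1998, 27.8), (2000, 31.6)],
]

def map_task(chunk):
    """One Map WU: parse its chunk and emit the max seen locally for
    each year (at most one value per year, as in the example above)."""
    local_max = {}
    for year, temp in chunk:
        local_max[year] = max(temp, local_max.get(year, temp))
    return local_max

def reduce_task(key_range, map_outputs):
    """One Reduce WU: responsible for a unique set of years; merges the
    per-chunk maxima for just those keys."""
    merged = defaultdict(list)
    for out in map_outputs:
        for year, temp in out.items():
            if year in key_range:
                merged[year].append(temp)
    return {year: max(temps) for year, temps in merged.items()}

map_outputs = [map_task(c) for c in chunks]
result = reduce_task({1998, 1999}, map_outputs)  # one reducer's key range
```

Note how little data crosses the Map/Reduce boundary: each mapper ships only one value per year, which is why the inter-client communication stays minimal.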
>>>>
>>>> Just my 2 cents, this could all be a mistake but it's worth a shot.
>>>>
>>>> Fernando
>>>>
>>>> [email protected] wrote:
>>>>> MapReduce looks like it is designed for multiple steps in the
>>>>> process of breaking up the problem on a tightly linked, trusted
>>>>> server cluster. BOINC is loosely linked, and the devices are not to
>>>>> be trusted. The end hosts also cannot talk to each other, as many
>>>>> are behind firewalls and will not allow incoming connections.
>>>>>
>>>>> It is also true that Google is claiming a patent on the algorithm.
>>>>> BOINC needs to stay away from patented code if at all possible.
>>>>>
>>>>> MapReduce might work as a part of the splitter for a single
>>>>> project, if the data set makes sense for that. I do not see how it
>>>>> would work anywhere else in BOINC.
>>>>>
>>>>> Anyone have any other ideas?
>>>>>
>>>>> jm7
>>>>>
>>>>>
>>>>>
>>>>
>>>>> Ali Gholami <aligh.mail...@gmail.com>
>>>>> Sent by: <[email protected]>
>>>>> To: [email protected]
>>>>> Date: 11/02/2010 02:30 PM
>>>>> Subject: [boinc_dev] Hadoop and BOINC
>>>>>
>>>>>
>>>>> Hi everyone,
>>>>>
>>>>> I've a question about integrating BOINC with Hadoop (an open
>>>>> implementation of the MapReduce framework). I've read a little bit
>>>>> about BOINC and how it supports other projects, particularly in the
>>>>> fold...@home area. I'm just wondering if Hadoop can be useful in
>>>>> terms of integration with BOINC. I'd appreciate it a lot if you
>>>>> have some ideas or some guides so that I can understand this
>>>>> problem better.
>>>>>
>>>>>
>>>>> Best regards
>>>>> Ali Gholami
>>>>> _______________________________________________
>>>>> boinc_dev mailing list
>>>>> [email protected]
>>>>> http://lists.ssl.berkeley.edu/mailman/listinfo/boinc_dev
>>>>> To unsubscribe, visit the above URL and
>>>>> (near bottom of page) enter your email address.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>
>>
>
>
>
>