lkcl has already answered you: "if you're expecting to have multiple computer
cards and speed up *desktop* applications, forget it".
You understand that a program running on a computer would not run faster if
you connect this computer to another one through a network cable. That is at
least the common situation. Indeed, it *could* run faster if the program was
written (and configured) to send part of the work to the other computer
through the network and get the result back.
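As a toy illustration of that "send part of the work, get the result back" pattern, here is a minimal Python sketch in which a worker process stands in for the remote computer and a pipe stands in for the network cable (the function names and the workload are illustrative, not taken from the discussion above):

```python
# A worker process stands in for the remote computer; the Pipe stands
# in for the network cable. Names and workload are illustrative.
from multiprocessing import Process, Pipe

def add_numbers(numbers):
    # The actual "work" shipped to the other side.
    return sum(numbers)

def worker(conn):
    # Like a tiny server: receive a task, compute, send the result back.
    task = conn.recv()
    conn.send(add_numbers(task))
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send([1, 2, 3, 4])  # ship part of the work out...
    result = parent_end.recv()     # ...and get the result back
    p.join()
    print(result)  # 10
```

Over a real network the pipe would be replaced by sockets or an RPC library, and the error handling (what happens if the other side crashes mid-task?) becomes the hard part.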
However, it is far easier said than done! The whole program must be rewritten to
work that way. You have technical issues to take care of (e.g., what happens
if one of the two systems crashes?). However, the biggest difficulty is to
actually find a way to divide the work into pieces that are, at the same
time:
- sufficiently large (otherwise most of the computational cost is spent on
the communication between the computers);
- not too large (if one piece represents 90% of the work, then the other
computer, which gets at most 10% of the work, does very little and, overall,
you get at most about a 10% speedup);
- mostly independent from each other (otherwise each computer spends most of
its time waiting for the other one to reply);
- requiring little input and generating little output (again to limit the
communication cost); and so on.
With several cores (rather than several computers)
sharing the same memory, some technical issues disappear, but the
organizational issues (splitting the work into mostly independent pieces,
balancing the workloads, synchronizing, etc.) remain. They are the most
difficult issues.
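To make the chunking trade-off concrete, here is a minimal Python sketch (the workload, a sum of squares, and the two-way split are illustrative assumptions): each process gets one large, independent, balanced piece, so communication is limited to one task message and one small result per process:

```python
# Splitting one computation into two large, balanced, independent
# pieces, one per process. The workload is an illustrative toy.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 1_000_000
    # Balanced halves: neither process waits long for the other.
    chunks = [(0, n // 2), (n // 2, n)]
    with Pool(processes=2) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total == sum(i * i for i in range(n)))  # True
```

With tiny chunks (say, one integer per task), the same program would spend most of its time on inter-process messaging rather than on computing the squares themselves.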
It is like dividing work among the members of a team (at work, at school) to
complete it as fast as possible. The work is organized differently for a
one-man team and for a two-man team; the two-man team rarely completes the
work in half the time required by the one-man team (the ratio greatly
depends on what the problem actually is), because the two members must
synchronize with each other, because one of the two receives a more
difficult piece of the work and needs more time to complete it (during that
time, the other one waits) so that both can tackle what follows, etc. To get
close to doubling the productivity
when you double the team, the problem that is solved must be easy to
distribute/parallelize. Not all problems are like that.
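This limit is usually summarized as Amdahl's law (the text above does not name it). Assuming a fraction p of the work can be parallelized over n workers while the rest stays serial, a quick calculation shows why doubling the team rarely halves the time:

```python
# Amdahl's law: ideal speedup when a fraction p of the work is
# parallelizable over n workers and (1 - p) remains serial.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of the work parallelizable, two workers give only
# about 1.82x, and real synchronization overhead lowers this further.
print(round(speedup(0.9, 2), 2))  # 1.82
print(round(speedup(0.5, 2), 2))  # 1.33
```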
Back to computer programs: desktop applications do not perform
time-consuming work that the programmer could divide and distribute to two
processes running on two cores. Most desktop applications are programs that
require little horsepower but must be responsive to user input. There is
simply not much (if anything) to gain through parallelization. And, again,
parallelizing is
hard. Just take as an example the desktop free software application with the
greatest development effort: Firefox. Until now, Firefox, like all your
desktop applications, has run in a single process. It should soon run in two
processes soon (
https://developer.mozilla.org/en-US/Firefox/Multiprocess_Firefox ),
performance is one of the motivations... but the price to pay is high
(rewriting a lot of the code, many hard-to-debug issues to expect, in some
use cases performance will actually be worse because of message-passing,
etc.):
https://developer.mozilla.org/en-US/Firefox/Multiprocess_Firefox/Motivation
See https://en.wikipedia.org/wiki/Multi-core_processor#Software_effects too.
What is the point in having multi-core desktop computers then? Well, if a
demanding application (say, playing an HD movie) is running, it eats up one
core but you can still comfortably execute another application at the same
time (it will run on another core).