I suspect that the way quantum computers are going to be used initially
is as "attached processors", where some some large piece of tin
slops data from large datasets, passes it to the quantum computer,
hits the GO button, and waits for a result.

From my understanding, the interesting problems that a quantum
computer should be good at are those, such as the hard problems in NP,
for which no polynomial-time algorithms are known on classical processors.
APL, J, SAC, and similar languages should have a leg up here,
because their abstract nature lends itself to adjusting to new
environments without requiring a user to redesign their application.

Re the possible "defect rate" mentioned below, this is unimportant
for a large class of decision problems (those with a yes/no answer),
because a putative solution can be verified or rejected in polynomial
time.

E.g., integer factorization is in NP, as is map coloring. For both
of these, a putative solution can easily be verified.
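
For instance, the whole verification step for factoring amounts to
something like this in J (numbers made up for illustration):

   n =: 1147             NB. the number we asked to have factored
   p =: 31
   q =: 37               NB. the putative factors that came back
   n = p * q             NB. 1 means accept the answer, 0 means reject it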

A harder problem is the "travelling salesman problem",
in which we want to compute the shortest route that visits each
city in a given set exactly once and returns to its starting point.
The key here is "shortest": we can easily verify that a putative
solution visits each city once and returns to its starting point,
but we do not have any good way to ensure that the
putative solution is, in fact, the shortest one, except by
trying all possible routes.
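
The verification half of that is just as cheap; a small J sketch, with
an invented distance matrix and tour:

   d =: 4 4 $ 0 3 5 9  3 0 4 7  5 4 0 2  9 7 2 0   NB. made-up distances
   tour =: 0 2 3 1                                  NB. putative solution
   (i. 4) -: /:~ tour              NB. 1: each city appears exactly once
   +/ d {~ <"1 tour ,. 1 |. tour   NB. tour length, return leg included
   NB. ...but showing that length is minimal means comparing against
   NB. every other tour, which is the part we cannot do cheaply.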

Both of these types of problems are, I think, good fodder
for quantum computers. Those, and codebreaking, of course.

I am also, btw, a fan of optical computing...

Bob

On 13-10-12 11:29 AM, Raul Miller wrote:
Personally, I'd rather speculate about optical computing
infrastructures. We had the potential to start building that kind of
thing back in the '90s.

But, ok: a press release I found referencing the D-Wave indicates 128
qubits in the device. But I see nothing about how long it takes them to
converge and stabilize, nor how long it takes to set up the conditions
beforehand, nor whether there are any asymmetries nor anything about
failure rates. So, basically, it's suitable for implementing a search
mechanism that takes some amount of time (seconds? days?) to converge
and has some unknown level of reliability.

To use it for J, we'd want to be shuffling large amounts of data
through the chip. So, for example, a 1 GB array might take (1 GB * 8)/128
times as long to process as the unknown amount of time we would need
for a single operation. It's not yet clear whether the chip is
suitable for addition, though, so perhaps it would take considerably
longer to perform addition than some other operation.
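
As a back-of-envelope check of that factor in J (assuming 1 GB means
2^30 bytes and all 128 qubits are usable per pass):

   passes =: (8 * 2^30) % 128   NB. about 67 million passes per operation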

Now, let's suppose that it has a 0.00001% defect rate - it might be
interesting to speculate how to manage this defective computation rate
to ensure a reasonable degree of accuracy in our hypothetical
operations, if only we knew how those operations worked.
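
Taking the 0.00001% figure at face value (1e-7 per operation, failures
assumed independent), one line of J shows why it would matter at this
scale (the verb name is just for illustration):

   atleast1 =: 3 : '1 - (1 - 1e_7) ^ y'   NB. P(at least one defect in y ops)
   atleast1 67108864                      NB. close to 1 for the pass count above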

Does that help?

Or maybe it would be better to just not use J and to instead use some
custom language specialized in dealing with whatever this
chip eventually winds up being able to do if it ever winds up working
in a meaningful fashion. We haven't even gotten J to work yet on GPU
infrastructure. For that matter, I was looking at the lapack lab today
and I see carefully written documentation on a routine named dgbrfs
but I can't find any hint of how I could make that routine run
(geev_jlapack_ works just fine, however - as near as I can tell,
anyways).

Thanks,



--
Robert Bernecky
Snake Island Research Inc
18 Fifth Street
Ward's Island
Toronto, Ontario M5J 2B9

berne...@snakeisland.com
tel: +1 416 203 0854

