At 02:29 PM 6/5/2012, Peter Gluck wrote:
I hope to discuss a lot of LENR subjects
with our colleague Abd.
Today, now, here:
http://egooutpeters.blogspot.ro/2012/06/discussing-with-my-colleague-abd-about.html
we are starting to exchange ideas re Reliability in CF/LENR.
We are far from agreement, but this makes the dispute interesting.

I don't see that we are far from agreement, but maybe Peter sees something I don't.

Here is the discussion, with my responses interspersed:

DISCUSSING WITH MY COLLEAGUE ABD ABOUT RELIABILITY
I have to apologize again: my old-age weaknesses allow me to answer Abd only step by step, in fragments, to his well-ordered arguments. I have to recognize that in our LENR circles, what he says is more readily accepted than my ideas.

Perhaps. What that means, I don't know. Maybe nothing. Peter, you have extensive experience, which is to be respected. I'm a writer, so it's my business to be effectively communicative. I'm still learning, though.

 So, now about reliability.

[I wrote:]

Well, there *may be* inherent weaknesses of PdD LENR as set up by known methods. A premature vision of an ultimate application can kill new discoveries, allowing them to be dismissed as worthless even if they are real; and if their reality is in question, it's a double whammy. What we need is Science, and it comes before Energy, if we need reliability for Energy. *We do not need reliability for Science.* It is desirable, that's all.

I think there definitely are inherent weaknesses, uncontrollable hidden parameters, in the Pd-D cells, and these are almost ubiquitous in this system and difficult to cure. I have met similar things in my lab and pilot-plant practice: something that went fine 1000 times suddenly became impossible, a colorless product (as it had to be) coming out red or dirty grey, at first with no obvious explanation. Many times I thought of writing a report about occult phenomena in technology, but then we found a simple, straightforward causal explanation and we solved the problem by removing, killing, that cause.

Yes. Until you identified the cause, it was totally mysterious. Gremlins. Bad juju. Whatever.

The weaknesses of the Pd-D cells are unusually stubborn.

Electrochemical PdD experiments are *extremely* complex. With gas-loading, the complexity may be reduced, but a great deal depends on the exact structure of the particles or Pd material. And it will change with loading and deloading.

I am firmly convinced that poisoning of the active centers (NAE) by adsorption of gases that are NOT deuterium (it seems anything will do, not only the very polar gases, as I had thought) explains this long series of troubles. I will write a new paper about poisoning these days. Nobody will believe it, just the Pd-D cells.

I'll believe it in that I consider it possible. Why not? However, I don't see this as explaining the difference between the first, second, and third current excursions in SRI P13/P14, which was a sealed cell. It's not impossible, though, because the first and second excursions, showing no heat, may have cleaned off the cathode.

It was crucial to identify the reasons for such variability. The skeptics did not get the import of variability; they thought it meant that the effect was down in the noise. However, that's what SRI P13/P14 showed so clearly: the effect, when it appears, is striking, not marginal. Of course, sometimes there is an effect close to the noise. But a strong, quite visible effect is one of the characteristics of a successful replication of the FPHE, not something questionable, where we look at a plot and say, "Well, see, it's a little bit above the noise there, for a few hours." Maybe. Or maybe that is just noise a little higher than usual.

Reality is really good if it is repeatable, and it is bad when it plays perfidious hide-and-seek with us.

Ultimately, it appears, reality does play hide-and-seek, at the quantum level. But I don't think that's happening here. Regardless, reality is not "bad." Period. It's just reality. We make up good and bad. This is not you, but "scientists" who reject experimental data because they don't see repeatability in it are just fooling themselves. What they don't see means nothing. Saying "I don't understand this" is fine. Saying "you must have made a mistake" is the problem, unless the error can be identified, not just guessed.

I agree with Abd re the premature vision: it is not good to focus only and immediately on applications and not explore the full richness of the phenomena, process, and product, whatever.

Focusing on applications is not as powerful, and it runs the risk of an enormous waste of time. Look, it was obvious from the beginning that there *might be* enormous promise in cold fusion. But it was also obvious, within a few months, that this was not going to be easy, at least not with the FP approach. Yet people had experimented for a long time with no clear evidence of fusion, and casting about to find a new approach was probably not so wise either, in the sense that the new approach was likely to be obscure itself.

The deepest error that Pons and Fleischmann made was in not disclosing how difficult it was, with the original announcement, and, if not there, with the original paper.

For those convinced that LENR was real by the P&F results, and by other confirmation, including perhaps their own, pursuing more reliable approaches did make some sense. However, if these people were convinced it was real, and especially if they had success replicating P&F, they might consider the value of carefully studying what they already were able to make happen. Some did that, perhaps. Some did not.

I do not clearly get, and do not agree with, what Abd says re reliability in Science. It is about the experimental results; these are used in the very Scientific process as described here: http://egooutpeters.blogspot.ro/2011/08/how-does-apply-prof-piantelli-rules-of.html and here: http://egooutpeters.blogspot.com/2011/08/scientific-values-of-professor.html
There are many other papers about the scientific method, and solid results are necessary for developing understanding/theory.

Say, in 3 identical experiments we obtain 5, 10, and 0 units of helium, and only the second gives measurable heat: what can we conclude? This is a practical example. Does LENR have a genuine scientific theory?

Not from that example!!! The correlation there is quite weak, and, if this is a real CF experimental series, I'd suspect that the heat is close to the noise. That is, from the expectation d -> He, we'd expect half as much heat with the first as with the second, but you have only the second showing heat.

This is too short an experimental series to do more than provide an indication, and the indication here could be that one of the heat measurements is punk.
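
To put a rough number on that, here is a minimal sketch; the heat values are illustrative assumptions (only the pattern, heat seen solely in the second run, comes from Peter's example). With n = 3, even a seemingly high correlation coefficient carries no statistical weight:

# Minimal sketch; heat values are illustrative assumptions.
from scipy.stats import pearsonr

helium = [5, 10, 0]  # Peter's hypothetical helium yields
heat = [0, 1, 0]     # measurable heat only in the second run (arbitrary units)

r, p = pearsonr(helium, heat)
print(f"r = {r:.2f}, p = {p:.2f}")  # r ~ 0.87, but p ~ 0.33: no significance at n = 3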

Real example, one of the two or three best:

Miles' work. Miles did a set of CF experiments and controls. His full series as reported by Storms involved 33 helium samples taken and analyzed blind. These were samples of the cell gases. Miles had data on heat generation from these cells before the samples were taken. Multiple samples were taken from some cells; I originally thought this was 33 cells. Not so. A weakness, but not a disaster. (Better if all cells had been treated equally, all cells were identical, etc. There were some differences, which actually weakens the result; i.e., included in the series were some cells where something quite different was going on, and that makes the work look *less* conclusive. But I won't go into that here.)

Of the 33 samples, 12 were from cells showing no anomalous heat, and no anomalous helium was detected in them. 18 showed heat, and, from them, helium was detected within an order of magnitude of the helium expected from d -> He-4. The more heat, the more helium, within experimental error. (The measurements were rough, unfortunately, only order-of-magnitude detection.)

That leaves three samples. One cell experienced a power failure and deloading, so calorimetry error was suspected; the other two were from a cerium-palladium alloy cathode. They showed heat, but no helium. What happened? We don't know. Nobody followed up, the classic story of cold fusion. Mysterious results, sitting in the record, with no follow-up.

This is a strong correlation, even with those three anomalous results. Miles calculated one chance in 750,000 of this happening by chance.
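
That 1-in-750,000 figure is Miles' own, conservative calculation; I won't reproduce his method here, but a cruder independence check along the same lines is easy to sketch, treating the 30 unambiguous samples as a simple two-way split:

from math import comb

# Crude independence check (not Miles' method): if helium detection were
# unrelated to heat, the chance that the 18 helium-positive samples land
# exactly on the 18 heat-producing samples out of 30 is 1 / C(30, 18).
print(f"one chance in {comb(30, 18):,}")  # one chance in 86,493,225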

You could also look at the SRI Case replication, reported in the 2004 DoE review paper. It was poorly explained. When it's fully understood (I had to read other papers to get it), it shows this same phenomenon: no heat, no helium. Varying amounts of heat, varying amounts of helium. SRI also studied the time behavior of accumulated helium, and did one experiment where they attempted to recover all the helium (that's the hard part!), finding a ratio of heat/helium quite close to the theoretical value for d -> He-4.
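
For reference, the theoretical benchmark is straightforward arithmetic, assuming all of the reaction energy from d + d -> He-4 (23.8 MeV per helium atom) appears as heat:

# Back-of-the-envelope benchmark for the heat/helium ratio.
MEV_IN_JOULES = 1.602e-13
q_per_atom = 23.8 * MEV_IN_JOULES  # ~3.8e-12 J per He-4 atom produced
print(f"{1 / q_per_atom:.2e} He-4 atoms per joule")  # ~2.6e11 atoms/J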

In your page on Piantelli, you say:
the stage of observation and description was so difficult and has consumed so many resources and creativity -- rewardless -- that CF remained practically stuck here. The trend to compensate for and overcome this with many bold, fantastic, disruptive theories (some 160?), sometimes based on absurd hypotheses but without a trace of experimental proof, including reactions with an astronomically low probability, was definitely non-Galilean.

It was largely rewardless because many researchers were not looking at the treasure they had in their hands if they managed to occasionally see excess heat. They bought the idea that this was some kind of failure. No, it was success. It was indeed difficult to arrange a demonstration of the FPHE. However, it seems that those who persisted did find it. Indeed, it may have been most difficult for those who were lucky and found it quickly! -- because it then disappeared. I can imagine the agony. However, the gold was in investigating the conditions of appearance and disappearance.

And if a practical application is possible, setting Rossi et al aside, it will very likely be from theory enabled by the presence of more data from what should have been done twenty years ago. The idea that it was necessary to get reliability permeated the field, and that was an error. Reliability would very likely follow from a successful theory. Or not.

(With Rossi, if that's real, the investigation will follow and theory will be developed based on that. Rossi, in a sense, got lucky -- if this is real -- though he "got lucky" from what he says was a thousand variations he tried. Essentially, he explored the parameter space, trying lots of combinations. It can work. In fact, I'm suggesting something like that, only with systematic exploration, with special focus on answering extant experimental questions.)

This discussion has reminded me of a paper I wrote some 6 years ago: http://newenergytimes.com/v2/news/2006/NET17.shtml It is about cold fusion as a wicked problem. You will see that my ideas have rather deep roots.

Yes. "Wicked problem." Peter, you caught the disease, you looked at cold fusion with an eye that only saw value in high COP (which is very different from reliability, by the way, 10% excess power, reliably, would be spectacular *for the science*), and you compared a few thousands of what you called "sick cathodes" with heat less than 30% with "many thousands" of "dead cathodes\". 30% of input power, with the FPHE, is actually way above noise, more than adequate for systematic study. Pons and Fleischmann, as I recall, had a "dead cathode" rate of 5/6. The practical implication of this is that one must run many cathodes, and, from what I'm seeing (Letts is graciously allowing me to watch his work-in-progress), a "dead cathode" can become "live" by continued electrolysis, sometimes. So it's not the cathode that is dead, but the patience of the researcher.

The point is that one out of six is actually fine, not terribly difficult, except for one thing: it can take months to run one of these experiments. So, if one is serious, one must run many cells in parallel, which is exactly what Pons and Fleischmann did in their later work. I've been suggesting expanding this by making cells smaller and cheaper; the limit is the smallest cell for which heat can be measured with reasonable separation from noise. NASA is apparently exploring cells-on-a-chip, with many cells built on a substrate, perhaps using techniques common in electronics. I assume that, with the connections through the substrate, individual cells can be run together with the others, or separately, all being immersed in the same electrolyte (if this is electrolytic; or in the same gas, if this is gas-loading).
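
The arithmetic behind running many cells in parallel is simple binomial yield. A minimal sketch, taking the 1-in-6 live rate from above and an array size that is purely an illustrative assumption:

from math import comb

def p_at_least(k, n, p):
    """Probability that at least k of n independent cells come up live."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# With a 1-in-6 live rate per cathode, a 100-cell array (size illustrative)
# almost certainly yields a useful number of live cells per run.
print(f"{p_at_least(8, 100, 1/6):.3f}")  # ~0.99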

If research can identify markers of the reaction other than heat and helium, it could be *extremely* useful. For example, suppose that active PdD produces a characteristic sound. (This is reported by SPAWAR, by the way). It might then be possible to monitor instantaneous reaction levels, even more quickly than through calorimetry. Monitoring IR emission could do this as well. I've wondered about visible light. There should be some, if palladium is being melted, as appears in some SEM images of cathodes. (Etc.)

This kind of research would vastly speed up engineering the effect, even without a sound theory.

*Without needing any new approach to be invented.* Of course, if more reliable methods of triggering LENR are found, great. I expect the same kind of work can be done with NiH, for example.

More accurately, in Science we define "reliability" in a different way than we do in engineering: statistically, it seems. (Sophisticated engineering actually does the same thing, looking at failure rates, not perfect reliability.)

I think reliability in Science, engineering, business, marriage, and musical interpretation is, grosso modo, the same overall. Statistical reliability in engineering and production is about a small proportion of under-quality pieces; a minimum is, say, 98.5% good items.

Depends on the nature of the application. However, reliability of an effect is not necessary in science, it is simply one more characteristic that is measured, by accumulating experience and quantifying it. X out of 100 cells tested following Protocol Y were found to exhibit anomalous heat above 5% of input power. Then we look for associations present with X and not with not-X, or vice-versa. We try variations, etc. And we also run the *same* series again.
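
As a sketch of what that quantification might look like, here is a standard interval estimate for such a success rate (the 17-out-of-100 figure is hypothetical):

from math import sqrt

def wilson_interval(k, n, z=1.96):
    """Approximate 95% Wilson score interval for a success rate k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# e.g., 17 of 100 cells showed anomalous heat above 5% of input power
print(wilson_interval(17, 100))  # roughly (0.11, 0.26)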

The difficulty is that electrolytic cold fusion is extremely sensitive to seemingly trivial variations in the material. This is one reason why I think the most productive work will be with electro-deposited palladium: it may, particularly with thin layers, be easier to control that deposit. But there are still many ways to mess it up, apparently. An advantage of deposited techniques: they are generally cheap.

However, my personal professional experience comes from an extreme area where reliability = safety and unreliability was deadly danger. As an airplane pilot, I could not err twice. Working with hydrogen cyanide, phosgene, vinyl chloride, and other explosive or corrosive stuff, we used very reliable vessels, pumps, and gaskets; the gas masks and the fire-extinguisher systems were prepared for intervention. At one stage of my career I became an expert in dust explosions and learned a lot from the terrible accidents investigated. In theory, many engineering systems go asymptotically toward perfect reliability.

It becomes possible with experience. One of the big concerns about CF is that, occasionally, heat production has been enormous, cf. Pons and Fleischmann's cell meltdown. However, if cell performance becomes reliable, within a few percent, say, such an outlier becomes quite unlikely. That meltdown cell was bulk palladium, a 1 cm cube. It would be interesting if someone, taking appropriate precautions, were to run that again. The worry: that the meltdown was at the low end of what might happen... but it's unlikely.

I also want to emphasize an important idea- engineering is based on Science but comprises much more than Science.

Yes. However, Science makes Engineering more efficient.

There are many empirical or partly empirical elements of Know-What and Know-How (including Know-How-Not! and Know-Why), rules and best practices, not all of them rational, quantitative, and easy to get. Technological reliability is a very complex and sometimes tricky issue. It has to be ingrained in the psychology of workers and users.

It's an engineering issue, which, I'm contending, is not necessary for scientific research, except as to techniques. I.e., the calorimetry for cold fusion experiments should be engineered to be reliable. And that can be tested and quantified. (And has been, repeatedly.)

The reliability argument against cold fusion is a red herring as to the science of it. It's only truly important in the matter of practical applications. Even there, there can be ways to work around reliability issues. Under some conditions, many devices can be built into one. However, if the units work, or fail to work, together, then this approach can itself fail. But if that is so, then it's likely that conditions can be found where all or most devices will work!

Cold Fusion is unreliable and has no usable, predictive explanation or theory. It is low-intensity, low-reliability, and does not last, usually. It is more similar to a shark than to a herring. I don't understand this idea of many devices combined. Unreliable is unreliable in science too. I apologize for this painful tautology.

Reliability through multiplicity comes from microchip manufacture. As an example, some techniques of making large memories produce memory cells that fail in substantial numbers. However, the chip may be designed so that the cells are tested and interconnected to bypass the bad cells. The necessary good cells are selected, not created by tighter control of the process, which may be too difficult.
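
A minimal sketch of that select-and-bypass idea, with all names and numbers hypothetical:

import random

def select_good(n_needed, n_made, works):
    """Test every fabricated cell; keep only the indices that pass."""
    good = [i for i in range(n_made) if works(i)]
    if len(good) < n_needed:
        raise RuntimeError("yield too low; fabricate more spares")
    return good[:n_needed]

# e.g., a memory-like array needing 1024 good cells from 1200 made, ~95% yield
logical_map = select_good(1024, 1200, works=lambda i: random.random() < 0.95)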

In the case of heat-producing cells, the real problem is probably the ability to continue working over time, and it's possible that this might not be truly soluble. I was thinking more of cells being made that are unreliable in the sense that, if you make a cell, its performance might not be predictable in advance and would be known only after testing. But if you make a lot of these cells, the process is likely to be successful, statistically, a certain percentage of the time. And if those cells are then interconnected and used, with "dead cells" being bypassed, one could make a device of any size from many such small cells.

However, if the problem of sustained operation is the stopping point, then it might be possible to make cells readily -- and automatically -- replaceable. Whether or not this would be worth doing depends on details we don't know yet. Say it turns out that an E-Cat works for a week. Bummer. But if they can be made cheaply enough, one might be able to make a module that consists of a dozen E-Cats, and they are plugged into the control unit, which selects which devices are running. So the thing might run for three months, and then you'd pull it and replace it, the original unit would be reprocessed to rejuvenate it. It might or might not be an advantage to miniaturize the units and use more of them.

Rossi/Defkalion seem to be considering their units as reliable, though. We'll see. I'm suspecting they aren't there yet.

How long they work is a separate issue, and, again, might be addressed with engineering, once the science is understood. Until then, "engineering" is hit-and-miss, and could possibly take a very long time.

The system is ill; it exists, but it is more a source of troubles than a source of knowledge and/or heat.

You can think of it that way, and then have nobody but yourself to blame for your depression. How about: "the system is an opportunity to gain knowledge, explore new realms of possibility, and generate workability for the future"? Yes, work. So is just about anything worth doing.

There never was a Cold Fusion Killer paper. Reviewing all the peer-reviewed literature so far, I've noticed nothing that rises to this level, not even to "wrong." What uncontroversially existed was rather widespread failure to replicate. However, there was not universal failure to replicate, and replications *of a kind* have, in the end, outnumbered the failures as to publication. That is, there are widespread reports of anomalous heat in PdD under some conditions.

Cold Fusion/LENR exists with certainty, is versatile and very diversified, and is supported by good scientists; a Killer Paper cannot be conceived or written.

Probably not. But I was, in another place, suggesting that skeptics attempt to do the Killer Research and write the paper. Indeed, one of my goals in designing my experimental kit was to make cheap verification of neutrons from Project Galileo-type cells (originally designed by Pam Boss of SPAWAR) easily available, because then someone skeptical could cheaply verify the effect, and then demonstrate the artifact with a controlled experiment. I'm not ready for that, but expect to be within a year. (Note that Pam didn't like my use of LR-115 as a detector, and I understand her objection; however, LR-115 is cheaper and easier to develop and interpret, in certain ways, than CR-39, and should still be able to detect neutrons through proton knock-on; that's a standard usage for it.)

Cold Fusion is not dead; however, a good dose of reliability resurrection seems necessary.

No. Improvement in reliability is desirable, but not necessary. I'm sorry to disagree here, but I believe that this point is crucial. We don't need some new discovery to make massive progress, we only need to be willing to work with what we have, with what is already known, and to explore that and quantify it and the associated phenomena. We have no study of the association of tritium, for example, with anomalous heat, helium, and H/D ratio.

Whether it is technologizable as such, or only after conversion to LENR+, is a 2 x 10^16 cents question.

When we have LENR+, whatever that is, we may do things quite differently. Until then ....

I'm not waiting.
