Rossi: QuarkX Commercialization Depends on 5 Sigma

We mentioned recently here that Andrea Rossi is focusing hard on obtaining a 5 Sigma rating for his QuarkX reactor (in physics, 5 sigma corresponds to roughly a 1-in-3.5-million chance that an observed result is a statistical fluke, i.e. about 99.99997% confidence). It seems now that there is a very practical reason for striving to reach this goal: Rossi confirms that it has been set for him by a current partner. Here are some Q&As today on the Journal of Nuclear Physics:

October 19, 2016 at 4:05 PM
Dear Andrea,

Based on your comments here recently I am trying to get a clearer picture of what is going on at Leonardo Corp. currently, and what might be going on in the future.

1. You mention that your goal is to reach 5 sigma. Is this the focus of your work at the moment? AR: yes
2. Is 5 sigma a goal your partner has a requirement for, before a new level of support is provided? AR: yes
3. The demonstration/presentation you talk about: Will this only happen if 5 sigma is reached? AR: yes
4. Do you think reaching 5 sigma will trigger mass production of the QuarkX reactors? AR: yes
5. Is 5 sigma connected also with the low temperature E-Cat plants? AR: no

Thank you very much,

Frank Acland

So it seems from these responses that new funding will be released by the unnamed partner only if 5 sigma is reached. It does not appear that the long-awaited ‘robotic production lines’ are set up yet; they will be contingent on the performance of the QuarkX.

If Rossi’s year-long test is anything to go by, we can imagine that he will now be driven to keep the QuarkX on track to reach the 5 sigma goal (when he doesn’t have to deal with legal matters).

When does Rossi expect this goal to be reached? There was one comment on the JONP in which he responded positively to a question about whether the presentation of the QuarkX might be in February 2017. We’ll have to wait and see about that, of course.

92 Replies to “Rossi: QuarkX Commercialization Depends on 5 Sigma”

  1. How can you have five sigma without a precision manufacturing process which controls all the variables of manufacture precisely?
    It makes no sense to me to pursue five sigma without precision automation.

    You need the automation before you can achieve the sigma. First you must have control of all the variables for the process you are running, with an adequate design and materials; then you must have control over all the variables of the manufacturing in order to produce a consistent product that provides consistent control.

    1. Ophelia,

      I believe your assumption is incorrect as it is incomplete.

      It is my understanding that the idea of the 5-Sigma is a Total Quality Management concept related to the level of control of a repeatable process. With this perspective, there are at least two processes being discussed.

      I think Rossi and the mysterious partner recognize that there is the 5-Sigma performance for the underlying technology of the QuarkX PROTOTYPE’S capability of generating a controllable LENR cold fusion reaction of high COP followed by a second 5-Sigma for the precision manufacturing of COPIES of the prototype.

      1. Wasn’t the ‘mysterious pardner’ (JM of JM Products) in the year-long test with IH actually Rossi’s real estate agent? If so, my money’s on our new ‘mysterious pardner’ being Rossi’s dentist or yoga instructor.

      2. I think you are right based on what we have heard so far.

        The more common buzzword is Six Sigma (capitalized as it is a trademark). It is a manufacturing quality assurance and control strategy for reliability and repeatability (as you mentioned) of a product. Of course, it can be tweaked to any sigma (such as Eight Sigma which was briefly a thing, but far too costly and impractical). Six Sigma is often a bit more restrictive and expensive than what is needed for a product to be properly manufactured within its intended market, and there are other competing strategies.

    2. He’s misusing the term. I believe he’s being measured by meeting MTBUI or MTBF criteria and he has to meet 5 nines on that, meaning that in one year of operation, the conglomerate of e-cats as a whole have to deliver rated power X consistently for all but 5 minutes.

      1. Indeed, this is how it could be. Up to now I haven’t really understood his use of the term, but this sounds like a viable explanation.

      2. Could also do fast temp cycling to evaluate wear rates from thermal shock induced structural fatigue, as that should be one of the big drivers of time to failure. All parts would definitely need to survive that with low defects. Another speedy way to test parts could be to hold units at just below temperature of destruction and see how long they last over time with oxidation and all. Should give some good QA data for MTBF in normal use scenarios, but be faster than a year of testing in normal range. Of course, if really good QA was being done, all of these methods and more would be used.
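The ‘hold units just below the temperature of destruction’ idea above is essentially accelerated life testing. As a rough sketch of how such test data is usually extrapolated (this is the generic Arrhenius model from reliability engineering, not anything Rossi has published; the activation energy and temperatures below are invented for illustration):

```python
import math

def arrhenius_af(t_use_k, t_stress_k, ea_ev):
    """Arrhenius acceleration factor: how much faster thermally
    activated failure mechanisms accrue at a stress temperature
    than at the normal use temperature."""
    k_b = 8.617e-5  # Boltzmann constant in eV/K
    return math.exp((ea_ev / k_b) * (1.0 / t_use_k - 1.0 / t_stress_k))

# Illustrative numbers only: normal operation at 1000 K,
# stress testing at 1200 K, activation energy 0.7 eV.
af = arrhenius_af(1000.0, 1200.0, 0.7)  # ≈ 3.9
```

With an acceleration factor near 4, a year of field wear could be simulated in roughly three months, which is why stress testing can be faster than ‘a year of testing in normal range’.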

  2. What is it going to be after 5 sigma?
    Will it be that the partner requires 2000 K, or 50% electric efficiency, or that the sun sets in the east?
    We will know soon enough I am afraid 😉

    1. Quite the opposite, I hope:
      a reverse auction amongst investors with an increasing appetite for risk, to maximize early advantage.

    2. Does it really matter? Nothing will really change.

      Should you no longer need gasoline, the government will just charge a mileage tax. If people are persistent in getting ahead, the government will just double down.

  3. Since 5 sigma relates to probability (meaning that there is approximately a 1 in 3.5×10^6 chance that an observed effect is not genuine), it doesn’t necessarily indicate how large or small the signal is, just the likelihood that it is real. I think it very unlikely that Rossi has performed 3.5M tests yet, so I suspect that it is just a sciency-sounding buzzword he picked up from someone talking about reproducibility/repeatability of his latest invention.
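For reference, the 1-in-3.5-million figure is just the one-sided tail of a normal distribution beyond 5 standard deviations, which can be checked with the standard error function (a generic statistics calculation, not specific to anything Rossi is measuring):

```python
import math

def gaussian_tail(sigma):
    """One-sided probability of a standard normal variable
    landing beyond `sigma` standard deviations."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

p5 = gaussian_tail(5.0)  # ≈ 2.87e-7
one_in = 1.0 / p5        # ≈ 3.5 million
```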

    1. Rossi made it quite explicit in his past comments what the “Sigma 5” refers to: reliability. It is the probability of a runaway reaction. Still, it makes no sense without specifying a timeframe.

    2. It also depends on how many of the QuarkX’s he is running in parallel. I have never heard of more than three, but maybe there are more.

    3. 5 sigma was prominently mentioned when CERN announced its detection of the Higgs particle. This is a recent “sciency-sounding” use of the term so it could be where Rossi picked it up.

        1. My read on it says the 5 sigma is physics-related, for new discovery validation, and 6 Sigma is the business manufacturing standard based on maximizing profit. The Higgs boson is relevant, but the manufacturing method has been used for a long time.

          Two different animals that I see.

          1. There can really be any sigma level in manufacturing if you follow the Six Sigma template (like the old ‘Three Sigma’, not named as such at the time, or the brief but little-used ‘Eight Sigma’). Six Sigma is just one particular approach, not the only one. Science statistics involving repeated experimental/observational results are a bit different, where sigma is the standard deviation spread; but it seems from Rossi’s response to Pekka that he is referring to manufacturing and not to sigma from statistical N sampling. But he could be misusing it!


  4. I wonder if the device somehow runs in cycles? If so, perhaps it’s a measure of successful cycles.

    For example, if one device had 10-second cycles and only one failure in a year of operation, would it have a sigma close to this?

    Of course if my assumption here about it being related to cycles is right:

    We are talking about 3 or more devices, the cycles may be shorter or longer, and we do not know the failure rate or the periods of time the device is tested or not, so maybe the duration depends on this.

    It would be very interesting to know if he has had no cycle failures so far and is just accumulating the required data to achieve 5 sigma. (Although there is the reported overheating event, maybe.)

    What is the failure rate for 6 Sigma by the way? I’m curious how many devices would be needed to achieve this in a similar duration test.
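On the ‘failure rate for 6 Sigma’ question: in the manufacturing convention, which builds in a 1.5-sigma long-term drift of the process mean, Six Sigma corresponds to about 3.4 defects per million opportunities, and Five Sigma to about 233. A sketch of that conventional arithmetic (the 1.5 shift is the Six Sigma literature’s convention, not a law of nature):

```python
import math

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities at a given 'Sigma level',
    using the conventional 1.5-sigma long-term mean shift and a
    one-sided normal tail."""
    z = sigma_level - shift
    return 0.5 * math.erfc(z / math.sqrt(2.0)) * 1e6

five_sigma_dpmo = dpmo(5.0)  # ≈ 233 defects per million
six_sigma_dpmo = dpmo(6.0)   # ≈ 3.4 defects per million
```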

    1. Interesting point. If the machine runs at a frequency of, say, 1 Hz, then it would take around 40 days to achieve 5 sigma on reliability per cycle. But that would not be very good from an end-user point of view. With no criterion data, 5 Sigma is pretty meaningless. Sounds good though!
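The ~40-day figure corresponds to simply running 1/p ≈ 3.5 million cycles at 1 Hz. Demonstrating the rate with statistical confidence takes longer: with zero failures observed, the ‘rule of three’ bounds the per-cycle failure probability below about 3/n at 95% confidence, so roughly three times as many cycles are needed. A sketch (assuming the 5-sigma tail probability is the target per-cycle failure rate, which is only one possible reading):

```python
import math

P5 = 2.87e-7  # one-sided 5-sigma tail probability

cycles_naive = 1.0 / P5              # ≈ 3.5 million cycles
days_naive = cycles_naive / 86400.0  # ≈ 40 days at 1 Hz

# Zero failures in n cycles bounds the failure probability below
# -ln(0.05)/n ≈ 3/n at 95% confidence ("rule of three").
cycles_95 = -math.log(0.05) / P5
days_95 = cycles_95 / 86400.0        # ≈ 121 days at 1 Hz
```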

      1. Or, if cycles are playing a role, perhaps from a failure rate at the cycle level he could derive a probability of failure on a yearly basis for a multi-Quark system to 5 sigma or so. I suppose that would be more complex, but also more useful, maybe.

        A failed cycle may not, of course, mean a failed device, unless a failure is a burnout by default.

  5. How is the supposed Hotcat plant setup in Sweden going? Wasn’t the Swedish plant supposed to be used for district heating? Sifferkoll please enlighten us…

    1. As far as I have understood, not much happens while Rossi is busy with the lawsuit. I have the impression that what we see in the published court documents is only the tip of the iceberg.

      1. Charlotte
        October 12, 2016 at 3:40 AM
        Dr Andrea Rossi:
        Is it possible that the QuarkX is introduced to the public before the end of the litigation with Cherokee Fund Partners and IH?

        Andrea Rossi
        October 12, 2016 at 5:59 AM
        It is not impossible.
        Warm Regards,

  6. On JONP:

    “Pekka Janhunen October 19, 2016 at 7:02 AM
    Dear Andrea,
    Instead of asking whether sigma five has been reached or not, shouldn’t the question rather be: how much component redundancy is needed to get sigma five, and is such redundancy economical?
    regards, /pekka

    Andrea Rossi October 19, 2016 at 7:07 AM
    Pekka Janhunen:
    I partially do not agree: redundancy alone can raise the costs uselessly. Therefore the answer is more complex: it depends also on the calculation of the damages caused by the delay of the presentation respect the economy scale of the item. By my calculation, it is worth to aim to Sigma 5.
    Warm Regards, A.R.

    Andrea Rossi October 19, 2016 at 2:40 PM
    Pekka Janhunen:
    “delay of the presentation” is referred to the delay caused to wait for Sigma 5 respect starting before utilizing redundancies to compensate minor Sigma.
    Warm Regards, A.R.”

    1. To translate: Rossi wants to make his base components reliable to the 5 sigma level, rather than add in redundancy of lesser-quality parts to reach an overall 5 sigma. Most manufacturing I have seen generally follows the former more than the latter, as the former scales far more economically than having many more parts and more complexity does. It does lengthen the R&D phase significantly, however.

      1. Do you assume, or guess, that by sigma five, Rossi refers specifically to the reliability of a single 20W Quark (over some time interval which we don’t know but which is at most 1 year)?

        1. Assume, though for the product line’s defect rates (defective units per million is the usual metric) over expected lifetime, not just a single unit. For instance, he talks about the difference between waiting to hit 5-sigma reliability versus using redundancy, when it comes to economies of scale. All of that is a manufacturing rather than a scientific consideration.

  7. 1. Why not ask this of Rossi directly?
    2. Since you are “not falling for the carrot anymore”, I assume we will
    cease to see your constantly negative postings in this forum.

    1. They’ve been trying to either grow or print hearts, lungs, or kidneys for over a dozen years and still nothing. What gives?

      They’ve tried to produce hot fusion for over 50 years and tens of billions of dollars, and still haven’t achieved COP=1.

      In the last 10 years, there’s been at least a dozen new battery technology breakthroughs that far surpass current lithium ion batteries. Yet NONE have made it to real world production.

      Perhaps it’s time to quit funding science. It all seems to be a big scam.

  8. Are you being offered something? A carrot is a prize for performance, so what performance were you giving to be rewarded, which you now feel is not worth the cost anymore?

    Anyways, talking about Sigma capitalized like that is generally for manufacturing regarding part defect rates and product reliability (sigma, the word, is not capitalized in science and statistics). Look up “Six Sigma”, which is one such manufacturing strategy Motorola trademarked back around 1993.

    1. Google “5 sigma higgs boson”
      About 246,000 results (0.60 seconds)
      5 sigma applies to experimental results in discoveries made in physics. 6 Sigma is a renowned Manufacturing Standard.

      The dataset for proof needs to be able to exceed the volume of the critics. (I’m not sure I believe in the Higgs Boson yet, though)

      1. That is all about observing the same signal multiple times to be confident it is a real unique particle signal and not multi-particle noise in the detectors (not really applicable to high COPs that are immediately well above noise). The parlance he uses here doesn’t fit that very well. Based on all that we currently hear and the use of the terminology, I believe it is definitely manufacturing; after all, this whole partner business has been for manufacturing, which he is now outsourcing instead of keeping in house (so that makes him subject to the manufacturer’s rules).

        I could always be wrong with the next reveal, of course! And your point is quite reasonable. But the answer to Pekka really solidifies for me that this is related to manufacturing defects/reliability.

        1. And YET, Rossi answered back to one person that they were collecting millions of data points when referring to 5 sigma.

          Did they slam particles together 3.5M times to claim 5 sigma for the Higgs? I don’t think so. But this leaves us not knowing what criteria they used to obtain 5 sigma.

          As I pointed out to Frank 1 time and he agreed, with some questions, you need to be very specific with no wiggle room or chance of misunderstanding, because if you aren’t, Rossi’s answer may leave you dazed and confused.

          1. As I pointed out to Frank 1 time and he agreed, with some questions, you need to be very specific with no wiggle room or chance of misunderstanding, because if you aren’t, Rossi’s answer may leave you dazed and confused.

            Haha, oh man, truer words!

            But yes, in regard to CERN, one doesn’t need to do 3.5 million collisions. That isn’t what the sigma and probability refer to. You could do an infinite number of collisions and be within 1 sigma of the null.

            The p value (<0.0000003 for five sigma) is calculated from how different the experimental values are from the mean of the null (or, put another way, the likelihood that the experimental results could have come from randomly sampling the distribution of the null's values). The magnitude of the differences between null and experiment, as well as the width of the variance and the number of data points, all affect the p value. The bigger the difference and the lower the spread, the smaller the p value; with tight enough data and a big enough difference, you could pass the five-sigma threshold with an N of only three collisions.

            I am not versed in how CERN itself evaluates the data or what their data looks like compared to null, but I believe they only needed a dozen or so collisions (N) to get such a low p value.

  9. I really don’t think what you said here is accurate. Can you produce direct evidence showing the bar was higher before the “Rossi chapter” and then that it has been lowered and in what way during that chapter? And what would lowering the bar functionally mean in your argument in this case, if there is no way to act upon it except by opinion?

    Characterizations of “pathological science” seem to be mostly aimed not at credulity, but from people who don’t believe the field is possible a priori no matter the data, and feel threatened for whatever reasons when anyone explores the issue or suggests it is worth experimentation. Of course, my main basis for saying that is from reading Wikipedia editing conversations and talk backs on the articles (I haven’t actually contributed to Wikipedia though, just watched), as I usually haven’t seen that term used in the wild. So perhaps the field has not been fighting off that term in reality anymore, which makes sense from the perspective of the large list of institutions experimenting within the field.

  10. Excuse me, but horse puckey. The antagonists to LENR have violated findings that DO meet “regular-science standards of replicability” since the days of Fleischmann and Pons. And they continue to do precisely that.

    And “Bob” is what is called a “concern troll” in the weblog world. All his postings are antagonistic to LENR, but couched in “let’s be cautious” phraseology. “Concern trolls” like Bob never comment positively on ANY aspect of LENR (of which there are many here that are NOT Rossi-related in any way).

    1. Your attitude about Bob’s contribution strikes me as unfortunate. It is the sort of thing that holds a field back.

      A field moves forward when successes are recognized as successes and failures are recognized as failures. There is nothing wrong with failure as long as you learn from it. It seems to me that because of the unfortunate early experiences of LENR / Cold Fusion, the amateur supporters of LENR are now super sensitive and scared stiff of failure.

      A more open-minded attitude would help. Let the failures fail and let the successes succeed without prejudging the issue.

      1. And what, precisely, will experiments in biological transmutation yield in understanding how to get large amounts of usable power out of an LENR reactor??

        Not a damned thing, that’s what.

        My objection is that time spent doing experiments in the biological area is diverting scarce time and scarce resources from finding the answers to the RIGHT PROBLEM currently facing the human race, which is clean energy. Disposal of radioactive waste isn’t even on the chart, by comparison to that need.

  11. Karol
    October 19, 2016 at 8:39 PM
    Dr Andrea Rossi:
    Are you already started the construction of a 1 MW plant made with 50 000 QuarkX reted 20 W each?

    Andrea Rossi
    October 20, 2016 at 1:03 PM
    Warm Regards,

  12. Bill Conley
    October 20, 2016 at 5:54 PM
    Dear Andrea Rossi,

    Regarding reaching Sigma-5 (S-5) for the Quark-X (and all the good things that it will trigger) would you say that you are now: 1) Actively making design changes to achieve S-5, 2) Mostly conducting sufficient test cycles to accumulate the volume of data required to achieve S-5, or 3) Both 1 & 2.

    Best wishes on your continued progress in this essential work.


    Andrea Rossi
    October 20, 2016 at 9:03 PM
    Bill Conley:
    Warm Regards,

    October 20, 2016 at 7:15 PM
    Dear Andrea,

    It appears to me that a lot of your followers are interpreting the 5 Sigma criterion as “There is a 1 in 3.5 million chance that the QuarkX does not work as claimed” or alternatively “There is a 1 in 3.5 million chance that a given QuarkX is defective”.

    However I think the correct interpretation is: “Assuming the QuarkX does not work, there is a 1 in 3.5 million chance that we would be getting the results we have for the QuarkX”.

    1) Is that the correct interpretation of what you are trying to achieve?

    2) Will a higher COP improve your Sigma? (I think yes)

    3) Will more QuarkX run time improve your Sigma? (I think yes)

    4) Is ash analysis a part of Sigma calculation?

    5) I assume that at this point you might not be trying to increase COP too much. Therefore the only way I see that you could get to Sigma 5 is to run more QuarkX for a longer period of time. Is that what you’re going for?


    Andrea Rossi
    October 20, 2016 at 9:02 PM
    1- Sorry, I do not understand your question
    2- No
    3- Yes
    4- No
    5- Yes
    Warm Regards,

  13. I don’t think he would talk about redundancy versus sigma for economies of scale, as he does to Pekka, if that was the case. Unless he is using it in both ways interchangeably.

    But scientifically, a p value outside five standard deviations is easy to hit for large differences; even for an N of 3, with a 1.77× difference in means between control and experimental and a coefficient of variation of 3% and 3.4% respectively, the p value is past the 4-sigma mark by t-test (this example comes from real data, but in a very different field).

    Edit: The newest reply posted by Sam further supports the idea that manufacturing reliability is the topic.
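The t-statistic behind the example two paragraphs up is easy to reproduce. A sketch with the stated figures (means differing 1.77×, CVs of 3% and 3.4%, N = 3 per group; the base value of 100 is arbitrary, and whether this clears a given sigma threshold depends on the degrees of freedom and the exact test used):

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic for two independent samples with
    unequal variances."""
    return (m2 - m1) / math.sqrt(s1**2 / n1 + s2**2 / n2)

# Control: mean 100, CV 3%.  Experimental: mean 177, CV 3.4%.
t = welch_t(100.0, 100.0 * 0.03, 3,
            177.0, 177.0 * 0.034, 3)  # ≈ 19.8
```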

  14. Finally we know what is happening:

    Andrea Rossi
    October 21, 2016 at 10:30 PM
    Frank Acland:
    Since March 2016, when the first prototype has been made, we produced several tens of them, most of them burnt before having resistent prototypes. Presently we are testing three of them, but one is winning the competition.
    Now we focus on the winner to get the Sigma 5.
    Warm Regards,

    My interpretation:
    Since March AR has been looking for a more resilient type of QuarkX. Now he seems to have 3 with (slightly?) different properties in test, and hopes that the best one will reach the 5 Sigma performance.

    1. Actually, over time Rossi has told us this all along. Just recall that they’ve been developing new materials to deal with long-term high temperatures. Note: I will assume ‘new materials’ just means new formulations/recipes of existing materials.

      1. According to another answer to Frank, the running QuarkXs have different specifications.
        To a question of mine, he answered that he will continue testing even if 5 Sigma is reached, but start production with the QuarkX recipe which has the best mix of durability and performance.

    2. I have held for a while now that a most convincing demonstration of the reality of LENR would consist of watching one go into runaway (heat after death) mode. It should be replicable if we are to believe the accounts of people who complain that they can’t keep their systems stable and that runaways predominate. The calorimetry could be made simple since there would be no input power. And there should be lots of radiation to measure and chances to examine the ash. What’s not to like.

      All of LENR seems to be fixated on obtaining a steady-state controlled reaction when its very existence is still in doubt. Why not carve off a bit of attention for runaways for a while and still the voices of the dissenters who worry that a whole field is being erected on not much at all? If by “burnt” Rossi means a meltdown due to a runaway Rossi reaction, then it sounds like he has a highly replicable system for demonstrating his effect (tens of runaways vs. 3 stable systems). Why not demonstrate it? Why not show off a video of a runaway? Why not take a runaway system with input power disconnected and put it in a bucket of water so as to measure the temperature increase? Simple, simple, simple, if Rossi has what he says.

      1. I agree with you Bruce. I have been trying to convince AR to openly demonstrate his E-Cats and QuarkX. For him things are different. He has put all his money into this and wants to see a return on investment. He says that the best demonstration that his E-Cat works is satisfied customers, and he doesn’t want to do more (time-consuming) demonstrations, unfortunately. AR does not want to be famous alone, but also rich, it seems. There is an issue with patenting the principle of the Rossi effect, because that belongs to Piantelli, so he can only patent the method. He seems to have found the real parameters to trigger LENR and to keep it running in self-sustaining mode and at large (atomic) scale, so usable quantities of heat can be generated. I believe that IH also knows how to do it, but because they aren’t buddies anymore, AR is now trying to develop the QuarkX, which is far more sophisticated and desirable. The know-how of IH will become obsolete when AR manages to bring the QuarkX to market. AR has said he would demonstrate the QuarkX if 5 Sigma is reached, so maybe we will see something in the coming time?

        1. And on the issue of ‘massive’ robotised lines, he doesn’t need hundreds of robots but at most 10, as these assembly lines are scalable.
          But he has to have everything designed for scalability and contingencies. I have a question: has Rossi (and Hydrafusion) got implicit approval to build these ‘catalyzers’ in Sweden?
          As you can see, this may mean he must also have ‘distributed’ production facilities, and possibly assemble the final product in the markets where they are sold.
          So you see, Andrea has not only had to develop the greatest invention of all time, he still has monumental hurdles ahead, and then finally legislative and governmental ones.
          Contingency planning can alleviate some of these major ‘unforeseen’ obstacles.

          1. As long as there is no accepted theory, it is difficult to prove that LENR is safe. Obviously multiple instruments to measure radiation are used and they do not show any harmful radiation, but is that good enough? What if it breaks or runs away? Those who want to oppose LENR will use the ‘fear weapon’ and say it produces ‘unknown radiation’ or whatever, and this can delay the introduction dramatically. I guess for that reason AR is focusing on an industrial introduction first. I do not know if ‘Sweden’ would allow such a factory without demanding safety and clarity in its functioning.

          2. “As long as there is no accepted theory it is difficult to prove that LENR is safe”

            Theory has precisely nothing to do with the determination of safety. The entire Industrial Revolution took place without any theory at all….just empirical engineering.

          3. Not in the nuclear sector. Theoretical models with exact and undoubted details are used to prove that something cannot become critical and that critical processes can be operated safely. LENR has a similar energy/power density to a fission process, and therefore theoretical know-how is required to prove that something is safe. Simple tests (even 5 or 6 sigma tests) are not good enough to guarantee a safe process.

          4. LOL. No nuclear product bases its safety aspects on theoretical calculations. EVERY piece of nuclear tech (including nuclear weapons) ultimately relies on physical tests. There have been multiple test reactors of every type of reactor built and operated for long periods of time before any commercial use was allowed. Theory is not and never will be the ultimate proof of safety.

    3. Gerard McEk
      October 22, 2016 at 3:11 AM
      Dear Andrea,
      Your latest response to a question of Frank Ackland triggered a few other questions:
      Assume all 3 QuarkX’s will continue and reach the 5 Sigma requirements, would you:
      1. Stop testing these and autopsy them and
      A. choose the version with the best (mix of performance and durability) for starting production
      B. Chose the version with the best performance (COP) for production
      C. Chose the version with the best durability for production
      2. Continue testing, but start production on either (A. B. C. Question 1)

      If things go well, how long do you estimate it will take before 5 sigma is reached?

      Thank you for answering our questions. I keep my fingers crossed for a successful 5 Sigma test.
      Kind regards, Gerard

      Andrea Rossi
      October 22, 2016 at 9:46 AM
      Gerard McEk:
      Warm Regards,

      1. Sebastian
        October 22, 2016 at 10:36 AM
        Dear Andrea,

        It is still unclear to me what you are testing for when aiming for 5 Sigma.

        5 Sigma implies that a specific event will occur with a very high probability.

        1) Is it that out of 3.5 million QuarkX produced, only 1 will be expected to be defective?

        2) How do you calculate that probablity if you only work with 3 prototypes?

        Thanks for clarifying

        Andrea Rossi
        October 22, 2016 at 11:33 AM
        No, it does not work that way: the probabilities are related to events, not to items.
        Warm Regards,

  15. Since AR hasn’t specified what this 5-Sigma means in this case, it can mean whatever he wants it to mean, no point in second guessing it.

    1. So the Lugano tests and the year-long test with IH have no value? Another hurdle must be overcome. Where are all the 50+ patents? They can’t hang in limbo forever.
      Building a robotised factory to produce a gazillion Quarks won’t be needed if this pace continues. One can’t install or retrofit enough sites quickly enough to use all the Quarks/Hotcats/E-Cats that are manufactured. Know of anybody certified in designing/plumbing/wiring/constructing any LENR-powered device of any size? Manufacturing mass quantities is one tenth of the battle.

  16. /* Is 5 sigma a goal your partner has a requirement for, before a new level of support is provided? */

    How did you come up with such a question? And I’m not even asking how the 5 sigma is defined / supposed to be validated. For example, the yield of microprocessors in the semiconductor industry is often just a few percent; nevertheless their sales still run well.

    1. I asked because I was wondering whether Rossi’s partner had said something like: “We’ll invest x million dollars for manufacturing your QuarkX reactors, but only if you can show a 5 sigma level of certainty that this reaction is not an accident.”

      1. This is important because 5 sigma in repeatability means a business case can be made even for solid but low-percentage yields. This is how the microprocessor industry works.

    2. Speaking of semiconductors, I ran into an interesting discussion on microprocessor yields and cost. Similar considerations would likely apply to Quark manufacturing; it could be possible that lower-quality Quarks are binned into lower-powered (e.g. lower heat and stress) parts to increase profit yields while maintaining acceptably low end-user failure rates.
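The binning idea can be made concrete with a toy model (every number below is invented for illustration; none of it is QuarkX or semiconductor data):

```python
# Fraction of each batch falling into each quality bin, and a
# hypothetical sale price per unit for that bin.
bins = {
    "full-power": (0.60, 120.0),
    "derated":    (0.25, 70.0),   # sold as a lower-powered part
    "scrap":      (0.15, 0.0),
}
units_per_batch = 1000

revenue_binned = units_per_batch * sum(f * p for f, p in bins.values())
revenue_no_binning = units_per_batch * 0.60 * 120.0  # derated units scrapped
```

In this toy case binning lifts revenue per batch from 72,000 to 89,500, which is the economic logic behind selling imperfect parts at a lower rating.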

  17. WaltC

    October 22, 2016 at 9:33 PM

    Dear Andrea,

    I’m used to seeing reliability information given as Availability, Mean-time-to-fail (MTTF) and Mean-time-to-repair (MTTR). {Where Availability = MTTF/(MTTF+MTTR) }

    For example, a high availability, “five nines” system, might have a MTTF of 20 years, MTTR of 1.5 hours and an availability of 99.999%. {Similarly, a light bulb that has a MTTF of 2 years and a MTTR of 9 minutes would also have a 99.999% availability.}

    Question: Do you think that once you achieve 5 sigma, that you’ll also have the information to calculate the Availability, or possibly MTTF of your QuarkX devices? (Probably not yet the entire system, but the devices themselves.)

    Andrea Rossi
    October 23, 2016 at 3:43 PM

    I think so. F8.
    Warm Regards,
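WaltC's availability formula is straightforward to apply; a minimal sketch reproducing his two ‘five nines’ examples (using an average 8,766-hour year):

```python
HOURS_PER_YEAR = 8766.0  # 365.25 days

def availability(mttf_hours, mttr_hours):
    """Steady-state availability = MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

server = availability(20 * HOURS_PER_YEAR, 1.5)      # MTTF 20 yr, MTTR 1.5 h
bulb = availability(2 * HOURS_PER_YEAR, 9.0 / 60.0)  # MTTF 2 yr, MTTR 9 min
```

Both work out to just over 99.999%, matching the comment: very different MTTFs can yield the same availability when MTTR scales accordingly.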

    Frank Acland

    October 22, 2016 at 10:18 PM

    Dear Andrea,

    The ‘events’ you speak of in connection with obtaining the 5 sigma goal:

    1) Are they distinct occurrences that take place within a single QuarkX unit?
    2) If so, are you able to measure each occurrence separately?
    3) So far have you measured millions of data points?
    4) If you are can measure each occurrence separately, is that the data you need to evaluate whether 5 sigma level has been obtained?
    5) Do you anticipate the time required to reach the 5 sigma level (if successful) to be in days, weeks, months or years?

    Many thanks,

    Frank Acland

    Andrea Rossi

    October 23, 2016 at 3:38 PM

    Frank Acland:

    1- yes
    2- yes
    3- yes
    4- please rephrase, I do not understand exactly
    5- months

    Warm Regards,


    1. Frank Acland
      October 23, 2016 at 4:38 PM
      Dear Andrea,

      Sorry for the confusion in question 4.

      I meant to ask whether, over time, each event measurement will lead you to a conclusion about whether 5 sigma has been reached or not. From your other answers it sounds like the answer is yes.

      Could you know about 5 sigma in early 2017?

      Thank you,

      Frank Acland

      Andrea Rossi
      October 23, 2016 at 6:28 PM
      Frank Acland:
      Thank you for the rephrasing, now I understand.
      The answer is yes.
      Warm Regards,

  18. I’ve got a question: how many of you here think that the price of products, and even some services, is going to come down in the coming decade or two because of technological advances? And if so, by how much? Thanks!

    1. I suppose the assumption here is that LENR will prove wildly successful. If so, it is not at all clear to me that it would in the “short” run bring prices down. It would be easy to make the assumption that it would since the price of energy would at some point be down, but there might be first a mighty stimulus involved in the changes in energy infrastructure. There might be a “benign” inflation with prices rising but wages rising faster.

    1. Why?? He has done precisely that….multiple times. And each attempt has caused a dung-nado of FUD from the pathological skeptics.

      1. We were promised a presentation of the Quark in 2016; now it’s been pushed back a couple of months. The subject we are discussing here is the Quark, not his other devices. There has not been a presentation of the Quark, unless you count that out-of-focus photo we were shown earlier this year. Mills may not have a finished product yet, but we have seen several videos of his device over the years. It’s just a shame we can’t get a short video of the Quark in action before the proper presentation in early 2017.

        1. LOL….anyone with experience in real research knows that there is only one unchangeable rule…. “deadlines” (even suggested ones) are subject to change.

          I’m a LOT more interested in what is happening with the E-Cat and “Hot-Cat”. From a practical product standpoint, both of those fill needed commercial spaces. The QuarkX is a research curiosity.

          1. You can laugh if you like if you feel that Rossi has wasted the last 8 months. It would seem that Rossi is pinning his hopes on the quark as the device that finally makes the breakthrough for him.

          2. I’m laughing at the CONSTANT speculation about business matters for which we have zero basis for any sort of commentary. WE DO NOT KNOW what Leonardo Corporation is doing with the already-developed technology.

            Rossi is a research guy… research is what he does. He has (rightfully) turned the business details over to others.

          3. I never speculated anything. I merely commented that it would be nice to see a work in progress video from Rossi. I have followed this as many others have since 2011. I know that Rossi doesn’t do videos. He could give us a bit more than a fuzzy photo to keep us going seeing as he has delayed the presentation by a few more months.

          4. “He could give us a bit more than a fuzzy photo to keep us going, seeing as he has delayed the presentation by a few more months.”

            See my original comment about “dung-nado”. Why should he, when all he gets from doing so is grief?? And your speculation was “why doesn’t he ………”.

  19. Yeah….too many “Bobs” on this forum. This was confused with a different thread with Bob Greenyer. But to address your point:

    “A more open-minded attitude would help. Let the failures fail and let the successes succeed without prejudging the issue.”

    Tell that to the pathological skeptics, who prejudge EVERY “issue” in the negative, no matter how tenuous (or nonexistent) their logic/proof. After having been exposed to their scripts ever since Fleischmann/Pons, I have pretty much learned to identify the different species by the different tactics they use.

  20. No matter what Rossi did/does or does not do will change not one thing about how the skeptics treat the issue(s). I’m glad that you at least acknowledge the hopelessness of THAT issue.

    And I disagree with your assertion that he “has committed self-inflicted wounds with every one of his demos”. There is at least one demo that has never been debunked in any way, and that is the impromptu “first long run” done by Levi, after Levi raised the “non-dry steam” possibility following the very first “multi-witness” demo. They “jury-rigged” the system to run at a much higher water input flow rate so that there was never any steam in the system. Since that run was done “off the cuff”, Rossi had no opportunity to “jigger” it.

    It is his research, and he runs it as he wishes, and the rest of us just have to accept that and live with it. His choices and his consequences.

    For all we know, Leonardo Corp may be selling low-temp E-cats right and left. And they might not. We simply don’t have enough information one way or another.

    1. Fair enough Warthog, but I think it is silly to say that because there will always be some hard core skeptics it is useless for Rossi to do additional tests with better protocols including his complete absence from the scene.

      I frankly had forgotten the Levi test you reference and it was impressive w/o the dry-steam issue raising its head. Unfortunately more than a few observers do not regard Levi as a dispassionate third-party tester. I think Rossi would win many more converts (but never all) with an MFMP conducted black-box test than with any conducted by Levi.

      1. “Unfortunately more than a few observers do not regard Levi as a dispassionate third-party tester.”

        A speculation with precisely zero to back it up, other than the natural nastiness of the pathological skeptics, and their ground assumption that all LENR is bunkum.

  21. Read the history section here. Replication after replication after replication, by workers held to be far more legitimate than Rossi, and nothing has changed since the immediate aftermath of Pons and Fleischmann. Rossi will be no different UNLESS he produces a working commercial device. No black-box test is going to change that.

  22. And what is the date of the video?? Before or after the first round of tests??? Or did you even bother to check?? Same old FUD…only in this case rumor and innuendo about people instead of speculation and assumptions about technical topics.

  23. I’ve studied in some depth ALL of the various tests, and read the negative comments. They are ALL based on speculation. The basic logic used is “this error might have happened”, which then warps into “therefore that error must have happened”. This has been done for EVERY TEST RUN. I simply see no possibility that another test, no matter who runs it or how it gets run, will be any different or have a different effect.

    I don’t give a damn about Rossi one way or another. I think he has done a lot of stupid things, and I think his tech is crude in the extreme. My bet is that one or another of the competitive efforts will come to market (or proven reality) sooner, and that Rossi will be a historical footnote.

  24. Yes, the reference is to non-Rossi replications. And according to science, even a replication of a 1.1 COP “should” be sufficient to prove the “science case”. But it hasn’t been.

  25. “Still take issue with your logic that if ANY skeptics take issue with a test no matter the result (COP 50+ for example) or how well planned/executed or by whom that it cannot be of net benefit to Rossi and create interest in and support for LENR in general.”

    The key word is not “ANY”, but “enough”. The pathological/pseudo-skeptics have successfully generated a high enough level of FUD that LENR has not been able to break out of the science ghetto it has been assigned to. The one thing that might happen to change that as regards Rossi is that he wins in court.

    I frankly cannot understand how my “fellow” scientists can totally ignore the Toyota-Mitsubishi transmutation work, yet they do. I mean, scientifically, how much more proof does it take… both reputable (world-class, even) labs, reputable scientists, proven replication, peer-reviewed publication… yet nothing has changed.

Leave a Reply

Your email address will not be published. Required fields are marked *