New MFMP Glowstick Test Underway (Update: Fueled Test Started on Jan 30th)

Alan Goldwater and Mark Jurich of the Martin Fleischmann Memorial Project have started up a new Glowstick 5 test, which is running live from Santa Cruz, California.

UPDATE (Jan 30, 2016): Alan Goldwater has informed me that a live fueled test has just begun, and can be followed at the links below.

The main difference between this test and the previous Glowstick 5 test is in the pre-treatment of the nickel. According to this document describing the experiment, here is how the nickel will be treated:

Pre-baking the Ni powder at 200ºC for 1 hour, cooling, then baking for another hour.

Heating to 115ºC under vacuum for several hours, to de-gas the contents.

Heating with H2 to reduce oxides and potentially ‘load’ some H2 into the cracks and crannies created by the pre-baking.

The experiment can be followed live at this link (including chat with the experimenters): http://magicsound.us/MFMP/video/

  • Sanjeev

    Last 12 hours of the destruction test.
    Looks like a lot of noise in the Active side TC. Whatever excess was seen in the fueled run could be just noise.

  • Sanjeev

    I suggest increasing the fuel quantity to 10 g for future experiments.
    Probably 1 g is too tiny to produce any detectable signal.

    • Mats002

      Good idea. MY (yes, that MY) proposed in a serious post to MFMP about the Celani wire experiment to increase the number of wires to get a higher (or not) signal. It is not easy to do that because Celani wires are a scarce and expensive thing. In the GS experiments, though, it is not so hard to increase the ratio between the ‘active’ substances and the apparatus bulk overhead.

      • Sanjeev

        I think it's too simplistic to assume that if 1 g can cause an excess of 10°C, 10 g could cause 100°C, but I expect at least some improvement (if there is any excess at all).
        Second thing is to try RF pulses with high temperatures in the same GS.

        About Celani wires, I never understood his reluctance to supply more wires, or to even use more of them in his own experiments. These are not expensive compared to the cost of equipment (probably a few cents per wire). With such a tiny amount of mass, it's like finding a needle in a haystack, a waste of valuable time. But it's Celani's IP, so I have no right to ask for anything.

        • Mats002

          That is precisely why open science as per MFMP is the only way to nail down this anomaly, whatever the outcome will be.

    • US_Citizen71

      I think before moving on to a new design, and all the complications that can come with that, it would be a good idea to do a run with water calorimetry. The current design is robust, and there is likely enough data to repeat the results of the last run with the same anomalies. The design could easily be slid into a steel pipe running through a container of water. Forming a ring out of castable alumina, like that used on the coil at either end of the GS, could keep it centered in the pipe. The container could be anything from a large metal pot to a metal trashcan, with the steel pipe running horizontally through the sides a short distance from the bottom of the container. Seal the junction between the pipe and the container with something like JB Weld, add insulation and a float valve with a line running to an external graduated tank to keep the water level stable, and you have a calorimeter. Then it is just a matter of doing two identical runs for time and power, one fueled and one not.

      • Sanjeev

        I guess MFMP is waiting for something significant to show up before going to calorimetry.
        But I agree, there is no harm in trying a simple Parkhomov type calorimetry in parallel. Bob Higgins is building one for this purpose, I have no idea how long it will take.

        • US_Citizen71

          In my personal opinion, the last test run showed enough of a significant anomaly to warrant verifying the thermal output with more sensitive means, but it is not my project. I’d even be willing to donate funds toward building what I described above. Any willingness from MFMP to go that route?

          • Bob Greenyer

            I am actually quite keen to do it now. Let’s see if the TCs come back as equivalent to each other.

            • US_Citizen71

              Sounds like a plan, when does Alan think he will be able to test the TCs?

            • US_Citizen71

              I think it is a good idea to move forward with an easy-to-build calorimeter to further your project’s goals. The design of your test reactors is not going to deviate greatly from your current GS series for a while, as far as I can tell, so I believe it is time to start collecting calorimetry data, as it will be easier for the general public to understand. The difference between putting in 25 kWh of electricity and evaporating X liters of water on a null run, and putting in 25 kWh of electricity and evaporating X + Y liters of water on a fueled run, is easy to understand.

              Also, the calorimeter would provide a nice stable thermal environment, which should help reduce noise in the temperature measurements. It should completely eliminate the swings caused by air currents from opening doors and people walking around. It will likely allow you to go to higher temperatures too, since it should be better insulated. I sent you a bit of support to help you get there.

              One suggestion: making the body out of something standardized, like the pails below, would help reproducibility.

              http://www.bayteccontainers.com/3-gallon-standard-5-gallon-open-head-steel-pails-covers-.html#gsc.tab=0

              • Bob Greenyer

                Thanks US – and you make good points

  • Sanjeev

    dT plotted for the last run in vacuum and Ar (if that’s correct).
    It’s somewhat lower than the run in H2, broadly speaking.
    Attached.

    • Ged

      Excellent, this run is a very helpful calibration; and a very nice graph :D.

      • Bob Greenyer

        It is a nice graph – it is a shame we could not remove the fuel cell – since we effectively cannot do a ‘no fuel’ back end calibration.

        • Ged

          Aye, that would be perfect. Particularly as, if one approaches this with the assumption that an anomalous heat effect was successfully sparked, we don’t know what it takes to stop it for certain, thus potentially contaminating the null and book-end calibrations. There may yet be ways to test that in the data we have and tease it apart. We’ll see.

  • Bob Greenyer

    From Alan

    “Each side of the heater coil measures 4.8 ohms, and 9.6 ohms end-to-end. These are after subtracting the meter lead resistance of 0.4 ohms. So no difference within the accuracy of the meter (±0.1 ohm)”

    So we know then that the difference in the temperatures was not down to fixed resistance (from imbalance at start) or changing resistance over time (from progressive degradation) leading to different power dissipation on either side.

    It could be due to different thermal conductivity of the surrounding apparatus or ‘active’ vs null fuel load and/or insulative/radiative differences in the two wire sheaths/TC covers.

    My next test though would be of the TCs to see if they have disproportionately changed with respect to each other in the run.

    To do this, I would split the cell in two and then immerse the two halves in a

    1. freezer / ice
    2. boiling water
    3. close together in a chip pan

    and try to determine if they are both reading very close temperatures across these three temperature points – the reason, obviously, would be to determine whether the TCs are reading different temperatures. If they are the same – this will be important. I would also suggest swapping the DAQ over in each test to ensure that there is no influence or bias from that element in the reading.
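    The three-point comparison above amounts to a simple agreement check; here is a minimal sketch (the readings and the 0.5 C tolerance are made-up illustrative values, not MFMP data):

```python
# Hypothetical paired TC readings (TC_A, TC_B) in C at three reference
# points -- ice bath, boiling water, and a hot chip pan. Illustrative only.
readings = {
    "ice": (0.2, 0.1),
    "boiling": (99.8, 100.1),
    "chip pan": (185.4, 185.1),
}

TOLERANCE = 0.5  # assumed maximum acceptable disagreement in C

for point, (tc_a, tc_b) in readings.items():
    delta = abs(tc_a - tc_b)
    status = "OK" if delta <= TOLERANCE else "MISMATCH"
    print(f"{point}: delta = {delta:.1f} C -> {status}")
```

    If any point shows a mismatch, and swapping the DAQ channels does not move it, the TC pair itself is the suspect.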

    • Sanjeev

      That’s good news.
      However, there is a big lead brick on the side of the active (assuming the left one is active); it can reflect more heat and cause a higher temperature. This can be easily fixed.

      • Bob Greenyer

        Should act the same in calibrations.

        • Sanjeev

          As far as I recall, the temperature during calibration never reached 1100°C (External). Or did it?

          • Bob Greenyer

            You recall right.

      • Ged

        During the oscillation part of the run, the same temperature was reached multiple times, but the active slowly grew hotter each time the same temperature was reached. If the higher active reading were a physical aspect of reflection or the materials of that side of the device, it should be a constant effect once a temperature is reached, rather than something that grows over time. We can definitely rule that out.

        • Sanjeev

          It’s strange that it happened that way. Stranger still, the Active–Null offset has disappeared during (and after) the last run, for all temperatures, high or low.

          • Ged

            I’m still crunching the data in my spare time, so we’ll see if it holds any more secrets.

  • Bob Greenyer

    You can test my logic here using a simple Kirchhoff’s law circuit simulator – free to try in Chrome and on smartphones.

    http://everycircuit.com/

    Essentially, if you start with the same resistance in a serial circuit – you will have the same power dissipated through both resistors, the current will naturally be the same as they are in series and the voltage drop across the two will be the same. So – for two coils at 5 ohms each – and 120V applied – the current will be 12A and the voltage drop 60V – resulting in 720W per side.

    Now the resistance can go up in one of two ways

    1. Wire degrades via oxidation and stress resulting in loss of conductor width – this is largely driven by temperature

    2. Wire is hotter

    So what happens if one side is hotter?

    Firstly, the resistance will increase – but only a little and only when the wire is hotter. The rate of wire degradation (and therefore permanent resistance increase) will accelerate.

    If, say, the ‘active’ side’s resistance went from 5 ohms to 5.5 ohms and the passive side’s went from 5 ohms to 5.2 ohms, then the ‘active’ side would now dissipate 11.2A x 61.7V = 691.04W and the passive side 11.2A x 58.3V = 652.96W – a difference of about 38W.
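    The series-circuit arithmetic above can be sketched in a few lines of Python (a minimal illustration of the reasoning; with exact rather than rounded current, the skewed-case difference comes out near 37.7 W):

```python
def series_split(r_active, r_passive, v_supply):
    """Power dissipated by each resistor of a two-resistor series circuit."""
    current = v_supply / (r_active + r_passive)  # same current through both
    return current ** 2 * r_active, current ** 2 * r_passive

# Balanced case: 5 ohms per side at 120 V -> 720 W per side
print(series_split(5.0, 5.0, 120.0))  # -> (720.0, 720.0)

# Skewed case from the comment above: 5.5 ohms vs 5.2 ohms
p_active, p_passive = series_split(5.5, 5.2, 120.0)
print(round(p_active, 1), round(p_passive, 1), round(p_active - p_passive, 1))
```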

    Now, what we saw during the run is the two sides starting off with the ‘active’ running a good 24ºC cooler than the passive. This could be partially due to some pre-existing skew of resistance to the passive side or differing levels of insulation on the coil or TC.

    However as the run progressed – the spread between the two sides closed and indeed crossed – being more pronounced at higher temperatures. Moreover, the passive side temperature dropped as well as the ‘active’ side rising.

    This leads me to suspect that the ‘active’ side’s resistance was progressively higher than the passive’s – an effect that would have a positive feedback, resulting in more actual power being delivered to that side for the same TOTAL POWER, which is what we were fixing in the experiment. As said above, either higher temperatures or wire degradation (its rate driven by temperature) would increase the resistance of a side.

    So, if both sides have the same resistance, or the passive side’s is less than the ‘active’ side’s, and the wires appear similarly degraded, then there may well have been progressively larger excess heat, as less power would have been delivered to the ‘active’ side while the insulation did not change. If the ‘active’ side’s wire has significantly higher resistance now, and it looks like it has visually degraded more, then this may show that it was exposed to more heat in a self-feeding loop (though this may be due to higher insulation / less dissipation on that side overall).

    I don’t know at this stage –

    1. the state of each side’s wires now

    2. the starting resistance of each side

    3. the current resistance of each side.

    My suspicion, looking at the trend in the data by eye, is that the ‘active’ side’s resistance AND heat output were increasing during the run – having answers to Q2 and Q3 above would help determine how much change in the differential was coming from extra heat dissipation. Therefore – we must monitor the centre voltage in a run.

    Using a calorimeter or induction heating would not suffer from these problems – though induction heating could not hold the two sides at the same temperature if the receivers of the induction heating had different properties.

    • Mats002

      Agree, and I want to add non-electrical types of degradation too: the wire expands at higher temperatures, and that adds stress to the cement out from the GS body. This might cause a tiny gap between the wire and the cement on one side, changing the thermal conduction properties. Even worse, I think, if a TC is near to a conduction gap point. I don’t know how much this type of degradation can change temperature readings. Would be nice to learn.

      • Bob Greenyer

        About the conduction gap – Alan did some work on this before.

        We need to rule out as many alternative explanations as possible – the backend calibration being conducted now will help.

    • Andreas Moraitis

      It was to be expected that the dual GlowStick would not allow very precise measurements. But I think it is anyway good enough to determine under which conditions you can get a distinctive anomaly, let’s say with an apparent COP >=1.5. Precision calorimetry could be done later. The main problem in this setup is that you need to find a recipe that guarantees a high enough COP.

      • Mats002

        Unfortunately, a high COP the first time did not happen. So now we are in a situation of trial-and-error, which calls for finer signals to go on. We need to know the error margins better.

      • Bob Greenyer

        Yes Andreas – Recipe and maybe stimulation

    • Ged

      Hmm, I don’t know… Looking at the data right now in history form, I don’t really see a decrease in the null side in ratio with the active side’s increase. We’ll see as I analyze.

      Though, the only way for more power to be dissipated on one side without the total power changing is if there were inefficiencies gobbling up some fraction of the total power and preventing it from reaching the resistors for dissipation, and that got fixed somehow. Otherwise we must always be in ratio with the power in; that is, both sides added together must equal total power.

  • Bob Greenyer

    Contributor RabbitDuck has put together the following graphs in Google Fusion Tables

    https://www.google.com/fusiontables/DataSource?docid=1uf-9AcVq3hLvi6rU-XepKEXA8lLCkambWGrK5sJ5#chartnew:id=9

    • Ged

      Huh, what an interesting graph program thingy. Haven’t figured out how to properly interpret it yet, with all these filters.

      • Bob Greenyer

        it is yes.

        • Mats002

          At least one finger is not in a package – great!

      • Sanjeev

        I couldn’t draw any conclusions either.

  • Sanjeev

    And this is dT vs Active side temperature (Last 12 hours).
    There is something strange and chaotic going on at high temperature.
    (Attached)

    • Mats002

      It would be interesting to see that same diagram from a dummy run. If the dT is much lower in a dummy run, then there is a need to explain the origin of the thermal noise when Li is present (the dummy run is without LAH).

      • Sanjeev

        If by dummy run you mean the one done with only Ni and H2, then as far as I recall there was no excess; the null was always higher than the active.
        We need another control run with something like Iron powder+H2 to see if this excess reappears.

        • Bob Greenyer

          Control Run will happen tomorrow.

          Alan noted last night that the cell was running hotter at same input levels.

          “Point of reference, T_active is running about 25 degrees above the 600 watt cycles 2 Feb, and about 35 above the de-gassing on 26 Jan.”

          and that is at the lower temps.

          • Stephen

            Are there any particular tests planned today as well or will today be a rest day? Just curious if there will be more to follow 😉

            • Bob Greenyer

              I think Alan will try to take out the ‘fuel’ load and prepare for back end calibration.

              • Stephen

                The backend calibration and fuel analysis will be interesting; looking forward to it. This was a great test, with a lot of new information to think about.

        • Mats002

          Yes I mean the one with only Ni and H2, how high T did it go to? Can you make the same diagram (dT/T) from that run?

          • Sanjeev

            Actually the temperatures during that run never went higher than a few hundred, so it won’t be comparable. But please see Bob’s comment just below about 25 and 35C excess compared to other runs.

    • Bob Greenyer

      Imagine IF a burst of heat came from one side – there would be a small pressure increase and this would force potentially hotter H2 (Highest heat capacity) into the other side and this would ring between the two.

    • Ged

      Very nice graph, Sanjeev. It shows a lot. A surprisingly large absolute increase of >60 C if one takes the active side’s calibration, or its original (under the null) behavior, as the baseline!

      I am still fighting tooth and nail to get time, but I am going to try doing total measured temperature (both sides added together) versus power, compared to calibration. No matter what is going on with the heater, the two sides must sum to the same temperature, proportional to the power in, unless there is energy production making heat.

      • Mats002

        Hi Ged, what if the T goes up and down as in exo- and endothermal chemical reactions giving an average temp corresponding to power in?

        Would you say that such a scenario is energy neutral or is it an energy ‘cost’ for driving T up and down over time?

        • Ged

          Aye, that would be cycling energy, or rather it would be in equilibrium and just oscillating over space, in which case it would be “energy neutral” for our purposes, as out = in. Really, it would just be sum total = power in. No extra losses though, since energy is already being put in, so entropy is already being increased; that is, as long as the material doing the oscillating doesn’t wear out, which would be seen as the oscillations damping over time and then ceasing (think of a pendulum given a starting push, where the push is the fresh reactants with no wear or tear).

          • Mats002

            Well, by that logic, if the pendulum’s T, going up and down over time, on average increases, then energy is increasing – and that is in fact what the HUG data show in this experiment: dT between active and null on average increases. Do you agree?

            • Ged

              The dT of active compared to null is definitely averaged much higher than the calibration. dT was negative, but now positive by 20+ C, so the full change is bigger than just the “positive above null”. What I am going to look at is the sum total of both sides together, or simply active+null. I don’t have all the power data, but that will be the key to determining if there was more apparent, measured power out. Basically, doing what you are saying and looking at the average of the entire system.

              • Mats002

                OK. What if your analysis shows an excess of energy; what is the error margin of this setup? Alan and JustaGuy say 10–20%, but that is (my understanding) a ballpark figure. How would you go about that?

                • Ged

                  We actually have an advantage here we didn’t have before. We have the pulsing experiment, which gives multiple averages I can use. So, rather than ballparking, I can use statistical analysis. Really, to be proper, I need independent N (that is, an entirely new run), but that is not always feasible for a while. So, the consequence will be that this run will be testing against itself, and thus vulnerable to internal systematic error. This is true for all experiments in all of science, which is why replication is essential. I could use previous GS data to give me a true higher N, but there is so much data to crunch and so little time.

                  But in brief, I would go about it with stats. The “margin of error” is already encoded in statistics as the variance and its square root, the standard deviation. So the statistics will tell us on their own what the margin of error is and whether there is a significant signal above the noise. No need for ballparks, or football parks, or dog parks here; if I can manage it. These large datasets really challenge my computer.

                • Mats002

                  In this case I don’t think a computer is the obstacle. The obstacle is gathering data from all previous GS runs and do the sum that you want.

                  Can you describe what statistical analysis you need? Let’s ask MFMP for the data you need and crunch the numbers. What do you suggest?

                • Ged

                  There are two tests: peak null versus peak active of experiment minus calibration, averaged from all technically successful runs (runs where something didn’t physically mess up, or where there is no known source of error), and a longitudinal test for any run that had multiple ramp ups/downs. The former is a simple Student’s t-test (or a non-parametric test if the error is not normally distributed, but the error should be simple electronic fluctuation in the readings from the thermocouples, which would be normal), and the latter requires an ANOVA. I think only GS5.2 can be longitudinally investigated, which I am setting up for now.
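                  As a sketch of the first test described above, comparing run peaks against calibration peaks with a Student’s t-test, here is a minimal stdlib-only Python version (the numbers are hypothetical placeholders, not real GS data):

```python
import math
from statistics import mean, stdev

def pooled_t(sample_a, sample_b):
    """Two-sample Student's t statistic with pooled variance."""
    n_a, n_b = len(sample_a), len(sample_b)
    pooled_var = ((n_a - 1) * stdev(sample_a) ** 2 +
                  (n_b - 1) * stdev(sample_b) ** 2) / (n_a + n_b - 2)
    std_err = math.sqrt(pooled_var * (1 / n_a + 1 / n_b))
    return (mean(sample_a) - mean(sample_b)) / std_err

# Hypothetical peak C-per-watt figures for 3 runs -- placeholders only
experimental = [16.7, 16.2, 16.9]
calibration = [13.3, 13.6, 13.1]

t = pooled_t(experimental, calibration)
# Two-tailed critical value for df = 4 at alpha = 0.05 is about 2.776
print(f"t = {t:.2f} (significant if |t| > 2.776)")
```

                  With SciPy available, scipy.stats.ttest_ind gives the same statistic plus a p-value.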

                • Mats002

                  I followed you up to the ‘longitudinal test’ – can you explain for a 10-year-old, please?

                • Ged

                  Longitudinal just means over time ;). If you are following the values of some parameter over time, compared to a control parameter over time. This means the data is two dimensional: dimension one is the experimental parameter versus control, and dimension two is the time points from 0 to whenever they stop.

                  The temperature pulses done in GS5.2 create very distinct breakpoints which allow me to treat each individual temperature hold as a discrete point of data in time. Since these march along in time, I can then do statistics on the -change- in the active compared to the -change- in the null side over time for each successive breakpoint.

                • Mats002

                  How is the second (latter) part different from the first part – peaks of null and active over time?

                • Mats002

                  dNull vs dActive over time is the latter? As in
                  (Null[1] – Null[0]) / (Active[1] – Active[0])

                  ?
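                  That per-step ratio is easy to compute; a tiny sketch with made-up readings (not GS data):

```python
# Hypothetical successive temperature readings in C -- illustrative only
null_temps = [800.0, 810.0, 818.0]
active_temps = [805.0, 820.0, 833.0]

# Per-step ratio (Null[i+1] - Null[i]) / (Active[i+1] - Active[i])
ratios = [(null_temps[i + 1] - null_temps[i]) /
          (active_temps[i + 1] - active_temps[i])
          for i in range(len(null_temps) - 1)]
print(ratios)  # a ratio below 1 means the active side warmed faster that step
```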

                • Ged

                  I may do that too ;). Though, if we do it that way, each N will be each time step. Hmm, that is a good idea though, Mats. See, there are so many ways to slice and dice the data.

                  That equation idea there will answer a different question statistically though, that question being if the active -changes more than- the null over time. I’m interested in the total heat out versus power in, but I will also do your method as that fixes the problem I was having looking at how the active slowly changes over time while held.

                  It’s a different question though, and so will be a different test, if I can manage to wrangle the data for it. No promises with such unwieldy datasets.

                • Mats002

                  Time to hit the sack for me. No hurry. I can crunch the numbers if you support me with the algorithm. Nighty for now!

                • Ged

                  Rest well!

                • Ged

                  If we take the average peak temperatures of the different GS runs, say GS3’s peak with GS5’s peak with GS5.2’s peak yesterday, we lose all time data. This is simply an N of 3 for the max active side compared against the calibration of that active side. Or better yet, the sum of null and active versus power, compared with the calibration’s sum of null and active versus power. This is flat data; there is no time at all, it’s a single number, like 2000 C/120 W ± 30 C/W versus 1600 C/120 W ± 20 C/W. The t-test will tell us whether the means of the experimental runs averaged together and the means of the calibrations, given their error, are significantly different. That is, testing if there is a significantly greater amount of heat per Watt in the experimental compared to calibration, with an N of 3 independent GS runs.

                  The latter is a single GS run over time, where each time point is the total average (maybe, haven’t completely decided how I want to handle the slow increase in the active side over time during the holds) of the hold temperature between the two breakpoints (ramp up, and ramp down). The N here is each time the same temperature is held, more or less, but really it’s a single experimental run (kinda like how global temperatures for the Earth are just a single run, with an N of 1 since we don’t have another Earth to create another N; but there’s still statistics and computer models and a whole bunch of time course science done on our single Earth N for global temperatures).

                  Ideally, with the longitudinal test, I want to compare the sum total active+null/powerin against each successive time point at the same temperature hold (e.g. 3 different 1000 C holds) and against the calibration, where it exists for that temperature hold (thankfully the bookend calibration took it up to 1000 C, it looks like), which is time 0 more or less.

                  I can still do this with active versus null, though, but it won’t be as robust a test as divided by power.

                • Mats002

                  1. Average ((peak T) / (W at peak T))
                  A) You list all GS runs to use
                  B) I will give one number as the result +/- n

                  2. Same average for the GS5.2 run

                  Those two are to be compared – but what’s the error margin here?

                • Ged

                  Error is the standard deviation that is calculated from the data. When you take the averages of points, you also take their standard deviation. The statistical tests then evaluate whether the mean, versus that variance, is different enough to not be caused by chance. If you are using Excel, you would use STDEV on the same data you use AVERAGE on.

                  Now, you can either take the single highest point that each run had once, which is a single temperature reading and could be an unrepresentative outlier, or you could take a window of points where temps were at the average max, using breakpoints to tell when that max is over (basically, a breakpoint is just where the percent difference between the points of some sliding window rises above a certain threshold. For the GS5.2 data, I have found a suitable threshold of 2% for data points separated by 23 seconds, for determining when the data is undergoing a breakpoint ramp). The trick with a window is that when you create the average of the peak power, since it is all the data points over a certain time window hold, that average will itself have a standard deviation. Then when you average the averages, one has to take a complicated sum of the square error / sum of the global mean error to get the right standard deviation for the actual averages of the independent runs.

                  … So yes, it is easier to take the single-point max, but it’ll be more accurate to take the average maximum to avoid outliers. But don’t worry about that, you can just grab the single maxes if you like; that makes for a quick and easy first test :D.
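                  The breakpoint idea described above can be sketched as follows (the 2% threshold matches the comment; the temperature series is synthetic, not GS5.2 data):

```python
def find_holds(temps, threshold=0.02):
    """Split a temperature series into 'hold' segments, starting a new
    segment wherever the fractional change between successive readings
    exceeds the threshold (i.e. the cell is ramping)."""
    holds, current = [], [temps[0]]
    for prev, cur in zip(temps, temps[1:]):
        if abs(cur - prev) / abs(prev) > threshold:  # breakpoint: ramping
            if len(current) > 1:
                holds.append(current)
            current = []
        current.append(cur)
    if len(current) > 1:
        holds.append(current)
    return holds

# Synthetic series: a ~500 C hold, a ramp, then a ~1000 C hold
series = [500, 502, 499, 501, 600, 750, 900, 1000, 1003, 998, 1001]
for hold in find_holds(series):
    print(f"hold of {len(hold)} points, mean {sum(hold) / len(hold):.1f} C")
```

                  Each hold’s mean and standard deviation can then feed the averaging described above.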

      • Sanjeev

        Looking forward to your graphs.

        • Mats002

          I am thinking of the noise/error margin in this setup. The weakest part seems to be the coil and TC physical changes over temperature and over temperature cycles (time). It would be nice to know the degradation behaviour over T and T(cycle), and what is the worst acceptable physical change? If both sides degrade physically and electrically evenly, it is acceptable for showing that one side has XH, but not acceptable for calculating energy OUT. How much uneven degradation can be acceptable to show XH? Pre- and post-run reference runs can be used to find the window of degradation, but those runs must go all the way to max T.

          • Ged

            Calibrating on each T level is so important. Really can’t stress that enough. For best accuracy, we need a standard protocol for temperature holds, and both calibration and active runs must follow it. Time can differ for each hold, or even for the speed or ramps, but not the target T holds themselves. That would alleviate a lot of problems.

  • Sanjeev

    dT during last 12 hours. (Attached)

    • Alain Samoun

      OK guys, this is it! 20–25ºC for about 30 minutes. No question, it worked! CONGRATULATIONS!
      How many watts produced? In my opinion COP in these conditions has not much meaning, as the reactor is not insulated.

      • Sanjeev

        JustaUser commented on chat, and I agree. This is well within the error margins. Still very encouraging.

        Fri Feb 5, 10:56:15am JustaUser2: Let us be honest about this though … This is only about a 25 C difference out of 1000 C, or only a 2% change; we know that the calorimetry of this particular cell may only be accurate to 20 or 10%, so the temp change is still about an order of magnitude below what the calorimeter can resolve, without a differential analysis

        • Mats002

          Agreed; MFMP shows very good engineering and follows protocols according to Piantelli, Parkhomov and other sources. But so far no excess heat or radiation signals of significance.

        • Bob Greenyer

          I have to agree that the ‘signal’ so far is not really meaningful.

          Having said that – we have had a number of runs now where the ‘active’ has ultimately run hotter than the null – even when it started off trailing – and in the range of temperatures where the effect is claimed to be observed by the likes of Parkhomov. In this experiment, the ‘active’ sat below the null (one might say equivalent to it) until a certain temperature range was entered. As said by Mark, other things may account for this.

          It must also be noted that the ‘active’ if hotter, will raise the temp of the null through H2 driven heat transfer – so the back end calibration is very important.

          This was a Parkhomov temp/time profile but not his Ni – the Russians claimed that the type and size of the nickel is important.

          I am of the mind that the structure of the cells need to change a little, and I will state my case in due course.

          It would have helped greatly if we had had a thermal imaging camera on the cell, as the ‘active’ looked noticeably brighter at the high temperature ranges, and it may be that the whole average temperature of the ‘active’ is measurably higher than the null.

          The big learning from this experiment so far is the Nickel processing / H2 ad/absorption and pressure effects in various zones in the temperature profile.

          • Sanjeev

            You read my mind!
            I was going to suggest a design change where the null is isolated as much as possible (thermally). Either we can use a long tube with active/null parts at each end and a wall with a tiny hole in the middle or we can use two totally separate reactors connected by suitable plumbing in order to equalize the pressures. The point is to minimize the crosstalk in order to increase the signal.

            Anyway, I think this “treated and degassed” Ni could be a good candidate for flow calorimetry and it should be done now instead of spending time on “Lugano type” reactors.

    • Bob Greenyer

      This graph will be more meaningful with the core temperature on.

      • Sanjeev

        Don’t have the core temperature data, but I can add the external active temperature to it.

  • Bob Greenyer

    Taking core temperature to 1050ºC now – we are following the time / temp /pressure profile of a Parkhomov published experiment (one in Calorimeter).

    Alan Goldwater has just made this comment about the current performance of the GS5.2

    “T_active is running about 25 degrees above the 600 watt cycles 2 Feb, and about 35 above the de-gassing on 26 Jan.”

    We even have cross-over already.

    • Ged

      Interesting, and divergence is growing, I notice, as of right now.

    • Ged

      Huh, looks like when pressure went back up, the active became cooler than the null again.

  • Bob Greenyer

    Key GS5.2 Data so far…

    We still have to determine what is causing the ‘crossover’ at high temperatures.

  • Bob Greenyer

    OK, so the cell is on soak overnight (California) again whilst Alan rests.

    Alan uploaded the Power data over the first part of the run and Ecco made the following chart from it.

    http://i.imgur.com/iilHBk4.png

    • Andreas Moraitis

      Excellent. You should at last appoint Ecco as a regular member of MFMP.
      Could that cycling be done faster, without waiting for complete settling? Testing this for a limited time-span would not disturb the experiment, I hope.

      • Bob Greenyer

        Ecco likes his independence – we are very thankful for his deep and continuing contributions. Of course we would like to see him add his name to the stable, but maybe he feels he is more valuable as an outsider.

        When Alan is up later today – why not suggest it – this is everyone's test!

    • Ged

      The stable power is a very good sign. Maybe Alan should do a high temp hold, to see if it creeps up without cycling. Maybe a pulse structure of short-short-short-long (3-5x a short)-repeat pattern, to incorporate Andreas’ suggestion.

      • Bob Greenyer

        Alan is up – so get your vote in there!

  • Bob Greenyer

    We are planning to do a few bumps in power to see what occurs under the new Higher H2 pressure regime.

  • Bob Greenyer

    The cell is taking a breather – left to tick over at a mid-range temperature whilst Alan takes the day off. The cell was vacuumed down and 60 psi (4.14 bar) of fresh H2 was put in to observe pressure-related effects and to see if H2 would be absorbed in some way.

    Higher pressure appears to make the ‘active’ cooler relative to the null – which is an interesting finding and also, as you can see from the attached graph, there is some clear uptake of H2 in some part of the contents of the cell.

    • Ged

      This is very good. 1) It rules out pressure effects in the anomalous heating of the active side; 2) we still have our slow upward pressure-trend mystery, but with a clean cell we can see if it “resets”.

      Anyway, all my thanks to Alan for his masterful engineering, perseverance, and determination!

    • Stephen

      Just saw a small rise in temperature on both active and null of about 10 degrees from 14:00. Was some activity going on then? Or maybe some exothermic process started at this pressure?

  • Andreas Moraitis

    I made some conservative calculations regarding potential excess energy.

    Number of atoms in 1 g Ni = 1.026 * 10^22
    Number of atoms in 0.15 g LiAlH4 = 1.428 * 10^22
    Number of atoms in 1 mg additional H2 = 5.974 * 10^20

    That makes in total about 2.5 * 10^22 atoms in the fuel.

    Choosing 4 eV per atom as the “chemical limit” we get 2.5 * 10^22 * 4 eV = 16 kJ = 4.45 Wh.

    That is, provided that the reactor walls and steel parts do not react with each other, any excess energy beyond this value could not be ascribed to known chemical reactions. To prove that the readings reflect the released energy correctly will be the difficult part.
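The estimate above can be re-derived with a short script. This is just a sketch checking the arithmetic; the molar masses and constants are standard reference values, not figures from the experiment itself:

```python
# Sketch: re-deriving the "chemical limit" estimate above.
# Molar masses (g/mol) and constants are standard reference values.
N_A = 6.022e23        # Avogadro's number, atoms/mol
EV_TO_J = 1.602e-19   # joules per electron-volt

ni_atoms = 1.0 / 58.69 * N_A              # 1 g Ni -> ~1.026e22 atoms
lialh4_atoms = 0.15 / 37.95 * 6 * N_A     # 0.15 g LiAlH4, 6 atoms per formula unit -> ~1.43e22
h2_atoms = 0.001 / 2.016 * 2 * N_A        # 1 mg H2, 2 atoms per molecule -> ~5.97e20

total = ni_atoms + lialh4_atoms + h2_atoms    # ~2.5e22 atoms
energy_j = total * 4 * EV_TO_J                # 4 eV per atom "chemical limit"
print(f"total atoms: {total:.3e}")
print(f"chemical limit: {energy_j / 1000:.1f} kJ = {energy_j / 3600:.2f} Wh")
```

Running this reproduces the quoted figures to within rounding (~16 kJ, ~4.5 Wh).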

    • Ged

      For (humorously useless) scale, it takes about 52 kJ to heat up the average cup of coffee (or tea).

      • Andreas Moraitis

        4.45 Wh sounds like little, but it would still be enough for some fun: 16 kW if released within one second.

        • Ged

          Give a bit of a pop! That’s like… 21 HP, enough to add wheels and take it for a spin like an RC car for a second.

  • Bob Greenyer

    We stepped up to 1150W on the upper part of the cycle and the ‘active’ is now riding CLEAR above the Null on a 1 minute average.

    • Mats002

      Very interesting and whatever the outcome of this experiment, so far very well performed with new learnings, thanks MFMP!

      Can you explain why null was hotter than active in the first place? Was it due to offset in TC signal calibrations or coil closer to null TC or hotspot on wire near null TC or…? How much temp difference between active and null is needed to be well over signal-to-offset ratio to have something significant?

      • Bob Greenyer

        If you cannot find the answer in the live doc, Mats002, I would direct that question to Alan on QH – he will give a detailed answer through an open channel in time.

    • Ged

      I note that the overall pressure trend is still upwards. Very interesting. What could be responsible?

      • Bob Greenyer

        A number of things have been suggested – H2 evolution, H2 reduction of Al2O3 making water vapour, lithium vapour…

        • Ged

          The first and last one could explain the periodic ups and downs, but the slow rising trend of the max and min across cycles is what catches my eye. The making of water from Al2O3 could explain that if shown. But, if it was from Al2O3 reduction, we should see it in every single test using the material, so we can easily test that idea by looking back.

          • Bob Greenyer

            Also – Ecco suggests on QH that the SS may absorb and release H2

            • Ged

              That could be contributing to the periodic, and if it was pressure going down as a trend it could contribute to that, but I seriously doubt there’s a way SS absorbing hydrogen could be driving a long term, cycle agnostic, upward trend at this time scale. Hmm.

              • US_Citizen71

                What if the heating makes lithium vapour, and during the cooling cycle some of the vapour condenses on the top part of the tube and in other places where the aluminium isn’t? Then there is less lithium available to reverse the reaction and take up the hydrogen, so the overall pressure rises.

                • Ged

                  It wouldn’t just vaporize again? The device is tight enough to hold hydrogen in the main heated cell, lithium doesn’t stand a chance of sneaking anywhere, and we know heating on this design is rather even from the imaging. It’s possible, but I would think unlikely. But, we can know for sure by seeing if lithium is coating any surfaces outside the active cell and zones of heating.

                • US_Citizen71

                  I didn’t mean to imply that the lithium wouldn’t vaporize again. It just would be in contact with the aluminum to form an alloy to absorb the hydrogen.

                • Ged

                  True, we shall find out!

                • Ecco

                  There’s a small vent hole on one end of the fuel capsule, which means that if Lithium evaporates it could escape from there and react with solid oxides in the cell (for example the mullite ceramic tube or the alumina felt used as a central spacer). This might imply, as you’ve written, that over time there will be less of it available for the reversible hydride reaction.

            • Ecco

              Actually I suggested that the immediate uptake/release of hydrogen at those temperatures after every cycle might have been also due to metals in the cell other than Nickel or Lithium (i.e. the SS capsules and rods), while the long term rise due to possible decomposition of the silica (SiO2) fraction of mullite ceramics under exposure to hydrogen at high temperature. Jones Beene suggested this is possible for Al2O3 too, but apparently it’s true mostly for significantly higher temperatures under atomic hydrogen exposure (so he’s technically correct).

              See:

              Mullite and alumina decomposition in a hydrogen atmosphere
              Excerpt 1
              Excerpt 2

              Wikipedia: Silicon monoxide formation

              Kinetics of silica reduction in hydrogen
              Excerpt 1

              Solubility of Hydrogen in steel 1
              Solubility of Hydrogen in steel 2
              Solubility of Hydrogen in steel 3 (and Nickel)

              The possible Reduction of Alumina to Aluminum Using Hydrogen

  • Andreas Moraitis

    I like Alan G’s idea to play with the frequencies of the power supply. An effective method to approach the optimum might be choosing multiples of primes, for example 3/5/7/11 kHz etc. In case that there is some effect, one could use common multiples of the best performing primes, and so on.

    • Andreas Moraitis

      3/7/11/13…kHz would contain the “5”, so maybe 5 kHz could be left out.
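The two-stage sweep proposed here could be scripted as a candidate-frequency generator. A minimal sketch, assuming a 30 kHz supply ceiling and treating the choice of "best performing" primes (3 and 7 below) as purely illustrative:

```python
# Sketch of the prime-frequency sweep idea: stage 1 tests multiples of small
# primes; stage 2 tests common multiples of the best-performing ones.
# All frequencies in kHz; the 30 kHz limit is an assumed supply ceiling.
import math

PRIMES = [3, 5, 7, 11, 13]  # kHz base frequencies
LIMIT = 30                   # kHz, assumed power-supply limit

# Stage 1: every multiple of each prime up to the limit, deduplicated
stage1 = sorted({p * k for p in PRIMES for k in range(1, LIMIT // p + 1)})

# Stage 2: common multiples of the two best-performing primes
def common_multiples(a, b, limit):
    lcm = a * b // math.gcd(a, b)
    return [lcm * k for k in range(1, limit // lcm + 1)]

print(stage1)
print(common_multiples(3, 7, LIMIT))
```

With a 30 kHz cap the only common multiple of 3 and 7 kHz is 21 kHz, which shows how quickly the stage-2 candidate list narrows.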

    • nietsnie

      I also agree that the crucial missing piece will be found in the frequency of electromagnetism that the fuel is exposed to. Except – I don’t think that frequency will be found in the kilohertz range. I think it will be much higher.

      This could be identified by combining the output of two frequency sources to produce harmonics above them. The mathematics of harmonic generation could be used to choose frequencies to test, by adjusting the two contributing tonic generators so that the harmonics they produce include the desired test frequency.

      The experiment parameters would be to raise the temperature to a promising range and hold the *power level* there. Then use the two frequency generators together like filters to slowly walk through a large test set of combinations, holding each one for, say, 3 minutes while measuring the temperature, then advancing to the next. Each tonic frequency combination will produce a large set of harmonics with known frequencies. At the end of the run, the temperature change and frequency list for each step can be cross-referenced to narrow the search down to smaller frequency ranges for more thorough subsequent testing.
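The two-generator scheme amounts to enumerating intermodulation products. A sketch of that bookkeeping, where the tonic values (9 and 14 MHz) and the mixing-order limit are illustrative assumptions, not anything from the actual rig:

```python
# Sketch: nonlinear mixing of two tonics f1 and f2 produces intermodulation
# products |m*f1 +/- n*f2|. Choosing the tonics so a desired test frequency
# lands among the products is the search step described above.
def mixing_products(f1, f2, order=5):
    """All distinct |m*f1 +/- n*f2| for m + n <= order (m, n >= 0, not both 0)."""
    out = set()
    for m in range(order + 1):
        for n in range(order + 1 - m):
            if m == 0 and n == 0:
                continue
            out.add(m * f1 + n * f2)
            out.add(abs(m * f1 - n * f2))
    out.discard(0)
    return sorted(out)

# Example: which frequencies are reachable from tonics at 9 and 14 (MHz)?
print(mixing_products(9, 14))
```

For instance, 46 MHz is reachable here as 2·9 + 2·14, so a sweep could cover test frequencies well above either tonic.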

      • Andreas Moraitis

        The optimum frequencies might be higher, but with a rectangular waveform you would get enough harmonics anyway. As far as I know, their power supply can provide up to 30 kHz. Much higher frequencies would require special wiring, I guess.

        • nietsnie

          Yes. And if you happened to hit the right frequency and a lot of energy was produced – maybe that’s plenty good enough at this stage. That would certainly feel satisfying to me, at least. Plus, it has the advantage of being able to use the already available equipment. But, in the end, you wouldn’t know how to repeat the result in general – it would be reliant upon the individual power supply – just as Parkhomov’s seems to be. My idea would require sine rather than square wave generation. Its advantage would be that the operational frequency could be zeroed in on rather than just hoping a random harmonic reached it. You could ultimately narrow it down to a particular frequency that could be relied upon to get a positive result. At least – that’s how it looks to me up here in the cheap seats.

  • Bob Greenyer

    An overview of the cycles.

    • Ged

      Looks like the rising pressure (overall trend, as obviously it drops during heating cycles) may be related to the increasing active side while the null stays static. It does suggest something is changing in the active side with the hydrogen-loading equilibrium.

  • artefact

    Crossover 🙂 @12:29:30

    • Bob Greenyer

      Yep even on a 30s average in the high temp part of the cycle.

      The BEAMS HAVE CROSSED on the *GlowStick* 5.2

      What this means is to be determined but right now the trend is encouraging.

      • Mats002

        Well time to ask (again) for a voltage measure point in the middle if the coil that spans over both active and null. What if one half of the coil degrades and that this is the root cause of the temp shift? How to rule out this possibility?

        • Mats002

          Should be “in the middle of the coil”, thanks to my smart phone.

        • Bob Greenyer

          It could be degradation – or one side being hotter, causing a marginal relative resistance shift and therefore a power-dissipation change – despite the overall power being the same and obviously equivalent current through all the wire.

          Looking at the voltage over time may be interesting

          • Ged

            I am sure we would have seen that much earlier at the previous 1000 ºC holds, where nothing changed. But power is power, so we just have to look at that and the wire resistance per side over time.

        • Ged

          Looking at the long time trace… the active is rising but the null is not decreasing. If the wire sidedness were changing but overall power stayed the same, then the null would -have to change proportionally- with the active side. There is no way around this, as it is a ratio of a known total.

          So, I don’t see evidence of that right now (power in would have to be increasing as -overall- combined temp has gone up). But we need to know the full power budget, then we can observe for sidedness ratio changes.
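This proportionality argument can be illustrated numerically. A minimal sketch, assuming the two coil halves are in series (same current through both, so power splits in proportion to resistance) and using purely illustrative resistance values:

```python
# Sketch of the argument above: with a fixed total input power split between
# the two coil halves, any resistance shift that raises one side's share of
# the power must lower the other side's share by the same amount.
P_TOTAL = 1000.0  # W, assumed constant total input

def side_powers(r_active, r_null, p_total=P_TOTAL):
    """Series coil halves: power splits in proportion to resistance."""
    total_r = r_active + r_null
    return p_total * r_active / total_r, p_total * r_null / total_r

before = side_powers(10.0, 10.0)  # balanced: 500 W each
after = side_powers(10.5, 10.0)   # active resistance up 5%: its share rises
print(before, after)
```

With the 5% resistance shift, the active side gains about 12 W and the null side loses the same 12 W, so an active-side rise without a matching null-side drop cannot be explained this way.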

      • Ged

        This plus the previous plus the Celani wire… We definitely have a serious phenomenon going on. Are there any shared rule-outs we need to do?

  • Bob Greenyer

    The gap in the high-temperature part of the cycle between the ‘active’ and null is as low as approx. 5ºC now at high power – it was 24ºC last night.