Standard Model Possibly Under Question after CERN Experiment Finds Anomalous Effect

Andrea Rossi noted on the Journal of Nuclear Physics yesterday that there was news from CERN that could put the Standard Model of physics into question. He wrote:

Yesterday CERN in Geneva published in “Physical Review Letters” an important article regarding an anomalous effect discovered at the Large Hadron Collider in the LHCb experiment, under way since 2011 under the direction of Prof. Luigi Campana (Director of the Frascati Laboratories of the INFN, Italy). The announcement relates to the fact that B mesons have an anomalous tendency to decay into tau leptons instead of into muons, into which they are supposed to decay according to the Standard Model: if the Higgs turns out to be different, after it decays, from what we expect, it is a sign the Standard Model has failed us; this anomalous effect could therefore open the gate to new Physics and maybe to new information indirectly introducing possible better theoretical explanations of LENR, even if they cannot be directly connected with the high energy effects inside the LHC.

I have not been able to find the original article from Physical Review Letters (there’s a pre-print on arxiv.org here), but there are a couple of recent news articles that cover this development: one from the Italian website Tiscali.it (http://notizie.tiscali.it/articoli/scienza/15/09/annuncio-cern-rivoluzione-fisica.html), and the other from the Indian site The Wire (http://thewire.in/2015/09/07/weak-but-recurring-anomalous-signal-rivets-particle-physicists-10162/).

The second article mentions that this is the third time that results deviating from the standard model have been found in the particle collider, and combining all these results gives a significance of 3.9 sigma. You have to reach 5 sigma for something to be considered a discovery. So there’s nothing conclusive here yet, but a new question has been raised that at some point might lead to a need for some new physics, which could be important for understanding what is going on with LENR effects.
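
The official 3.9 sigma figure comes from a full likelihood combination by the experiments, but the flavor of combining independent results can be sketched with Stouffer's method for z-scores. A minimal illustration (the individual significances below are illustrative inputs, not the exact numbers behind the quoted combination):

```python
import math

def stouffer(z_scores):
    """Combine independent z-scores (Stouffer's method): Z = sum(z_i) / sqrt(n)."""
    return sum(z_scores) / math.sqrt(len(z_scores))

# Hypothetical per-experiment significances, roughly sigma-level hints
# from three experiments (illustrative only):
z_combined = stouffer([2.7, 2.0, 2.1])
print(f"combined significance ~ {z_combined:.1f} sigma")
```

Three individually unconvincing ~2 sigma hints can combine to nearly 4 sigma, which is why the pattern excites physicists even though no single result crosses the 5 sigma discovery bar.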

UPDATE: Thanks to Gerrit for the reference to this story in Nature: http://www.nature.com/nature/journal/v525/n7568/full/525160b.html (full text not available for free)

  • GreenWin

    Thanks Mark. Another Kuhnian scientist and author of the 2013 book, Bankrupting Physics: How Today’s Top Scientists are Gambling Away Their Credibility. Co-authored with Sheilla Jones, Unzicker argues against the standard models of physics, contending that they are no longer credible because of their complexity.

    “Can an entire intellectual elite like string theorists have permanently
    devoted themselves to foolishness? Can string theory really be nonsense
    if the best and brightest are convinced of it? Yes, it can. Big brains
    can be brainwashed, as well as little ones. Take, for instance the
    medieval theologians. Weren’t they the elites back then? Didn’t the
    brightest guys of that era stall science for centuries?”

    As we have noted often, BIG science projects like CERN, ITER, and NIF are taxpayer-funded make-work programs. Unzicker is, like cold fusion scientists, unafraid to call a spade a spade. http://amzn.to/1Kd9trs

  • GreenWin

    “Lepton universality is truly enshrined in the Standard Model. If this universality is broken, we can say that we’ve found evidence for non-standard physics.”
    http://www.rt.com/news/313848-leptons-standard-model-overturn/

    One might assume Heisenberg uncertainty predicts “non-standard” physics is implied within the Standard Model.

    • Axil Axil

      In the standard model, lepton decay is associated with the mass of the lepton. Trouble with this decay process indicates a malfunction in the mass mechanism: the Higgs field.

      • GreenWin

        Indeed. The idea of universal consciousness reduced to something calling itself “The God Particle” is curiously belittling IMO. 🙂

        • LilyLover

          So, so, true.

      • LCD

        There is still a mass problem, and that’s the reason supersymmetry people still exist.

        But these issues are not at low temperature and energy levels. Physics at these energy levels has other problems.

        Are they connected? Who knows.

  • Axil Axil

    What does Rossi mean by this statement regarding the Higgs boson?

    “if the Higgs turns out to be different, after it decays, from what we expect, it is a sign the Standard Model has failed us; this anomalous effect could therefore open the gate to new Physics and maybe to new information indirectly introducing possible better theoretical explanations of LENR,”

    While the W particles are force carriers of the weak force, they themselves carry charge under the electromagnetic force. While it is not so strange that force carriers are themselves charged, the fact that it is electromagnetic charge suggests that QED and the weak force are connected. Glashow’s theory of the weak force took this into account by allowing for a mixing between the weak force and the electromagnetic force. The amount of mixing is labeled by a measurable parameter.

    Unifying forces

    The full theory of electroweak forces includes four force carriers: W+, W-, and two uncharged particles that mix at low energies—that is, they evolve into each other as they travel. This mixing is analogous to the mixing of neutrinos with one another. One mixture is the massless photon, while the other combination is the Z. As energy is added, a particle’s mass matters less and less to its motion, and the range of the force it carries effectively increases. So at high energies, when all particles move at nearly the speed of light, their masses become negligible.
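
    The photon/Z mixing can be made concrete by diagonalizing the neutral gauge boson mass-squared matrix: one eigenstate comes out exactly massless (the photon), the other massive (the Z). A minimal sketch, assuming approximate textbook values for the couplings g, g′ and the Higgs vacuum value v (none of these numbers are from the article):

```python
import numpy as np

# Approximate electroweak parameters (textbook values, assumptions)
g, gp, v = 0.65, 0.35, 246.0  # SU(2) coupling, U(1) coupling, Higgs vev in GeV

# Neutral gauge boson mass-squared matrix in the (B, W3) basis
M2 = (v**2 / 4.0) * np.array([[gp**2, -g * gp],
                              [-g * gp, g**2]])

eigenvalues = np.linalg.eigvalsh(M2)  # ascending order
m_photon = np.sqrt(max(eigenvalues[0], 0.0))  # massless mixture: the photon
m_Z = np.sqrt(eigenvalues[1])                 # massive mixture: the Z

print(f"photon mass ~ {m_photon:.2f} GeV, Z mass ~ {m_Z:.1f} GeV")
```

    The matrix has zero determinant by construction, so one mixture is exactly massless; with these rough inputs the massive mixture lands near the measured Z mass of about 91 GeV.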

    At high energy, the W particles behave like photons and QED and the weak interactions unify into a single theory that we call the electroweak theory. A theory with four massless force carriers has a symmetry that is broken in a theory where three of them have masses. In fact, the Ws and Z have different masses. Glashow put these masses determined by experiment into the theory by hand, but did not explain their origin theoretically.

    This single mixing parameter is critical in LENR. It predicts many different observable phenomena in the weak interactions. First, it gives the ratio of the W and Z masses (it is the cosine of θW). It also gives the ratio of the coupling strengths of the electromagnetic and weak forces (the sine of θW). In addition, many other measurable quantities, such as how often electrons or muons or quarks are spinning one way versus another when they come from a decaying Z particle, depend on the single mixing parameter. Thus, the way to test the electroweak theory is to measure all of these things and see if you get the same number for this one parameter.
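
    The consistency test described above is simple arithmetic once the masses are measured. A sketch using approximate PDG-style W and Z masses (assumed values, not quoted in the article):

```python
import math

# Approximate measured masses in GeV (PDG-style values; assumptions)
m_W, m_Z = 80.4, 91.2

# The mixing angle fixes the mass ratio: cos(theta_W) = m_W / m_Z
cos_theta = m_W / m_Z
sin2_theta = 1.0 - cos_theta**2   # sin^2(theta_W), the usual quoted form

print(f"cos(theta_W) = {cos_theta:.3f}, sin^2(theta_W) = {sin2_theta:.3f}")
# The resulting sin^2(theta_W) ~ 0.22 must match the value extracted
# independently from coupling-strength ratios and Z decay asymmetries.
```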

    A sickness and a cure

    While the electroweak theory could successfully account for what was observed experimentally at low energies, one could imagine an experiment that could not be explained. If one takes this theory and tries to compute what happens when Standard Model particles scatter at very high energies (above 1 TeV) using Feynman diagrams, one gets nonsense. Nonsense looks like, for example, probabilities greater than 100%, measurable quantities predicted to be infinity, or simply approximations where the next correction to a calculation is always bigger than the last. If a theory produces nonsense when trying to predict a physical result, it is the wrong theory.

    A “fix” to a theory can be as simple as a single new fix-em-up field (and therefore, a new particle). We need a particle to help Glashow’s theory, so we’ll call it H. If a particle like H exists, and it interacts with the known particles, then it must be included in the Feynman diagrams we use to calculate things like scattering and decay cross sections. Thus, though we may never have seen such a particle, its virtual effects change the results of the calculations. Introducing H in the right way changes the results of the scattering calculation and gives sensible results.

    In the mid-1960s, a number of physicists, including Scottish physicist Peter Higgs, wrote down theories in which a force carrier could get a mass due to the existence of a new field. This field explains how a particle gets mass and therefore the range of its interactions. In 1967, Steven Weinberg (and independently, Abdus Salam) incorporated this effect into Glashow’s electroweak theory, producing a consistent, unified electroweak theory. It included a new particle, dubbed the Higgs boson, which, when included in the scattering calculations, completed a new theory—the Standard Model—which made sensible predictions even for very high-energy scattering. It predicted how a W particle, as energy is added, comes to behave like a photon at high energies.

    A mechanism for mass

    The way the Higgs field gives masses to the W and Z particles, and to all other fundamental particles of the Standard Model (the Higgs mechanism), is subtle. The Higgs field—which like all fields lives everywhere in space—is in a different phase than other fields in the Standard Model. Because the Higgs field interacts with nearly all other particles, and because the Higgs field and the vacuum affect each other, the state of the vacuum affects the Higgs field, the coupling constant, and the range over which the weak force can act. The space (vacuum) that particles travel through affects them in a dramatic way: it gives them mass and restricts the range of interaction. The bigger the coupling between a particle and the Higgs, the bigger the effect, and thus the bigger the particle’s mass.
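
    The last sentence can be made quantitative: in the Standard Model a fermion’s mass is its Higgs (Yukawa) coupling times the Higgs field’s vacuum value, m = y·v/√2. A minimal sketch with approximate textbook numbers (the couplings and vev below are assumptions, not from the article):

```python
import math

V = 246.0  # Higgs vacuum expectation value in GeV (approximate, assumed)

# Approximate Yukawa couplings (illustrative textbook values)
yukawa = {"electron": 2.9e-6, "muon": 6.1e-4, "top": 0.99}

# m = y * v / sqrt(2): bigger Higgs coupling -> bigger mass
for name, y in yukawa.items():
    mass_gev = y * V / math.sqrt(2)
    print(f"{name}: ~{mass_gev:.3g} GeV")
```

    The six-orders-of-magnitude spread in couplings reproduces the spread from the ~0.5 MeV electron to the ~170 GeV top quark.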

    If the Higgs field does not act as the Standard Model predicts, the way the weak force and electromagnetism couple is not well defined. Then the state of the vacuum, the range of the weak force, and how electromagnetism affects the weak force all come into question.

    If the vacuum can be manipulated such that a volume of space can be partitioned into a zone of high energy and an adjacent zone of low energy, the zone of negative vacuum energy would allow the weak force to be more readily modified by EMF, increasing its range and changing its mode of interaction. Such behavior has been seen when LENR increases the rate of nuclear decay of radioactive isotopes in LENR experiments.

    This uncertainty in the coupling constant and the associated Higgs mechanism now seen in the standard model gives LENR an opening and a place at the table, in the full sunshine of acceptance by the standard model.

    • LCD

      Good synopsis, but you end it with, if the vacuum can be manipulated…

      That’s the rub there, how?

  • Andrea Calaon

    There are four generations of quarks, while the lepton generations are only three … See you when this becomes common knowledge.

  • Axil Axil

    In particle physics, the electroweak interaction is the unified description of two of the four known fundamental interactions of nature: electromagnetism and the weak interaction. Although these two forces appear very different at everyday low energies, the theory models them as two different aspects of the same force. Above the unification energy, on the order of 100 GeV, they would merge into a single electroweak force.

    The weak force is thought by many to be the force responsible for LENR. For example, the L&W theory depends on the weak force to explain how protons become neutrons through a decay process.

    If the weak force does not behave as it is predicted to behave, that means there is an unknown factor—the LENR force—driving the weak force into unexpected behavior.

    The connection between electromagnetism and the weak force may not be understood as current theory predicts. Since the B-meson decays into high energy tau particles, something is disturbing this decay process.

    Could the electromagnetic force combine with the weak force at lower energies, lower than 100 GeV? Could the two merge into the electroweak force at lower energies than expected?

    LENR could then be driven at a much lower power level than expected for the electroweak regime to be reached through engineering methods. The cross section for electroweak transmutation could be achieved through the application of very low electromagnetic power levels. Many LENR experiments point to this EMF-based LENR mechanism. Rossi may be seeing indications of this occurring in his new E-Cat X reactor.

  • Sanjeev

    A single unconfirmed anomaly should not invalidate the whole standard model; it can merely show that the model is incomplete, which it is. Hasn’t everyone known that it is incomplete for the last 100 years?

    • Gerrit

      It is the third time this anomaly has been noted.

      “But physicists are excited because the same anomaly has also been seen in results from two previous experiments: the ‘BaBar’ experiment at the SLAC National Accelerator Laboratory in Menlo Park, California, which reported it in 2012, and the ‘Belle’ experiment at Japan’s High Energy Accelerator Research Organization (KEK) in Tsukuba, which reported its latest results at a conference in May.”

      • Sanjeev

        Thanks for the correction Gerrit. I guess the point I wanted to make still stands.

    • Valeriy Tarasov

      Incompleteness of the standard model is a nice definition 🙂 allowing any uncomfortable questions to be swept under the carpet. One question to get a feel for this incompleteness 🙂 – why have particle pairs of no kind (consisting of both negatively and positively charged particles) ever been observed in the results of collisions between two electron beams? The energy of the electrons in the Large Electron–Positron Collider (which was dismantled to build the Large Hadron Collider in its place) was high enough, up to 209 GeV, to produce some particle pairs without violating any conservation law.

    • LilyLover

      This is a foot in the door. Until now they were afraid of speaking of it, lest their toes get crushed. Now, once the finished product seems in sight – they HAVE TO get a foot in the door, ELSE two years down the road, after losing ALL credibility, their “research” funding would vanish. Hence, for the sake of SURVIVAL, they are willing to get some toes crushed. Pretty much like the MIC needs wars to convert the “jingo-patriotism” of the “underinformed” into the salaries of the MIC and sustain the banking/QE system to “preserve-our-way-of-life”.
      Ivory Towers, The Vatican, Big-Con-Science, Politicians, MIC – all have one thing in common … people needing salaries for fictitious work. This is our version of minimum income.
      They are slaves to the easy life. Whether it comes from consensus Science or from real science matters not to them. In con-Science, peers will shield them in their mediocrity; in real Science, they may actually do something, or keep up appearances thereof.

      • orsobubu

        Is MIC the military-industrial complex? I like and agree with your analysis about the inefficiencies of the wage system, but I fear the situation is worse than in your “generous” description. There is a deadly class struggle going on between capitalists and proletarians and among the various fractions of competing world capitalism; yes, there is fictitious work, as there is fictitious financial capital, but this is not the root of the problem. The root is that the bourgeoisie – through the MIC, politicians, and all the various imperialistic structures like the WTO, the ECB, etc. – is fighting for the redistribution of the surplus value extorted from exploited workers. The most important thing isn’t that people need salaries for fictitious work; instead, capitalists need to exploit people working real jobs for real salaries to extract their quota of profits. And because of unrelenting competition, and the unrelenting fall of the profit rate, this phenomenon generates all the wars, all the crises, all the poverty, all the parasitism, all the waste from useless duplications of production, all the curbing of scientific development. And they sell us the ideology of free markets and progress as an excuse, and we believe it. Capitalism is not able to put billions of people to work, due to insufficient capital to invest, but spends thousands of billions on weapons. In the meantime, people are happy having to choose among virtually identical iPhone and Android devices, and these are the best outputs of all the global human work effort driving the world markets. Is this scientific behavior?

        • LilyLover

          Well, thank you for bringing up my favorite keyword – parasitism.
          Cowards and the stupid pay the price, their blood sucked by vulturesque vampires, for allowing the willfully ignorant to perpetrate cruelty under the guise of “bravery” and “managing society”. Shear the sheep with a screen well lit. Mass hypnosis through disproportionate magnification of real and irrelevant issues.
          Yes, it’s a bloody mess. A bloody war. It’s a war between the coward-producers and the smart-enjoyers. The “do-as-you-are-told” trained dogs are used to bite the ones that try to rise up. The dogs may kill a few enemy dogs, but the real profits remain in stealing from your own. Instead of working 5 hrs/week, the parasites make up fake jobs and fake laws to make sure people are too busy to question “authority” and therefore have no desire for anything other than servile obedience while keeping their heads down.

  • Gerrit

    A crack in the standard model?

    Nature 525, 160 (10 September 2015)
    doi:10.1038/525160b

    (behind paywall, because Nature open science)

  • gdaigle

    @pg – “In particular, the presence of additional charged Higgs bosons, which are often required in these models, can have a significant effect on the rate of the semitauonic decay …”.

    One alternative theory to the standard model predicts the existence of 12 Higgs fields responsible for all physical charges, and not only a single Higgs field that is responsible for the gravitational charge, i.e. mass. That nascent theory also postulates additional forces outside the standard model.

  • pg

    Lepton universality, enshrined within the standard model (SM), requires equality of couplings between the gauge bosons and the three families of leptons. Hints of lepton nonuniversal effects in B+→K+e+e− and B+→K+μ+μ− decays [1] have been seen, but no definitive observation of a deviation has yet been made. However, a large class of models that extend the SM contain additional interactions involving enhanced couplings to the third generation that would violate this principle. Semileptonic decays of b hadrons (particles containing a b quark) to third generation leptons provide a sensitive probe for such effects. In particular, the presence of additional charged Higgs bosons, which are often required in these models, can have a significant effect on the rate of the semitauonic decay B¯0→D*+τ−ν¯τ [2]. The use of charge-conjugate modes is implied throughout this Letter.

    Semitauonic B meson decays have been observed by the BABAR and Belle collaborations [3–7]. Recently BABAR reported updated measurements [6,7] of the ratios of branching fractions, R(D*)≡B(B¯0→D*+τ−ν¯τ)/B(B¯0→D*+μ−ν¯μ) and R(D)≡B(B¯0→D+τ−ν¯τ)/B(B¯0→D+μ−ν¯μ), which show deviations of 2.7σ and 2.0σ, respectively, from the SM predictions [8,9]. These ratios have been calculated to high precision, owing to the cancellation of most of the uncertainties associated with the strong interaction in the B to D(*) transition. Within the SM they differ from unity mainly because of phase-space effects due to the differing charged lepton masses.
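
    A “deviation in σ” is simply the gap between measurement and prediction divided by their combined uncertainty. A rough illustration in Python, using the LHCb measurement quoted later in this excerpt and an assumed literature value for the SM prediction (R(D*) ≈ 0.252 ± 0.003, not stated in this excerpt):

```python
import math

def pull(measured, sigma_meas, predicted, sigma_pred):
    """Deviation in units of sigma between a measurement and a prediction."""
    return (measured - predicted) / math.hypot(sigma_meas, sigma_pred)

# LHCb measurement of R(D*); the SM prediction is an assumed literature value
deviation = pull(0.336, 0.034, 0.252, 0.003)
print(f"deviation ~ {deviation:.1f} sigma")
```

    The tiny prediction uncertainty is why these ratios are such clean probes: almost all of the σ comes from the measurement side.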

    This Letter presents the first measurement of R(D*) in hadron collisions using the data recorded by the LHCb detector at the Large Hadron Collider in 2011–2012. The data correspond to integrated luminosities of 1.0 fb−1 and 2.0 fb−1, collected at proton-proton (pp) center-of-mass energies of 7 TeV and 8 TeV, respectively. The B¯0→D*+τ−ν¯τ decay with τ−→μ−ν¯μντ (the signal channel) and the B¯0→D*+μ−ν¯μ decay (the normalization channel) produce identical visible final-state topologies; consequently both are selected by a common reconstruction procedure. The selection identifies semileptonic B¯0 decay candidates containing a muon candidate and a D*+ candidate reconstructed through the decay chain D*+→D0(→K−π+)π+. The selected sample contains contributions from the signal and the normalization channel, as well as several background processes, which include partially reconstructed B decays and candidates from combinations of unrelated particles from different b hadron decays. The kinematic and topological properties of the various components are exploited to suppress the background contributions. Finally, the signal, the normalization component and the residual background are statistically disentangled with a multidimensional fit to the data using template distributions derived from control samples or from simulation validated against data.

    The LHCb detector [10,11] is a single-arm forward spectrometer covering the pseudorapidity range 2<η<5. Quality requirements are applied to the tracks of the charged particles that originate from a candidate D0 decay: their momenta must exceed 5 GeV/c and at least one must have pT>1.5 GeV/c. The momentum vector of the D0 candidate must point back to one of the primary vertices (PVs) in the event and the reconstructed mass must be consistent with the known D0 mass [25].

    In the offline reconstruction, the D0 candidates satisfying the trigger are further required to have well-identified K− and π+ daughters, and the decay vertex is required to be significantly separated from any PV. The invariant mass of the D0 candidate is required to be within 23.5 MeV/c2 of the peak value, corresponding to approximately three times the D0 mass resolution. These candidates are combined with low-energy pions to form candidate D*+→D0π+ decays, which are subjected to a kinematic and vertex fit to the decay chain. Candidates are then required to have a mass difference Δm≡m(D0π+)−m(D0) within 2 MeV/c2 of the known value, corresponding to approximately 2.5 times the observed resolution. The muon candidate is required to be consistent with a muon signature in the detector, to have momentum 3<p<100 GeV/c, to be significantly separated from the primary vertex, and to form a good vertex with the D0 candidate. The D*+μ− combinations are required to have an invariant mass less than 5280 MeV/c2 and their momentum vector must point approximately to one of the reconstructed PV locations, which removes combinatoric candidates while preserving a large fraction of semileptonic decays. In addition to the signal candidates, two independent samples of “wrong sign” candidates, D*+μ+ and D0π−μ−, are formed for estimating the combinatorial background. The former represents random combinations of D*+ candidates with muons from unrelated decays, and the latter is used to model the contribution of misreconstructed D*+ decays. Mass regions 5280<m(D*+μ−)<10000 MeV/c2 and 139<Δm<160 MeV/c2 are included in all samples for study of the combinatorial backgrounds. Finally, a sample of candidates is selected where the track paired with the D*+ fails all muon identification requirements. These D*+h± candidates are used to model the background from hadrons misidentified as muons.

    To suppress the contributions of partially reconstructed B decays, including B decays to pairs of charmed hadrons, and semileptonic B¯→D*+(nπ)μ−νμ decays with n≥1 additional pions, the D*+μ− candidates are required to be isolated from additional tracks in the event. An algorithm is developed and trained to determine whether a given track is likely to have originated from the signal B candidate or from the rest of the event based on a multivariate analysis (MVA) method. For each track in the event, the algorithm employs information on the track separation from the PV, the track separation from the decay vertex, the angle between the track and the candidate momentum vector, the decay length significance of the decay vertex under the hypothesis that the track does not originate from the candidate and the change in this significance under the hypothesis that it does. A signal sample, enriched in B¯0→D*+τ−ν¯τ and B¯0→D*+μ−ν¯μ decays, is constructed by requiring that no tracks in the event reach a threshold in the MVA output. In addition, the output is used to select three control samples enriched in partially reconstructed B decays of interest for background studies by requiring that only one or two tracks be selected by the MVA (D*+μ−π− or D*+μ−π+π−) or that at least one track selected by the MVA passes K± identification requirements (D*+μ−K±). These samples are depleted of B¯0→D*+μ−ν¯μ and B¯0→D*+τ−ν¯τ decays and are used to study and constrain the shapes of remaining backgrounds in the signal sample.

    The efficiencies ϵs and ϵn for the signal and the normalization channels, respectively, are determined in simulation. These include the effects of the trigger, event reconstruction, event selection, particle identification procedure, isolation method, and the detector acceptance. To account for the effect of differing detector occupancy distributions between simulation and data, the simulated samples are reweighted to match the occupancy observed in data. The overall efficiency ratio is ϵs/ϵn=(77.6±1.4)%, with the deviation from unity primarily due to the particle identification, which dominantly removes low-pT muon candidates, and vertex quality requirements.

    The separation of the signal from the normalization channel, as well as from background processes, is achieved by exploiting the distinct kinematic distributions that characterize the various decay modes, resulting from the μ−τ mass difference and the presence of extra neutrinos from the decay τ−→μ−ν¯μντ. The most discriminating kinematic variables are the following quantities, computed in the B rest frame: the muon energy, E∗μ; the missing mass squared, defined as m2miss=(pμB−pμD−pμμ)2; and the squared four-momentum transfer to the lepton system, q2=(pμB−pμD)2, where pμB, pμD and pμμ are the four-momenta of the B meson, the D*+ meson and the muon. The determination of the rest-frame variables requires knowledge of the B candidate momentum vector in the laboratory frame, which is estimated from the measured parameters of the reconstructed final-state particles. The B momentum direction is determined from the unit vector from the associated PV to the B decay vertex. The component of the B momentum along the beam axis is approximated using the relation (pB)z=(mB/mreco)(preco)z, where mB is the known B mass, and mreco and preco are the mass and momentum of the system of reconstructed particles. The rest-frame variables described above are then calculated using the resulting estimated B four-momentum and the measured four-momenta of the μ− and D*+. The rest-frame variables are shown in simulation studies to have sufficient resolution (≈15%–20% full width at half maximum) to preserve the discriminating features of the original distributions.
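
    The rest-frame variables are plain Minkowski products once the four-momenta are known. A toy sketch (the four-vectors below are made-up illustrative numbers in the B rest frame, not data):

```python
import numpy as np

def mink2(p):
    """Minkowski square E^2 - |p|^2 of a four-vector (E, px, py, pz)."""
    return p[0]**2 - np.dot(p[1:], p[1:])

# Toy four-momenta in the B rest frame, in GeV (illustrative numbers only)
m_B = 5.28
p_B   = np.array([m_B, 0.0, 0.0, 0.0])   # B meson at rest in its own frame
p_Dst = np.array([2.5, 0.0, 0.0, 1.5])   # D*+ candidate
p_mu  = np.array([1.0, 0.5, 0.0, -0.5])  # muon candidate

# Squared four-momentum transfer to the lepton system
q2 = mink2(p_B - p_Dst)
# Missing mass squared: whatever the unobserved neutrinos carried away
m2_miss = mink2(p_B - p_Dst - p_mu)

print(f"q2 = {q2:.2f} GeV^2, m2_miss = {m2_miss:.2f} GeV^2")
```

    A large positive m2_miss is the signature of the extra neutrinos in the tau channel, which is why this variable discriminates signal from the normalization mode.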

    Simulated events are used to derive kinematic distributions from signal and B backgrounds that are used to fit the data. The hadronic transition-matrix elements for B¯0→D*+τ−ν¯τ and B¯0→D*+μ−ν¯μ decays are described using form factors derived from heavy quark effective theory [26]. Recent world averages for the corresponding parameters are taken from Ref. [27]. These values, along with their correlations and uncertainties, are included as external constraints on the respective fit parameters. The hadronic matrix elements describing B¯0→D*+τ−ν¯τ decays include a helicity-suppressed component, which is negligible in B¯0→D*+μ−ν¯μ decays [28]. This parameter is not well constrained by data; hence, the central value and uncertainty from the sum rule presented in Ref. [8] are used as a constraint. It is assumed that the kinematic properties of the B¯0→D*+τ−ν¯τ decay are not modified by any SM extensions.

    For the background semileptonic decays B¯→[D1(2420),D∗2(2460),D′1(2430)]μ−ν¯μ [collectively referred to as B¯→D**(→D*+π)μ−ν¯μ], form factors are taken from Ref. [29]. The slope of the Isgur-Wise function [30,31] is included as a free parameter in the fit, with a constraint derived from fitting the D*+μ−π− control sample. This fit also serves to validate this choice of model for this background. Contributions from B¯0s→[D′s1+(2536),D∗s2+(2573)]μ−ν¯μ decays use a similar parametrization, keeping only the lowest-order terms. Semileptonic decays to heavier charmed hadrons decaying as D**→D*+ππ and semitauonic decays B¯→[D1(2420),D∗2(2460),D′1(2430)]τ−ντ are modeled using the ISGW2 [32] parametrization. To improve the modeling for the former, a fit is performed to the D*+μ−π+π− control sample to generate an empirical correction to the q2 distribution, as the resonances that contribute to this final state and their respective form factors are not known. The contribution of semimuonic decays to excited charm states amounts to approximately 12% of the normalization mode in the fit to the signal sample.

    An important background source is B decays into final states containing two charmed hadrons, B¯→D*+HcX, followed by semileptonic decay of the charmed hadron Hc→μνμX. This process occurs at a total rate of 6%–8% relative to the normalization mode. The template for this process is generated using a simulated event sample of B+ and B0 decays, with an appropriate admixture of final states. Corrections to the simulated template are obtained by fitting the D*+μ−K± control sample. A similar simulated sample is also used to generate kinematic distributions for final states containing a tertiary muon from B¯→D*+D−sX decays, with D−s→τ−ν¯τ and τ−→μ−ν¯μντ.

    The kinematic distributions of hadrons misidentified as muons are derived based on the sample of D*+h± candidates. Control samples of D*+ (Λ) decays are used to determine the probabilities for a π or K (p) to be misidentified as a muon, and to generate a 3×3 matrix of probabilities for each species to satisfy the criteria for identification as a π, K or p. These are used to determine the composition of the D*+h± sample in order to model the background from hadrons misidentified as muons. Two methods are developed to handle the unfolding of the individual contributions of π, K, and p, which result in different values for R(D*). The average of the two methods is taken as the nominal central value, and half the difference is assigned as a systematic uncertainty.

    Combinatorial backgrounds are classified based on whether or not a genuine D*+→D0π+ decay is present. Wrong-sign D0π−μ− combinations are used to determine the component with misreconstructed or false D*+ candidates. The size of this contribution is constrained by fitting the Δm distribution of D*+μ− candidates in the full Δm region. The contribution from correctly reconstructed D*+ candidates combined with μ− from unrelated b hadron decays is determined from wrong-sign D*+μ+ combinations. The size of this contribution is constrained by use of the mass region 5280<m(D*+μ∓)<10000 MeV/c2, which determines the expected ratio of D*+μ− to D*+μ+ yields. In both cases, the contributions of misidentified muons are subtracted when generating the kinematic distributions for the fit.

    The binned m2miss, E∗μ, and q2 distributions in data are fit using a maximum likelihood method with three-dimensional templates representing the signal, the normalization and the background sources. To avoid bias, the procedure is developed and finalized without knowledge of the resulting value of R(D*). The templates extend over the kinematic region −2<m2miss<10 GeV2/c4 in 40 bins, 100<E∗μ<2500 MeV in 30 bins, and −0.4<q2<12.6 GeV2/c4 in 4 bins. The fit extracts the relative contributions of signal and normalization modes and their form factors; the relative yields of each of the B¯→D**(→D*+π)μν modes and their form factors; the relative yields of B¯0s→D**s+(→D*+K0S)μ−ν¯μ and B¯→D**(→D*+ππ)μ−ν¯ decays; the relative yield of B¯→D*+Hc(→μνX′)X decays; the yield of misreconstructed D*+ and combinatorial backgrounds; and the background yield from hadrons misidentified as muons, separately above and below |pμ|=10 GeV. Uncertainties in the shapes of the templates due to the finite number of simulated events, which are therefore uncorrelated bin to bin, are incorporated directly into the likelihood using the Beeston-Barlow “lite” procedure [33]. The fit includes shape uncertainties with bin-to-bin correlations (e.g., form factor uncertainties) via interpolation between nominal and alternative histograms. Control samples for partially reconstructed backgrounds (i.e., D*+μ−π−, D*+μ−π+π−, and D*+μ−K±) are fit independently from the fit to the signal sample. Since the selections used for these control samples include inverting the isolation requirement used to select the signal sample, this method allows for the determination of the corrections to the B¯→D*+Hc(→μνX′)X and B¯→D*+ππμ−ν¯μ backgrounds with negligible influence from the signal and normalization events. The results are validated with an independently developed alternative fit. In this second approach, control samples are fit simultaneously with the signal sample with correction parameters allowed to vary, allowing correlations among parameters to be incorporated exactly. This fit also forgoes the use of interpolation in favor of reweighting the simulated samples and recomputing the kinematic distributions for each value of the corresponding parameters. The two fits are extensively cross-checked and give consistent results.
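The mechanics of a binned Poisson maximum-likelihood template fit can be illustrated with a toy. The sketch below is emphatically not the LHCb fit: it uses one observable instead of three, two hypothetical Gaussian templates standing in for the signal and normalization shapes, and a brute-force grid scan in place of a real minimizer. It only shows how component yields are extracted by comparing template mixtures to binned data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical 1D templates standing in for the 3D templates of the Letter.
bins = np.linspace(-2, 10, 41)
sig, _ = np.histogram(rng.normal(6.0, 2.0, 50_000), bins=bins)
norm, _ = np.histogram(rng.normal(0.5, 1.0, 50_000), bins=bins)
sig = sig / sig.sum()      # normalize templates to unit area
norm = norm / norm.sum()

# Pseudodata: 500 "signal" plus 10000 "normalization" events, Poisson-fluctuated.
data = rng.poisson(500 * sig + 10_000 * norm)

def nll(n_sig, n_norm):
    """Binned Poisson negative log-likelihood (constant terms dropped)."""
    mu = np.clip(n_sig * sig + n_norm * norm, 1e-9, None)
    return float(np.sum(mu - data * np.log(mu)))

# Brute-force grid scan for the minimum (a stand-in for a proper minimizer).
best = min((nll(s, n), s, n)
           for s in range(300, 701, 5)
           for n in range(9500, 10501, 10))
ns_hat, nn_hat = best[1], best[2]
print(ns_hat, nn_hat)   # fitted yields, close to the injected 500 and 10000
```

The same idea, extended to three observables, many background templates, and bin-by-bin template uncertainties, underlies the fit described above.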

    The results of the fit to the signal sample are shown in Fig. 1. Values of the B¯0→D*+μ−ν¯μ form factor parameters determined by the fit agree with the current world average values. The fit finds 363 000±1600 B¯0→D*+μ−ν¯μ decays in the signal sample and an uncorrected ratio of yields N(B¯0→D*+τ−ν¯τ)/N(B¯0→D*+μ−ν¯μ)=(4.54±0.46)×10−2. Accounting for the τ−→μ−ν¯μντ branching fraction [25] and the ratio of efficiencies results in R(D*)=0.336±0.034, where the uncertainty includes the statistical uncertainty, the uncertainty due to form factors, and the statistical uncertainty in the kinematic distributions used in the fit. As the signal yield is large, this uncertainty is dominated by the determination of various background yields in the fit and their correlations with the signal, which are as large as −0.68 in the case of B¯→D*+Hc(→μνX′)X.
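The step from the raw yield ratio to R(D*) can be checked by hand. The sketch below is just arithmetic with numbers quoted in the Letter: the uncorrected yield ratio (4.54±0.46)×10−2, the simulation-derived efficiency ratio εs/εn = 77.6%, and the world-average B(τ−→μ−ν¯μντ) ≈ 17.39%.

```python
# Reproduce R(D*) from the raw yield ratio quoted in the Letter.
raw_ratio = 4.54e-2   # N(B0 -> D*+ tau nu) / N(B0 -> D*+ mu nu), uncorrected
eff_ratio = 0.776     # eps_signal / eps_normalization, from simulation
br_tau_mu = 0.1739    # B(tau- -> mu- nubar_mu nu_tau), world average

# The tau is only seen via its muonic decay, so the raw ratio must be divided
# by that branching fraction; the efficiency ratio corrects for the differing
# acceptance of the two channels.
r_dstar = raw_ratio / (br_tau_mu * eff_ratio)
print(round(r_dstar, 3))   # -> 0.336, matching the quoted central value
```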

    FIG. 1. Distributions of m2miss (left) and E∗μ (right) of the four q2 bins of the signal data, overlaid with projections of the fit model with all normalization and shape parameters at their best-fit values. Below each panel differences between the data and fit are shown, normalized by the Poisson uncertainty in the data. The bands give the 1σ template uncertainties.

    Systematic uncertainties on R(D*) are summarized in Table I. The uncertainty in extracting R(D*) from the fit (model uncertainty) is dominated by the statistical uncertainty of the simulated samples; this contribution is estimated via the reduction in the fit uncertainty when the sample statistical uncertainty is not considered in the likelihood. The systematic uncertainty from the kinematic shapes of the background from hadrons misidentified as muons is taken to be half the difference in R(D*) between the two unfolding methods. Form factor parameters are included in the likelihood as nuisance parameters, and represent a source of systematic uncertainty. The total uncertainty on R(D*) estimated from the fit therefore incorporates these sources. To separate the statistical uncertainty and the contribution of the form factor uncertainty, the fit is repeated with form factor parameters fixed to their best-fit values, and the reduction in uncertainty is used to determine the contribution from the form factor uncertainties. The systematic uncertainty from empirical corrections to the kinematic distributions of B¯→D**(→D*+ππ)μ−ν¯μ and B¯→D*+Hc(→μνX′)X backgrounds is also computed by fixing the relevant parameters to their best-fit values, as described above. The contribution of B¯→D**(→D*+π)τ−ν¯τ, B¯→D**(→D*+ππ)τ−ν¯τ and B¯0s→[D+s1(2536),D+s2(2573)]τ−ν¯τ events is fixed to 12% of the corresponding semimuonic modes, with half of this yield assigned as a systematic uncertainty on R(D*). Similarly, the contribution of B¯→D*+D−s(→τ−ν¯τ) decays is fixed using known branching fractions [25], and 30% changes in the nominal value are taken as a systematic uncertainty. Corrections to the modeling of variables related to the pointing of the D0 candidates to the PV are needed to derive the kinematic distributions for the fit. These corrections are derived from a comparison of simulated B¯0→D*+μ−ν¯μ events with a pure B¯0→D*+μ−ν¯μ data sample, and a systematic uncertainty is assigned by computing an alternative set of corrections using a different selection for this data subsample.

    TABLE I. Systematic uncertainties in the extraction of R(D*).

    Model uncertainties                          Absolute size (×10−2)
    Simulated sample size                        2.0
    Misidentified μ template shape               1.6
    B¯0→D*+(τ−/μ−)ν¯ form factors                0.6
    B¯→D*+Hc(→μνX′)X shape corrections           0.5
    B(B¯→D**τ−ν¯τ)/B(B¯→D**μ−ν¯μ)                0.5
    B¯→D**(→D*ππ)μν shape corrections            0.4
    Corrections to simulation                    0.4
    Combinatorial background shape               0.3
    B¯→D**(→D*+π)μ−ν¯μ form factors              0.3
    B¯→D*+(Ds→τν)X fraction                      0.1
    Total model uncertainty                      2.8

    Normalization uncertainties                  Absolute size (×10−2)
    Simulated sample size                        0.6
    Hardware trigger efficiency                  0.6
    Particle identification efficiencies         0.3
    Form factors                                 0.2
    B(τ−→μ−ν¯μντ)                                <0.1
    Total normalization uncertainty              0.9

    Total systematic uncertainty                 3.0

    The expected yield of D*+μ− candidates compared to D*+μ+ candidates (used to model the combinatorial background) varies as a function of m(D*+μ∓). The size of this effect is estimated in the 5280<m(D*+μ∓)<10000  MeV/c2 region and the uncertainty is propagated as a systematic uncertainty on R(D*).

    Uncertainties in converting the fitted ratio of signal and normalization yields into R(D*) (normalization uncertainties) come from the finite statistical precision of the simulated samples used to determine the efficiency ratio, and from several other sources. The efficiency of the hardware triggers obtained in simulation differs between magnet polarities and between pythia versions; the midpoint of the predictions is taken as the nominal value and the range of variation is taken as a systematic uncertainty on the efficiency ratio. Particle identification efficiencies are applied to simulation based on binned J/ψ→μ+μ− and D0→K−π+ control samples, which introduces a systematic uncertainty that is estimated by binning the control samples differently and by comparing to simulated particle identification. The signal and normalization form factors alter the expected ratio of detector acceptances, and 1σ variations in these with respect to the world averages are used to assign a systematic uncertainty. Finally, the uncertainty in the current world average value of B(τ−→μ−ν¯μντ) contributes a small normalization uncertainty.

    In conclusion, the ratio of branching fractions R(D*)=B(B¯0→D*+τ−ν¯τ)/B(B¯0→D*+μ−ν¯μ) is measured to be 0.336±0.027(stat)±0.030(syst). The measured value is in good agreement with previous measurements at BABAR and Belle [3,5] and is 2.1 standard deviations greater than the SM expectation of 0.252±0.003 [8]. This is the first measurement of any decay of a b hadron into a final state with tau leptons at a hadron collider, and the techniques demonstrated in this Letter open the possibility of studying a broad range of similar b hadron decay modes with multiple missing particles in hadron collisions in the future.
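The quoted 2.1σ tension with the Standard Model follows directly from the numbers in the conclusion, assuming the statistical, systematic, and prediction uncertainties are independent and combined in quadrature:

```python
import math

r_meas, stat, syst = 0.336, 0.027, 0.030  # this measurement
r_sm, sm_unc = 0.252, 0.003               # SM expectation [8]

# Total uncertainty on the difference, combining all three sources in quadrature.
sigma_tot = math.sqrt(stat**2 + syst**2 + sm_unc**2)
z = (r_meas - r_sm) / sigma_tot
print(round(z, 1))   # -> 2.1 standard deviations
```

Note that 2.1σ is far short of the 5σ threshold conventionally required to claim a discovery, which is why this result is reported as an anomaly rather than evidence of new physics.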

    • pg

      Sorry, it came out badly written


      • Gerrit

        please shorten your comment, thanks

        • pg

          It's not a comment, it's the article from CERN

          • Gerrit

            look at the bottom half – that’s not an article – better delete that bit

            • pg

              That's the problem: when I paste it in it's OK, but when it gets visualised it does that. You can find the article on the Physical Review Letters website