Higgs Boson and statistical issues
Thu 12 Jul 2012  9:15
I am recording here an email exchange that took place on the ISBA mailing list about the Higgs boson.
[From Tony O'Hagan, a very well-known Bayesian]
Dear Bayesians,
A question from Dennis Lindley prompts me to consult this list in search of
answers.
We've heard a lot about the Higgs boson. The news reports say that the LHC
needed convincing evidence before they would announce that a particle had
been found that looks like (in the sense of having some of the right
characteristics of) the elusive Higgs boson. Specifically, the news referred
to a confidence interval with 5-sigma limits.
Now this appears to correspond to a frequentist significance test with an
extreme significance level. Five standard deviations, assuming normality,
means a p-value of around 0.0000005. A number of questions spring to mind.
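For reference, the 5-sigma figure converts to a p-value via the normal tail probability; the "around 0.0000005" above is the two-sided value, while particle physicists usually quote the one-sided one:

```python
import math

def normal_tail(z):
    # P(Z > z) for a standard normal Z
    return math.erfc(z / math.sqrt(2)) / 2

p_one_sided = normal_tail(5.0)   # ~ 2.87e-7, the value usually quoted
p_two_sided = 2 * p_one_sided    # ~ 5.7e-7, the "around 0.0000005" above
```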
1. Why such an extreme evidence requirement? We know from a Bayesian
perspective that this only makes sense if (a) the existence of the Higgs
boson (or some other particle sharing some of its properties) has extremely
small prior probability and/or (b) the consequences of erroneously announcing
its discovery are dire in the extreme. Neither seems to be the case, so why
5-sigma?
2. Rather than ad hoc justification of a p-value, it is of course better to
do a proper Bayesian analysis. Are the particle physics community completely
wedded to frequentist analysis? If so, has anyone tried to explain what bad
science that is?
3. We know that given enough data it is nearly always possible for a
significance test to reject the null hypothesis at arbitrarily low p-values,
simply because the parameter will never be exactly equal to its null value.
And apparently the LHC has accumulated a very large quantity of data. So
could even this extreme p-value be illusory?
If anyone has any answers to these or related questions, I'd be interested to
know and will be sure to pass them on to Dennis.
Regards,
Tony

Professor A O'Hagan Email: a.ohagan@sheffield.ac.uk
Department of Probability and Statistics
University of Sheffield Phone: +44 114 222 3773
Hicks Building
Sheffield S3 7RH, UK Fax: +44 114 222 3759
 http://www.tonyohagan.co.uk/ 
Re: Higgs Boson and statistical issues
Thu 12 Jul 2012  9:16
Dear Tony, dear all,
this paper provides some hints
http://arxiv.org/abs/1112.3620
(see also http://www.roma1.infn.it/~dagos/badmath/index.html#added )
Moreover
- The "Higgs p-values" do not seem to be what a professional
(frequentist) statistician would mean by that term:
> there is no serious null hypothesis without the Higgs, because
a Standard Model without the Higgs mechanism completely loses its meaning;
> what is the meaning of a p-value that depends on the mass?
(a number, calculated under the hypothesis that the Higgs does not
exist, reported as a function of its mass...)
- Also the "95% CL exclusion" regions have dubious meaning,
because they are derived from "prescriptions" that do not
provide a quantitative statement of how confident we should
be about something.
Regards, and best greetings to Dennis,
Giulio
Re: Higgs Boson and statistical issues
Thu 12 Jul 2012  9:17
Hello Bayesians,
Below are a few answers. Thanks for the interest.
Louis Lyons (organiser of PHYSTAT series of meetings,
and member of CMS Collaboration at CERN.)
________________________________________
From: ISBA Webmaster [hans@stat.duke.edu]
Sent: 11 July 2012 02:46
To: news@bayesian.org
Subject: Higgs boson
Dear Bayesians,
A question from Dennis Lindley prompts me to consult this list in search of
answers.
We've heard a lot about the Higgs boson. The news reports say that the LHC
needed convincing evidence before they would announce that a particle had
been found that looks like (in the sense of having some of the right
characteristics of) the elusive Higgs boson.
************ The test statistic we use for looking at p-values is basically
the likelihood
ratio for the two hypotheses (H_0 = Standard Model (S.M.) of Particle
Physics, but no Higgs;
H_1 = S.M. with Higgs). A small p_0 (and a reasonable p_1) then implies that
H_1 is a better
description of the data than H_0. This of course does not prove that H_1 is
correct, but
maybe Nature corresponds to some H_2, which is more like H_1 than it is like
H_0. Indeed
in principle data will never prove a theory is true, but the more
experimental tests it survives,
the happier we are to use it - e.g. Newtonian mechanics was fine for
centuries till the arrival
of Relativity.
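As a toy illustration of such a likelihood-ratio statistic (a single counting bin with hypothetical event counts, not the actual CMS construction):

```python
import math

def poisson_loglik(n, mu):
    # log-likelihood of observing n events given expectation mu
    return n * math.log(mu) - mu - math.lgamma(n + 1)

# Hypothetical single bin: H_0 (no Higgs) expects 100 background events,
# H_1 (with Higgs) expects 130; suppose 128 are observed.
n_obs = 128
ll0 = poisson_loglik(n_obs, 100.0)   # H_0: background only
ll1 = poisson_loglik(n_obs, 130.0)   # H_1: signal plus background
q = -2 * (ll0 - ll1)                 # likelihood-ratio statistic; large q favours H_1
```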
************* In the case of the Higgs, it can decay to different sets of
particles, and these rates
are defined by the S.M. We measure these ratios, but with large
uncertainties with the present data.
They are consistent with the S.M. predictions, but it could be much more
convincing with more
data. Hence the caution about saying we have discovered the Higgs of the S.M.
Specifically, the news referred
to a confidence interval with 5-sigma limits.
************** 5-sigma really refers to p_0.
Now this appears to correspond to a frequentist significance test with an
extreme significance level. Five standard deviations, assuming normality,
means a p-value of around 0.0000005. A number of questions spring to mind.
1. Why such an extreme evidence requirement? We know from a Bayesian
perspective that this only makes sense if (a) the existence of the Higgs
boson (or some other particle sharing some of its properties) has extremely
small prior probability and/or (b) the consequences of erroneously announcing
its discovery are dire in the extreme. Neither seems to be the case, so why
5-sigma?
********************** This is an unfortunate tradition, one invoked more
readily by journal editors than by particle physicists. Reasons are:
a) Historically we have had 3 and 4 sigma effects that have gone away
b) The 'Look Elsewhere Effect' (LEE). We are worried about the chance of a
statistical fluctuation mimicking our observation, not only at the given mass
of 125 GeV but anywhere in the spectrum. The quoted p-values are
'local', i.e. the chance of a fluctuation at the observed mass. Unfortunately
the LEE correction factor is not very precisely defined, because of
ambiguities about what is meant by 'elsewhere'.
c) The possibility of some systematic effect (characterised by a nuisance
parameter)
being more important than allowed for in the analysis, or even overlooked;
see the
recent experiment at CERN which claimed that neutrinos travelled faster than
the speed of
light.
d) A subconscious use of Bayes' Theorem to turn p-values into probabilities
about the
hypotheses.
All the above vary from experiment to experiment, so we realise that it is a
bit unfair to
use the same standard for discovery for all analyses. We prefer just to quote
the p-values
(or whatever).
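The look-elsewhere effect in (b) can be sketched numerically: scanning K roughly independent mass windows inflates a local p-value into a much larger global one, roughly by a factor of K (the value K = 100 below is purely hypothetical; as noted, "elsewhere" is ambiguous in practice):

```python
# Local-to-global p-value inflation from the look-elsewhere effect.
p_local = 2.87e-7                   # one-sided 5-sigma local p-value
K = 100                             # hypothetical number of independent windows
p_global = 1 - (1 - p_local) ** K   # ~ K * p_local when p_local is tiny
```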
2. Rather than ad hoc justification of a p-value, it is of course better to
do a proper Bayesian analysis. Are the particle physics community completely
wedded to frequentist analysis?
************** No, we are not anti-Bayesian, and indeed our test statistic is
a likelihood ratio.
If you like, you can regard our p-values as an attempt to calibrate the
meaning of a
particular value of the likelihood ratio.
************** We actually recommend that for parameter determination at the
LHC, it is
useful to compare Bayesian and Frequentist methods. But for comparing
hypotheses
(e.g. an experimental distribution is fitted by H_0 = a smooth distribution;
or by H_1 = a smooth
distribution plus a localised peak), we are worried about what priors to use
for the extra
parameters that occur in the alternative hypothesis. ******* We would welcome
advice.**********
If so, has anyone tried to explain what bad
science that is?
*************** Comment ignored
3. We know that given enough data it is nearly always possible for a
significance test to reject the null hypothesis at arbitrarily low p-values,
simply because the parameter will never be exactly equal to its null value.
And apparently the LHC has accumulated a very large quantity of data. So
could even this extreme p-value be illusory?
************** We are aware of this. But in fact, although the LHC has
accumulated enormous
amounts of data, the Higgs search is like looking for a needle in a
haystack. The final samples
of events that are used to look for the Higgs contain only tens to thousands
of events.
If anyone has any answers to these or related questions,
***************** These and related issues are discussed to some extent in my
article "Open statistical issues in Particle Physics", Ann. Appl. Stat.
Volume 2, Number 3 (2008),
887-915. It is supposed to be statistician-friendly.
Re: Higgs Boson and statistical issues
Thu 12 Jul 2012  9:18
Dear Tony
I have written a bit about the explanation of the P-value here
http://understandinguncertainty.org/explaining-5-sigma-higgs-how-well-did-they-do#comment-1449
The CERN teams' reports also discuss what they would expect were the
Higgs there, so there seems a real possibility of a likelihood ratio
being computed, which would be a start. Not sure why they don't do this.
d(avid Spiegelhalter)
Re: Higgs Boson and statistical issues
Thu 12 Jul 2012  9:20
[Moderator's note: this message is from
Harrison B. Prosper
Kirby W. Kemper Professor of Physics
Distinguished Research Professor
Florida State University
harry@hep.fsu.edu
]
Dear Tony,
First some general remarks, then I'll try to answer your questions.
I am in an interesting position regarding the "Higgs" boson discovery: I am
thrilled to be an insider with respect to the discovery and I happen also to
be one of the relatively few particle physicists who actually regard Bayesian
reasoning as "exactly what is needed" to make sense of what we do. The vast
majority of my colleagues believe that p-values are objective and therefore
"scientific". Therefore, many of my colleagues move mountains, or at any rate
consume prodigious amounts of computing power, to check that some (typically
ad hoc) procedure covers (that is, has its claimed frequentist coverage).
For your edification I've attached a (PUBLIC!) plot [Moderator's note:
available at http://bayesian.org/webfm_send/274 ] of (slightly massaged)
binned data from my collaboration (CMS) that shows a spectrum arising from
proton-proton collisions that resulted in the creation of a pair of photons
(gammas in high energy argot). The Standard Model predicts that the Higgs
boson should decay (that is break up) into a pair of photons. (The Higgs is
predicted to decay in other ways too, such as a pair of Z bosons.) The bump
in the plot at around 125 GeV is evidence for the existence of some particle
of a definite mass that decays into a pair of photons. That something, as far
as we've been able to ascertain, is likely to be the Higgs boson. These data,
along with data in which proton-proton collisions yield two Z bosons, are the
basis of our 5-sigma claim.
These data can be modeled with the function
f(x) = exp(a0 + a1*x + a2*x^2) + s * Gaussian(x, m, w)
where "x" is the mass of the diphoton (pair of photons), and the first term
describes the smoothly falling (background) spectrum, while the second term
models the bump. "s" is the total expected signal, "m" is the mass of the new
particle and "w" is the width of the bump. The total background, that is, the
"noise" is just the integral of the first term. "a0", "a1", "a2" are nuisance
parameters. This is therefore a 6-parameter problem for which we have no
prior information (or choose to act as if this is so) for the six parameters.
The analysis of this spectrum has caused a lot of angst about the
"look-elsewhere effect" (multiple hypothesis testing), which I think is a red
herring in this context.
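A minimal numerical sketch of this six-parameter model; the coefficient values below are hypothetical, chosen only to give a falling background with a bump near 125 GeV:

```python
import math

def gaussian(x, m, w):
    # normalised Gaussian bump centred at mass m with width w
    return math.exp(-0.5 * ((x - m) / w) ** 2) / (w * math.sqrt(2.0 * math.pi))

def f(x, a0=6.0, a1=-0.02, a2=0.0, s=100.0, m=125.0, w=1.5):
    # exponential background (a0, a1, a2 are nuisance parameters)
    # plus the signal bump s * Gaussian(x, m, w)
    return math.exp(a0 + a1 * x + a2 * x * x) + s * gaussian(x, m, w)

# At the bump position the signal term adds visibly to the falling background.
background_at_peak = math.exp(6.0 - 0.02 * 125.0)
total_at_peak = f(125.0)
```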
Now for your questions. See below.
Harrison
Quoting ISBA Webmaster:
Dear Bayesians,
A question from Dennis Lindley prompts me to consult this list in search of
answers.
We've heard a lot about the Higgs boson. The news reports say that the LHC
needed convincing evidence before they would announce that a particle had
been found that looks like (in the sense of having some of the right
characteristics of) the elusive Higgs boson. Specifically, the news referred
to a confidence interval with 5-sigma limits.
Now this appears to correspond to a frequentist significance test with an
extreme significance level. Five standard deviations, assuming normality,
means a p-value of around 0.0000005. A number of questions spring to mind.
1. Why such an extreme evidence requirement? We know from a Bayesian
perspective that this only makes sense if (a) the existence of the Higgs
boson (or some other particle sharing some of its properties) has extremely
small prior probability and/or (b) the consequences of erroneously announcing
its discovery are dire in the extreme. Neither seems to be the case, so why
5-sigma?
The "5-sigma" (p-value = 3.0e-7) is an historical artifact. Over the past
several decades, we have made many a "discovery" that turned out not to be
so. As a consequence, we gradually settled on a p-value thought to be small
enough to reduce the chance that we are fooling ourselves. In fact, we do
have high standards because in our view we are trying to arrive at "true"
statements about the world in the pragmatic sense that these statements yield
predictions that turn out to be correct. Given that the search for the Higgs
took some 45 years, tens of thousands of scientists and engineers, billions
of dollars, not to mention numerous divorces, huge amounts of sleep
deprivation, tens of thousands of bad airline meals, etc., etc., we want to
be as sure as is humanly possible that this is real.
2. Rather than ad hoc justification of a p-value, it is of course better to
do a proper Bayesian analysis. Are the particle physics community completely
wedded to frequentist analysis? If so, has anyone tried to explain what bad
science that is?
I for one would be delighted to see a Bayesian analysis of these data from
you guys! Unfortunately, however, I am forbidden from emailing you the
50,000 "x"s I have right here on my laptop...very frustrating...
3. We know that given enough data it is nearly always possible for a
significance test to reject the null hypothesis at arbitrarily low p-values,
simply because the parameter will never be exactly equal to its null value.
And apparently the LHC has accumulated a very large quantity of data. So
could even this extreme p-value be illusory?
As noted above, small p-value-based "discoveries" have come and gone.
However, the reason I am convinced this is real is not because of the
p-value, nor frankly because of the (pseudo)frequentist method of analysis
showcased on July 4th during the announcement at CERN, a method that I have
repeatedly criticized within my collaboration. Rather it is because when I
study the profile likelihood in the variables "s", "m", and "w" for the 2011
dataset and for the 2012 dataset, I find visually convincing structures in
the profile likelihood at m ~ 125 GeV in both independent datasets, obtained
at different proton beam energies (7 TeV and 8 TeV). Of course, I would
have preferred to do a Bayesian analysis, marginalizing over
"a0", "a1", "a2", and "w", and to study the posterior density in the
variables "s" and "m", but constructing a non-evidence-based prior in
4 dimensions that would pass muster seems quite a chore. Any advice from you
would be welcome. (I favor the recursive reference prior algorithm of
Bernardo, but this would have to be done numerically and I have not yet
figured out how to do so efficiently, while taking into account the "nested
compact sets". The whole thing seems rather daunting.)
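A profile likelihood of the kind described can be sketched in a toy "on/off" counting setup (all counts below are hypothetical; the real analysis profiles the full six-parameter fit over the spectrum):

```python
import math

def poisson_loglik(n, mu):
    # log Poisson likelihood for n observed events, expectation mu > 0
    return n * math.log(mu) - mu - math.lgamma(n + 1)

def profile_loglik(s, n_sig, n_ctrl, tau):
    # "On/off" toy: signal region n_sig ~ Poisson(s + b), control region
    # n_ctrl ~ Poisson(tau * b); the background b is profiled out on a grid.
    best = -float("inf")
    for i in range(1, 2001):
        b = 0.01 * i  # scan b over (0, 20]
        best = max(best, poisson_loglik(n_sig, s + b)
                         + poisson_loglik(n_ctrl, tau * b))
    return best

# Hypothetical counts: 25 events in the signal region, background constrained
# by a control region with a tenfold background rate.
scan = {s: profile_loglik(float(s), 25, 100, 10.0) for s in range(0, 31)}
s_hat = max(scan, key=scan.get)   # profile-likelihood estimate of the signal
```

With these numbers the control region pins the background near b = 10, so the profile likelihood peaks at a signal of 15 events.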