Condensed matter

February 5, 2008


Slowly marking time until spring…

There’s a new paper out (arXiv:0712.3572) which aims to provide a “figure of merit” for proposed experimental programs. It revolves around information entropy – an old concept from communication/information theory developed by Claude Shannon.

The basics of entropy: a communicated “symbol” – a letter or a word of a text, for example – carries information content that increases as it becomes less likely (more surprising). Intuitively this makes sense: if you know exactly what you’re going to hear (say, an airline safety announcement), you tune out because there’s no information transfer, while you pay the most attention when you can’t anticipate what’s next. Mathematically, the information content of a received symbol x with a probability p(x) of occurring is -\log p(x). Note that this sweeps the meaning of “information” into p(x); a string of digits may seem completely random (and thus each one carries information -\log_{10} 0.1 = 1, in base-10 units), but if you know it happens to be \pi starting from the 170th decimal place, suddenly you can predict all the digits and the information content is essentially zero.

What we would like to get is an expectation value (average) of the transmitted information, since you’d like to transmit the maximum content per symbol. The expectation value – entropy – is

H = -\sum_x p(x) \log p(x)

The logarithm factor means that transmitting an occasional highly unlikely symbol is less useful than symbols which appear at roughly equal rates – for two symbols, you get more entropy out of both appearing with a 50% probability than one at 99% and the other at 1%.
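To make this concrete, here’s a quick sketch in Python (mine, not the paper’s): the surprisal function below is just -\log p, and the entropy function averages it over the distribution.

```python
import math

def surprisal(p, base=2):
    """Information content of a symbol with probability p (bits if base=2)."""
    return -math.log(p, base)

def entropy(probs, base=2):
    """Shannon entropy H = -sum_x p(x) log p(x) of a discrete distribution."""
    return sum(-p * math.log(p, base) for p in probs if p > 0)

# An apparently random decimal digit carries one base-10 unit of information:
print(surprisal(0.1, base=10))   # 1.0

# Two symbols at 50/50 beat 99/1 in information per symbol:
print(entropy([0.5, 0.5]))       # 1.0 bits
print(entropy([0.99, 0.01]))     # ~0.08 bits
```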

How does this relate to physics experiments? The author suggests that the proper figure of merit for an experiment (or analysis) is the expected information gain from it – or, perhaps, the information per dollar. The symbols are replaced by outcomes, like “observation/nonobservation of the Standard Model Higgs boson.” The p(x) function is obtained from our a priori theoretical biases, so for example “confirmation of Standard Model” or “discovery of low-scale supersymmetry” carry relatively high probabilities.

This leads to results he considers at odds with conventional wisdom – for example, the search for single top production, a well-predicted Standard Model process that everyone expects to be there, has low entropy (since there’s one large and one small probability), while a low-energy muon decay experiment which has good sensitivity to supersymmetry has high entropy (people think SUSY has a reasonable chance of being realized).

There’s an additional wrinkle: in general you get more entropy by having more symbols/results (in this case the log factor helps you), so the more possible outcomes an experiment has, the more information content you expect. In particular this means that global analyses of the author’s VISTA/SLEUTH type, where you try to test as many channels as possible for departures from the Standard Model, get a boost over dedicated searches for one particular channel.
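As a minimal check of that claim (my toy numbers, not the author’s): for N equally likely outcomes the entropy works out to \log N, so it keeps growing as you add distinguishable results.

```python
import math

# For N equally likely outcomes, H = -sum (1/N) log2(1/N) = log2(N) bits.
for n in (2, 4, 16, 256):
    h = -sum((1 / n) * math.log2(1 / n) for _ in range(n))
    print(n, h)   # 2 -> 1.0, 4 -> 2.0, 16 -> 4.0, 256 -> 8.0
```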

It’s an interesting and thought-provoking paper, although I have a few concerns. The main one is that the probabilities p(x) are shockingly Bayesian: they are entirely driven by current prejudice (unlike the usual case in communication theory, where things are frequentist).

Recall that there’s not much entropy in experiments which have one dominantly probable outcome. On the other hand, should an extremely unlikely outcome be found, the information content of that result is large. (The author determines the most significant experimental discoveries in particle physics since the start of the 70s to be those of the τ and J/ψ. I think this implies that Mark I was the most important experiment of the last four decades.) We are thus in the paradoxical situation that the experiments that produced the most scientific content, by this criterion, are also the ones with the least a priori entropy. The J/ψ was discovered at experiments that weren’t designed specifically to search for it!

How does one compare merit between experiments? We hope the LHC can provide more than a binary yes/no on supersymmetry, for example; if it exists, we would try to measure various parameters, and this would be much more powerful than rare decay experiments that would essentially have access to one or two branching fractions. The partitioning of the space of experimental outcomes has to be correctly chosen for the entropy to be computed, and the spaces for two different experiments may be totally incommensurable. (It’s a bit simpler if you look at everything through “beyond the Standard Model” goggles; with those on, your experiment either finds new physics, or it doesn’t.)

My last major complaint is that the (practical) scientific merit of certain results may be misstated by this procedure (though this is a gut feeling). The proposed metric may not really account for how an experiment’s results fit into the larger picture. Certain unlikely results – the discovery of light Higgs bosons in Υ decays, electroweak-scale quantum gravity, or something similar – would radically change our theoretical biases, and hence our expectations for other experiments. This is a version of the \pi digit problem above; external information can alter your p function in unanticipated ways. It’s unclear to me whether this can be handled in a practical manner, though I can’t claim to be an expert in this statistical realm.

In short: interesting idea, but I would be wary of suggesting that funding agencies use it quite yet.

The Charmed Future

November 13, 2007

Bruce Yabsley has posted a summary/transcript of the question-and-answer panel session he chaired which closed the Charm 07 workshop. It’s a little technical, but gives a nice idea of where we’re going, in both physics results and experimental facilities.

Excitement from Auger

November 8, 2007

Cosmic rays, enigmatic particles that flit through the universe, are intriguing for many reasons. The fastest ones are much more energetic than anything that an earthbound accelerator, or indeed any astronomical process that we can convincingly model, can produce. They also have to come from nearby, in intergalactic terms, because space becomes opaque to them above a certain energy (the GZK cutoff). So what’s making them?

One way to try and find out is to observe a number of them and point them back to their origins, checking to see if they match up with any conceivable source objects. The Pierre Auger Cosmic Ray Observatory has done just that, and has a new result out on the distribution of the highest energy cosmic ray events it has detected (above 5.7 × 10¹⁹ eV), showing an anisotropic distribution of the directions of the particles:

(In the plot, the blue is the part of the sky that Auger can see, the black ovals are individual cosmic ray events, and the red stars are the locations of active galactic nuclei (AGN) — galaxies with active supermassive black holes in their centers — within 75 megaparsecs.) In particular, when cross-correlated with a database of AGN, they see many more events within three degrees of known objects than they would expect from a flat distribution. The choice of object to correlate with was made on a pilot sample and their final statistical significance comes from a second, independent dataset, making this sort of a grey box analysis.
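For flavor, here is roughly what such a counting analysis looks like in code. This is a sketch under big assumptions: the event and AGN coordinates are random stand-ins, the 3° window is taken from the description above, and the exposure weighting a real Auger analysis must fold in is ignored.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit_vectors(ra, dec):
    """Turn right ascension/declination (radians) into unit 3-vectors."""
    return np.stack([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)], axis=-1)

def n_matched(events, catalog, max_sep_deg=3.0):
    """Count events whose nearest catalog object lies within max_sep_deg."""
    cosines = unit_vectors(*events) @ unit_vectors(*catalog).T
    return int(np.sum(cosines.max(axis=1) >= np.cos(np.radians(max_sep_deg))))

def random_directions(n):
    """n directions uniform on the sphere (no exposure map applied)."""
    return rng.uniform(0, 2 * np.pi, n), np.arcsin(rng.uniform(-1, 1, n))

# Stand-ins (random, purely for illustration) for the events and AGN list:
events = random_directions(27)
agns = random_directions(400)
observed = n_matched(events, agns)

# Isotropic expectation: redraw the event directions many times over.
trials = [n_matched(random_directions(27), agns) for _ in range(1000)]
print(observed, np.mean(trials), np.std(trials))
```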
Now, apparently, the question is how the AGNs can generate the extreme energies involved…

Read more at the Auger press release and the Science summary (subscription required for the latter — sorry!). Also, for your musical diversion, Muse.


Peak Finding

September 22, 2007

The CLAS Collaboration is unusual in having observed, and then later seen no evidence for, the same particle: an oddity called the Θ+(1540). If it existed, it would have been the first known “pentaquark” state, composed of four quarks and an antiquark (uudds̄, to be precise), which would have made it the first time quarks had been seen combining in ways other than pairs (mesons) or triplets (baryons, such as the proton and neutron). The initial excitement made it as far as the BBC, but since then, most attempts to find the Θ+ again have come up negative, and it is generally considered to have been a statistical fluctuation combined with some wishful thinking.

To confirm their earlier (5σ!) positive result, CLAS repeated the analysis on six times more data, and drew a blank. In the face of this, they’ve put out a rather interesting statistics paper, in which they try out a general technique to ask the question “is there a statistically significant peak somewhere in this data?” To do this, you have to account for the fact that fluctuations will produce fake peaks, a fact neatly demonstrated in the first figure of the paper, where they split the original sample into five parts, one of which shows a rather convincing “signal” which vanishes in the total dataset.
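That effect is easy to reproduce yourself: draw events from a purely smooth distribution, split them up, and hunt for bumps. A toy version follows (single-bin excesses only, so it is just a crude stand-in for the paper’s figure, not their procedure):

```python
import numpy as np

rng = np.random.default_rng(1)

# A pure smooth background: by construction there is no signal anywhere.
full = rng.exponential(scale=1.0, size=5000)
edges = np.linspace(0.0, 3.0, 31)

def max_bin_excess(sample):
    """Largest single-bin deviation above the known smooth shape, in sigma."""
    counts, _ = np.histogram(sample, bins=edges)
    expected = len(sample) * np.diff(1 - np.exp(-edges))  # exponential CDF
    return float(np.max((counts - expected) / np.sqrt(expected)))

# With five subsamples times thirty bins, some bin usually fluctuates high
# enough to look tantalizing, even though the parent distribution is smooth.
print("full sample:", max_bin_excess(full))
for i, part in enumerate(np.array_split(rng.permutation(full), 5)):
    print("subsample", i, max_bin_excess(part))
```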

The method essentially compares two classes of models for data (e.g. “all smooth background” versus “smooth background plus Gaussian peak”). A Bayesian procedure discounts differences in the number of parameters between the models (the priors chosen here are probably the most iffy part). The procedure results in “evidence ratios” that give the preferred model, and the strength of that preference.
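A minimal sketch of the evidence-ratio machinery on a toy spectrum is below. The uniform grid priors and the hand-fixed peak width are my simplifications, chosen to show the flavor of the method rather than to reproduce the paper’s choices.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)

# Toy binned spectrum: flat background of ~20 counts/bin and no true peak.
centers = np.linspace(0.05, 2.95, 30)
data = rng.poisson(20, size=30)

def log_evidence(log_likes):
    """Marginal likelihood under a uniform prior: average L over the grid."""
    m = np.max(log_likes)
    return m + np.log(np.mean(np.exp(log_likes - m)))

# Model A: smooth (flat) background with unknown level b.
logL_A = np.array([poisson.logpmf(data, b).sum()
                   for b in np.linspace(5, 40, 50)])

# Model B: flat background plus a Gaussian peak (width fixed at 0.1)
# with unknown amplitude s and position mu.
logL_B = np.array([
    poisson.logpmf(data, b + s * np.exp(-0.5 * ((centers - mu) / 0.1) ** 2)).sum()
    for b in np.linspace(5, 40, 20)
    for s in np.linspace(0, 30, 20)
    for mu in np.linspace(0.3, 2.7, 20)
])

# Ratios above 1 favour the peak model; spreading model B's prior over the
# extra parameters is what penalizes it when no peak is actually there.
print(np.exp(log_evidence(logL_B) - log_evidence(logL_A)))
```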

They find, in fact, that all their data (including the first set which they used to claim observation) weakly prefer the no-signal model. If, on the other hand, the signal seen in the first dataset had held up in the second, this analysis would have found “decisive” evidence for it. (They also find absolutely conclusive evidence for the existence of the Λ(1520), which is a Good Thing.)

The method looks quite interesting; the question it attempts to resolve comes up in any low-statistics claim of an unexpected state, and some kind of sensible algorithm for quantifying the evidence would be very useful to have.

That didn’t take long

August 31, 2007

Three theoretical discussions of the Z(4430) state have shown up on hep-ph in the last couple of days:

  • Rosner suggests it’s a rescattering effect in threshold production of D*(2010) D1(2420);
  • Maiani, Polosa, and Riquer suggest it’s a true tetraquark state, the first radial excitation of the X(3872)/X(3876);
  • Meng and Chao suggest it’s an actual resonance in D*(2010) D1(2420).

Belle claims to see a “Z(4430)” as a π±ψ’ structure in B → Kπ±ψ’ decays. This joins the X(3872), Y(3940), Y(4260), Y(4350), and Y(4660) in the zoo of incomprehensible things above DD̄ threshold that decay to charmonia. Unlike the other objects, this one is charged, which means it can’t be a hybrid or conventional charmonium state.

At the start of the week, we had a little conference of our own, which went very well, and hopefully this weekend I’ll put up a post on how CLEO sort-of-but-not-really measured the D0 mixing parameter y. But first this: the Cornell Cinema early fall calendar is up.

It’s the middle of June, and the Cornell Cinema summer season rapidly approaches — a fantastic time when all the cinema staff work twice as often to provide you with filmic entertainment even more obscure than usual. We reopen this Sunday (the 17th) and run through the first week of August. Readers who live in Ithaca should come, and those who don’t, well, sorry.

Highlights:

  • Cinema under the Stars: Two special outdoor screenings on the Willard Straight terrace — The Triplets of Belleville (June 28) and To Have and Have Not (July 12). A lovely way to spend a languid summer evening. I hear there is some form of cash bar, but bring your own snacks, and arrive early for good seats. Video purists note: these will be video projections, not film. Also, if you deeply care about this fact, you suck.
  • Old-ish Things: beyond the aforementioned To Have and Have Not, there will be Psycho, Lattuada’s Mafioso, Godard’s Two or Three Things I Know About Her, and, uh, Thelma and Louise.
  • In Case You Missed It: Pan’s Labyrinth, Letters From Iwo Jima, 300, Hot Fuzz.
  • This Sounds Interesting: Reviewers seem to have gone gaga over Black Snake Moan, and the review has the phrase “cure her hysterical nymphomania.” Red Road is a noir thriller involving the British closed-circuit camera obsession. The Lives of Others makes the point that life wasn’t really all that fantastic in East Germany. Any comedy described as “Jim Jarmusch meets Aki Kaurismaki” (Whiskey) sounds perfect, if you’re into empathetic cringing.