It’s 9:03 am, and the propaganda informational film has just started…

9:05 – I’m here in the back row of the main auditorium, which is absolutely full. There are supposed to be two viewing rooms in building 40, the main ATLAS/CMS building, but unfortunately those apparently don’t open until 9:30.

9:07 – eww, relativistic mass.

9:09 – bizarre low frequency sound effects. Reminds me of those supernovas in planetarium shows which always scared me as a kid.

9:10 – live feed begins.

9:13 – they will try to send beam 1 (clockwise) around, one sector at a time.

9:17 – Lyn Evans doesn’t know how long it will take, but he hopes less than 12 hours.

9:19 – Robert Aymar thanks everyone.

9:22 – It’s like a Mars landing! But we get to try again and again.

9:29 – in building 40, where the chairs are comfy.

9:34 – some beam went in! Everyone laughs when a commentator asks “did you see it???”

9:38 – beam went to point 3.

9:40 – instant replay!!!

9:42 – they’ve actually managed to get the beam to point 5 before, so nothing completely new so far.

9:45 – “It’s not just like switching on your mobile phone.” Beam at point 5.

9:51 – “They’re very young! 30s and 40s!”

9:53 – CMS seems to be showing a trigger rate. They are now going to test the beam dump at point 6.

9:59 – LHC status page here.

10:01 – Lyn Evans tempts fate by saying everything might be done in an hour.

10:07 – Point 7. The orbit has lots of excursions between points 6 and 7, so they need to fix that.

10:12 – Point 8. One more before ATLAS!

10:18 – Point 1!!!! Loud clapping in ATLAS viewing room.

10:23 – next step is full circle.

10:25 – full circle.

The two red dots in the leftmost image show successive passages of a proton pulse through a monitor at point 2.

10:34 – They’re talking with ALICE. The ATLAS logbook shows that we did indeed see signals in the liquid argon calorimeters as the beam went through.

10:42 – we have ATLAS event displays! And very ugly they are, too.

10:43 – current plan seems to be to switch to beam 2 in an hour and a quarter.

10:49 – Everyone’s gone to do other things; room is much emptier now.

10:52 – Maiani dreams of a linear collider at CERN.

10:57 – Commenter points out that the cost of the LHC is roughly the cost of the Beijing Olympics. But only one will tell you about dark matter.

11:25 – Will Young-Kee Kim wear pajamas??? Only videoconferencing will tell!

11:29 – so enthusiastic, the commentators are. And so soothing.

11:30 – Fermilab promo movie.

11:33 – Umm… it’s unclear if the FNAL directorate are wearing pajamas, although they’re certainly color-coordinated. They also seem to have large stickers of some kind.

11:35 – Pier Oddone brings insane US liquor licensing laws to the attention of the world.

11:37 – The DOE is proud of us! Yay!

11:41 – as is the NSF.

11:43 – in the great American tradition, many people talk.

13:05 – after an unannounced lunch break for me, time to get back to work.  Everything on the accelerator side was very impressive this morning, and now the detectors have to follow that act.  I leave you with an ATLAS event display from when the beam was stopped on the collimators just upstream of point 1:

LHC “First Beam”

August 8, 2008

On Friday the LHC accelerator folk succeeded in injecting a proton beam into the LHC and taking it from Point 2 to Point 3 (one eighth of the way around).  I believe this is the first beam in the LHC proper – they’ve put beam into the injection lines before, but not into the ring.  From what I can find they got it to work on the first try.  A promising sign of things to come?  (We are promised a beam all the way around on September 10.)

An actual Higgs limit

August 3, 2008

News from ICHEP: Matt Herndon presented a new Tevatron combined limit for Higgs production.  Thanks to a lot of hard work, good luminosity, and a bit of luck on D0’s part, CDF+D0 now exclude a Standard Model Higgs at 170 GeV at exactly 95% confidence level.  (Note: not at 165 or 175.  It’s a very small exclusion window.)  The talk will be available later in the evening.

UPDATE: as promised, available here (Sun 15:30).

[Event display image: lastdt-edit.png]

(* last = last confidently-identified fully-reconstructed Ds Ds* event)

CLEO-c stopped taking data earlier this week.  We looked through the last data run (7:38 am to 8 am), just over 61 thousand events, for collisions that produced Ds mesons, and we actually found one.  Even better, we found an event where you could see both the Ds+ and Ds-, and where the photon from the Ds*+ → γ Ds+ transition was visible.  Above, you can see the event display, with all the tracks labeled; the event is consistent with the following decay chain:

  • e+ e- → γ → Ds*+ Ds-
    • Ds*+ → γ Ds+
      • Ds+ → K- K+ π+ π+ π-
    • Ds- → KS K-
      • KS → π+ π-

It was actually unlikely that we’d find such a nice event.  For the amount of data in the last run, we would expect roughly 75 Ds* Ds events.  Our full reconstruction efficiency (getting both Ds candidates) is somewhat less than 1%, so we had a good chance of winding up with zero events like this.  It’s nice to be lucky though.
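
(A rough way to quantify “good chance”: taking the numbers above at face value – about 75 produced Ds* Ds pairs and a full-reconstruction efficiency of order 1% – the expected number of fully reconstructed events is \mu \lesssim 75 \times 0.01 = 0.75, and the Poisson probability of finding none at all is e^{-\mu} \gtrsim e^{-0.75} \approx 0.47, essentially a coin flip.)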

Want to read about several years of progress in open charm physics – lattice QCD tests, standard candle measurements, D0 mixing – in a compact 30-page article? Try “Charm Meson Decays” (Artuso, Meadows, Petrov).

The latest Heavy Flavor Averaging Group results on D0 mixing are to be found in Alan Schwartz’s writeup for the recent BES-Belle-CLEO workshop.  The continual drip of new results has pushed the mixing significance to 6.7σ.

There’s a new paper out (arXiv:0712.3572) which aims to provide a “figure of merit” for proposed experimental programs. It revolves around information entropy – an old concept from communication/information theory developed by Claude Shannon.

The basics of entropy: a communicated “symbol” – a letter or a word of a text, for example – carries information content that increases as it becomes less likely (more surprising). Intuitively this makes sense: if you know exactly what you’re going to hear (say, an airline safety announcement), you tune out because there’s no information transfer, while you pay the most attention when you can’t anticipate what’s next. Mathematically, the information content of a received symbol x with a probability p(x) of occurring is -\log p(x). Note that this sweeps the meaning of “information” into p(x); a string of digits may seem completely random (and thus each one has information -\log 0.1 = 1), but if you know it happens to be \pi starting from the 170th decimal place, suddenly you can predict all the digits and the information content is essentially zero.

What we would like is an expectation value (average) of the transmitted information: ideally, you’d like to transmit the maximum content per symbol. The expectation value – the entropy – is

H = \sum_x -p(x) \log p(x)

The logarithm factor means that transmitting an occasional highly unlikely symbol is less useful than symbols which appear at roughly equal rates – for two symbols, you get more entropy out of both appearing with a 50% probability than one at 99% and the other at 1%.
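
To put numbers on that, here is a minimal Python sketch (my own illustration, not from the paper; the example distributions are invented). The base of the logarithm only sets the units – base 2 gives bits:

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum_x p(x) log2 p(x), in bits.
    Zero-probability symbols contribute nothing to the sum."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))    # 1.0 bit: two equally likely symbols
print(entropy([0.99, 0.01]))  # ~0.08 bits: a near-certain outcome carries little expected information
print(entropy([0.25] * 4))    # 2.0 bits: N equally likely outcomes give log2(N)
```

(The last case also shows that a uniform distribution over N outcomes has entropy \log N, which is the effect behind the multi-channel point further down.)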

How does this relate to physics experiments? The author suggests that the proper figure of merit for an experiment (or analysis) is the expected information gain from it – or, perhaps, the information per dollar. The symbols are replaced by outcomes, like “observation/nonobservation of the Standard Model Higgs boson.” The p(x) function is obtained from our a priori theoretical biases, so for example “confirmation of Standard Model” or “discovery of low-scale supersymmetry” carry relatively high probabilities.

This leads to results he considers at odds with conventional wisdom – for example, the search for single top production, a well-predicted Standard Model process that everyone expects to be there, has low entropy (since there’s one large and one small probability), while a low-energy muon decay experiment which has good sensitivity to supersymmetry has high entropy (people think SUSY has a reasonable chance of being realized).

There’s an additional wrinkle that in general you get more entropy by having more symbols/results (in this case the log factor helps you); so the more possible outcomes an experiment has, the more information content you expect. In particular this means that global analyses of the author’s VISTA/SLEUTH type, where you try to test as many channels as possible for departures from the Standard Model, get a boost over dedicated searches for one particular channel.

It’s an interesting and thought-provoking paper, although I have a few concerns. The main one is that the probabilities p(x) are shockingly Bayesian: they are entirely driven by current prejudice (unlike the usual case in communication theory, where things are frequentist).

Recall that there’s not much entropy in experiments which have one dominantly probable outcome. On the other hand, should an extremely unlikely outcome be found, the information content of that result is large. (The author determines the most significant experimental discoveries in particle physics since the start of the 70s to be those of the τ and J/ψ. I think this implies that Mark I was the most important experiment of the last four decades.) We are thus in the paradoxical situation that the experiments that produced the most scientific content, by this criterion, are also the ones with the least a priori entropy. The J/ψ was discovered at experiments that weren’t designed specifically to search for it!

How does one compare merit between experiments? We hope the LHC can provide more than a binary yes/no on supersymmetry, for example; if it exists, we would try to measure various parameters, and this would be much more powerful than rare decay experiments that would essentially have access to one or two branching fractions. The partitioning of the space of experimental outcomes has to be correctly chosen for the entropy to be computed, and the spaces for two different experiments may be totally incommensurable. (It’s a bit simpler if you look at everything through “beyond the Standard Model” goggles; with those on, your experiment either finds new physics, or it doesn’t.)

My last major complaint is that the (practical) scientific merit of certain results may be misstated by this procedure (though this is a gut feeling). The proposed metric may not really account for how an experiment’s results fit into the larger picture. Certain unlikely results – the discovery of light Higgs bosons in Υ decays, electroweak-scale quantum gravity, or something similar – would radically change our theoretical biases, and hence our expectations for other experiments. This is a version of the \pi digit problem above; external information can alter your p function in unanticipated ways. It’s unclear to me whether this can be handled in a practical manner, though I can’t claim to be an expert in this statistical realm.

In short: interesting idea, but I would be wary of suggesting that funding agencies use it quite yet.

The Charmed Future

November 13, 2007

Bruce Yabsley has posted a summary/transcript of the question-and-answer panel session he chaired which closed the Charm 07 workshop. It’s a little technical, but gives a nice idea of where we’re going, in both physics results and experimental facilities.

Excitement from Auger

November 8, 2007

Cosmic rays, enigmatic particles that flit through the universe, are intriguing for many reasons. The fastest ones are much more energetic than anything that an earthbound accelerator, or indeed any astronomical process that we can convincingly model, can produce. They also have to come from nearby, in intergalactic terms, because space becomes opaque to them above a certain energy (the GZK cutoff). So what’s making them?

One way to try to find out is to observe a number of them and point them back to their origins, checking to see if they match up with any conceivable source objects. The Pierre Auger Cosmic Ray Observatory has done just that, and has a new result out on the distribution of the highest-energy cosmic ray events it has detected (above 5.7 × 10^19 eV), showing an anisotropic distribution of the directions of the particles:

(In the plot, the blue is the part of the sky that Auger can see, the black ovals are individual cosmic ray events, and the red stars are the locations of active galactic nuclei (AGN) — galaxies with active supermassive black holes in their centers — within 75 megaparsecs.) In particular, when cross-correlated with a database of AGN, they see many more events within three degrees of known objects than they would expect from a flat distribution. The choice of object to correlate with was made on a pilot sample and their final statistical significance comes from a second, independent dataset, making this sort of a grey box analysis.
Now, apparently, the question is how the AGNs can generate the extreme energies involved…
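
To make the logic of the correlation test concrete, here is a schematic Python sketch. It is entirely my own toy: the event directions and AGN catalogue below are randomly generated placeholders, and the real analysis also has to fold in Auger’s non-uniform sky exposure (ignored here) and fix its cuts on the pilot sample. The idea is simply to count events landing within 3° of any catalogued AGN and compare with the count expected for isotropic arrival directions:

```python
import numpy as np

rng = np.random.default_rng(0)

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees between sky positions given in degrees."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (np.sin(dec1) * np.sin(dec2)
               + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))

def n_correlated(evt_ra, evt_dec, agn_ra, agn_dec, max_sep=3.0):
    """Count events whose nearest catalogued AGN lies within max_sep degrees."""
    return sum(ang_sep_deg(ra, dec, agn_ra, agn_dec).min() < max_sep
               for ra, dec in zip(evt_ra, evt_dec))

def isotropic_directions(n):
    """Random directions uniform over the sphere (no exposure weighting)."""
    ra = rng.uniform(0.0, 360.0, n)
    dec = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, n)))
    return ra, dec

# Placeholder inputs: a fake AGN catalogue and a fake set of high-energy events.
agn_ra, agn_dec = isotropic_directions(400)
evt_ra, evt_dec = isotropic_directions(27)
observed = n_correlated(evt_ra, evt_dec, agn_ra, agn_dec)

# Isotropic expectation from a toy Monte Carlo.
null_counts = []
for _ in range(500):
    ra, dec = isotropic_directions(27)
    null_counts.append(n_correlated(ra, dec, agn_ra, agn_dec))

print(observed, np.mean(null_counts))  # a large excess over the isotropic mean signals correlation
```

With real data one would quote how often the isotropic toys match or exceed the observed count, which is (roughly) how the significance is assessed on the independent confirmation dataset.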

Read more at the Auger press release and the Science summary (subscription required for the latter — sorry!). Also, for your musical diversion, Muse.

Peak Finding

September 22, 2007

The CLAS Collaboration is unusual in having observed, and then later seen no evidence for, the same particle, an oddity called the Θ+(1540). If it existed, it would have been the first known “pentaquark” state, composed of four quarks and an antiquark (uudds̄, to be precise), which would have made it the first time quarks had been seen combining in ways other than pairs (mesons) or triplets (baryons, such as the proton and neutron). The initial excitement made it as far as the BBC, but since then, most attempts to find the Θ+ again have come up negative, and it is generally considered to have been a statistical fluctuation combined with some wishful thinking.

To confirm their earlier (5σ!) positive result, CLAS repeated the analysis on six times more data, and drew a blank. In the face of this, they’ve put out a rather interesting statistics paper, in which they try out a general technique for asking the question “is there a statistically significant peak somewhere in this data?” To do this, you have to account for the fact that fluctuations will produce fake peaks, a fact neatly demonstrated in the first figure of the paper, where they split the original sample into five subsets, one of which shows a rather convincing “signal” that vanishes in the total dataset.

The method essentially compares two classes of models for data (e.g. “all smooth background” versus “smooth background plus Gaussian peak”). A Bayesian procedure discounts differences in the number of parameters between the models (the priors chosen here are probably the most iffy part). The procedure results in “evidence ratios” that give the preferred model, and the strength of that preference.
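
For a feel for what an “evidence ratio” is, here is a toy Python sketch. It is my own illustration, not the CLAS procedure: the data, the two models (flat background versus flat background plus a fixed-width Gaussian peak), the priors, and the grids are all invented. Each model’s parameters are marginalized over a uniform grid with a Poisson likelihood, and the two evidences are then compared:

```python
import numpy as np
from scipy.special import gammaln

def log_poisson(counts, mu):
    """Summed Poisson log-likelihood of binned counts given expected bin contents mu."""
    return np.sum(counts * np.log(mu) - mu - gammaln(counts + 1.0))

def evidence_flat(counts, b_grid):
    """log-evidence for a flat background, with a uniform prior over the b_grid values."""
    logls = np.array([log_poisson(counts, np.full(counts.shape, b)) for b in b_grid])
    return np.logaddexp.reduce(logls) - np.log(len(logls))

def evidence_peak(counts, centers, b_grid, s_grid, m_grid, width=0.01):
    """log-evidence for flat background plus a Gaussian peak of fixed width,
    with uniform priors on background level b, peak yield s, and peak position m."""
    logls = []
    for b in b_grid:
        for s in s_grid:
            for m in m_grid:
                mu = b + s * np.exp(-0.5 * ((centers - m) / width) ** 2)
                logls.append(log_poisson(counts, mu))
    return np.logaddexp.reduce(np.array(logls)) - np.log(len(logls))

# Invented spectrum: 40 bins of background-only fluctuations around 20 counts per bin.
rng = np.random.default_rng(1)
centers = np.linspace(1.45, 1.65, 40)
counts = rng.poisson(20.0, size=centers.size).astype(float)

b_grid = np.linspace(10.0, 30.0, 41)   # background level prior range
s_grid = np.linspace(0.0, 30.0, 31)    # peak yield prior range (includes zero)
m_grid = np.linspace(1.50, 1.60, 21)   # peak position prior range

log_evidence_ratio = (evidence_peak(counts, centers, b_grid, s_grid, m_grid)
                      - evidence_flat(counts, b_grid))
print(log_evidence_ratio)  # > 0 favors the peak model; background-only data should land near or below 0
```

Because the peak model spreads its prior over many signals that aren’t in the data, background-only spectra typically give a ratio at or below zero – the built-in Occam penalty that produces the kind of “weak preference for no signal” described below.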

They find, in fact, that all their data (including the first set which they used to claim observation) weakly prefer the no-signal model. If, on the other hand, the signal seen in the first dataset had held up in the second, this analysis would have found “decisive” evidence for it. (They also find absolutely conclusive evidence for the existence of the Λ(1520), which is a Good Thing.)

The method looks quite interesting; the question it attempts to resolve comes up in any low-statistics claim of an unexpected state, and some kind of sensible algorithm for quantifying the evidence would be very useful to have.

That didn’t take long

August 31, 2007

Three theoretical discussions of the Z(4430) state have shown up on hep-ph in the last couple of days:

  • Rosner suggests it’s a rescattering effect in threshold production of D*(2010) D1(2420);
  • Maiani, Polosa, and Riquer suggest it’s a true tetraquark state, the first radial excitation of the X(3872)/X(3876);
  • Meng and Chao suggest it’s an actual resonance in D*(2010) D1(2420).