December 2008


Since we appear to be in the business of summer school announcements, it is perhaps worth passing along the date and title for next year’s SLAC Summer Institute:

Revolutions on the Horizon
3 – 14 August 2009

The SLAC Summer Institutes are only half the length of TASI and tend to be at a more introductory level, but they are great ways to dive into a research area. One can get a sample by viewing some of their recorded lectures. (To the best of my knowledge they were the first summer school to regularly do this.)

The school website does not yet exist, but the following blurb is posted:

The topic is a study of all the upcoming experiments turning on soon and the big discoveries they will make. Website coming soon.

`Upcoming experiments’ certainly includes the LHC. But what else could be included to differentiate the school from their 2006 LHC program? Given SLAC’s shift towards cosmology, GLAST is also a likely `upcoming experiment.’ And then perhaps some flavour physics to round out the discussion?

The TASI 2009 website is up and the program looks very good. Sorry string theorists, this year will be phenomenology (for the second consecutive year), with lots of focus on the LHC and on cosmology. [Perhaps reflecting the apparent hiring shift toward astro-particle physics?]

Particle Physics:

  • Hsin-Chia Cheng  (Davis) – Introduction to extra dimensions
  • Roberto Contino (CERN) – The Higgs as a Pseudo-Goldstone boson
  • Patrick Fox (Fermilab) – Supersymmetry and the MSSM
  • Tony Gherghetta (Melbourne) – Warped extra dimensions and AdS/CFT
  • Eva Halkiadakis (Rutgers) – Introduction to the LHC experiments
  • Patrick Meade (IAS) – Gauge mediation of supersymmetry breaking
  • Maxim Perelstein (Cornell) – Introduction to collider physics
  • Gilad Perez (Weizmann Inst.) – Flavor physics
  • David Shih (IAS) – Dynamical supersymmetry breaking
  • Witold Skiba (Yale) – Effective theories and electroweak precision constraints
  • Kathryn Zurek (Fermilab) – Unexpected signals at the LHC

Cosmology:

  • Rachel Bean (Cornell) – Dark Energy
  • Daniel Baumann (Harvard) – Inflation
  • Manoj Kaplinghat (Irvine) – Large Scale Structure
  • Elena Pierpaoli (USC) – Cosmic Microwave Background
  • Richard Schnee (Syracuse) – Dark Matter Experiment
  • Michael Turner (Chicago) – Introduction to Cosmology
  • Neal Weiner (NYU) – Dark Matter Theory

The speakers appear to have been chosen to represent the `next generation’ of young faculty who have already started to shape physics in the ever-extended pre-LHC era. A few especially hot topics include Neal Weiner speaking on Dark Matter theory, Patrick Meade on [general] gauge mediation, and Tony Gherghetta on AdS/CFT “for phenomenologists.”

TASI is one of the ‘big’ summer schools in particle physics. Its primary clientele are later-stage PhD students who can take advantage of relatively broad programs to improve their breadth in physics. It is a fantastic way to get to know many of the up-and-coming people in one’s field.

With a little luck TASI will continue their recent trend of providing video lectures for those who are unable to attend.

A couple of days ago I found that my arXiv RSS feeds were a bit wonky — the author list disappeared! The arXiv feeds had been having trouble with properly displaying the author list for some time, but having it removed annoyed me so much that I e-mailed the good folks at the arXiv.

They responded and told me that the RSS 2.0 arXiv feed has everything fixed. Indeed, I’d been using an older version of the feed. The new version, I’m happy to announce, works beautifully; the RSS 2.0 feeds for hep-ph, hep-th, and hep-ex are linked from the arXiv help page below.

I personally use Google Reader. For more information about the arXiv feeds, see http://arxiv.org/help/rss.

For various reasons I’ve been having fun thinking about renormalization, so I thought I’d try to put together a post about renormalization in words (rather than equations).

The standard canon that field theory students learn is that when a calculation diverges, one has to (1) regularize the divergence and then (2) renormalize the theory. This means that one has to first parameterize the divergence so that one can work with explicitly finite quantities that only diverge in some limit of the parameter. Next one has to recast the entire theory for self-consistency with respect to this regularization.
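
To make step (1) concrete with a standard example (not tied to any particular theory discussed here): a prototypical logarithmically divergent loop integral, written in Euclidean space with \Delta standing for some combination of masses and external momenta, becomes finite once we impose a hard momentum cutoff \Lambda,

\int^\Lambda \frac{d^4k_E}{(2\pi)^4}\,\frac{1}{(k_E^2+\Delta)^2} = \frac{1}{16\pi^2}\left[\log\frac{\Lambda^2}{\Delta} + \text{finite}\right].

The divergence is now just the statement that the answer grows without bound as \Lambda\rightarrow\infty; at finite \Lambda everything is a well-defined quantity that we can carry through the rest of the calculation.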

While there are some subtleties involved in picking a regularization scheme, we shall not worry about this much and will instead focus on what I think is really interesting: renormalization, i.e. the [surprising] behavior of theories as one changes scale.

The details of the general regularization-renormalization procedure can be found in every self-respecting quantum field theory textbook, but it can often be daunting to understand what’s going on physically rather than just technically. This is what I’d like to try to explore a bit.

First of all, renormalization is something that is intrinsically woven into quantum field theory rather than a `trick’ to get sensible results (as was often said in the 1960s). One way of looking at this is to say that we do not renormalize because our calculations find infinities, but rather because of the nature of quantum corrections in an interacting theory.

Recall the Lehmann-Symanzik-Zimmerman (LSZ) reduction procedure. Ha ha! Just kidding, nobody remembers the LSZ reduction formalism unless they find themselves in the unenviable position of teaching it.

Here’s what’s important: we understand the properties of free fields because their Lagrangian is quadratic and the path integral can be solved explicitly. But non-interacting theories are boring, so we usually play with interacting theories as perturbations on free theories. When we do this, however, things get a little out-of-whack.

Statements about free field propagators, for example, are no longer strictly true because of the new field interactions. The two-point Green's function is no longer the simple propagator of a field from one point to another, but now takes into account self-interactions of the field along the way. This leads one to the Lehmann-Kallen form of the propagator and the spectral density function, which encodes intermediate bound states.

You can go back and read about those things in your favorite QFT text, but the point is this: we like to use “nice” properties of the free theory to work with our interacting theory. In order to maintain these “nice” properties we are required to rescale (renormalize) our fields and couplings. For example, we would like to maintain that a field’s propagator has a pole of unit residue at its physical mass, that the field operator annihilates the vacuum, and that the field is properly normalized. Assuming these properties, the LSZ reduction procedure tells us that we can calculate S-matrix elements in the usual way.

Suppose we start with a model, represented by some Lagrangian. We call this the bare Lagrangian. This is just something some theorist wrote down. The bare Lagrangian has parameters (masses, couplings), but they’re “just variables” — i.e. they needn’t be `directly’ related to measurable quantities. We rescale fields and shift couplings to fit the criteria of LSZ,

\phi = Z^{-1/2}\phi_{bare}
g = g_{bare} + \delta g.

We refer to these as the renormalized field and renormalized couplings. These quantities are finite and can be connected to experiments.

When we do calculations and find divergences, we can (usually) absorb these divergences into the bare field and couplings. Thus the counterterms \delta g and the field strength renormalization Z are also formally divergent, but in a way that cancels the divergence of the bare field and couplings.
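
To put a schematic, textbook-standard face on this (the \phi^4 example is mine, not part of the original discussion): in \phi^4 theory with renormalized coupling \lambda, the one-loop four-point function grows like \lambda^2\log\Lambda, so the counterterm must contain a matching piece,

\delta\lambda \sim \frac{3\lambda^2}{16\pi^2}\log\frac{\Lambda}{\mu} + \text{finite},

with the sign and the finite part fixed by one’s renormalization scheme. The dependence on the arbitrary reference scale \mu is the seed of the renormalization group mentioned below, and the coefficient 3\lambda^2/16\pi^2 is precisely the one-loop beta function of \phi^4 theory.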

That sets everything up for us. We haven’t really done anything, mind you, just set up all of the clockwork. In fact, the real beauty is seeing what happens when we let go and see what the machine does (the renormalization group). I’ll get to this in a future post.

Further reading: For beginning field theorists, I strongly recommend the heuristic description of renormalization in Zee’s QFT text. A good discussion of LSZ and the Lehmann-Kallen form is found in the textbooks by Srednicki and Ticciati. Finally, for one of the best discussions of renormalization, the Les Houches lectures “Methods in Field Theory” (a paperback version is available for a reasonable price) are fantastic.

For those that might be interested, here is the e-mail announcement:

Announcing the 4th CERN-Fermilab Hadron Collider Physics Summer School

Dear Colleague

The 4th CERN-Fermilab Hadron Collider Physics Summer School will be held at CERN from June 8-17 2009. The CERN-Fermilab Hadron Collider Physics Summer School is targeted particularly at young postdocs in experimental High Energy Physics (HEP), as well as senior PhD students in HEP phenomenology, working towards the completion of their thesis project.

The School will include ten days of lectures and discussions, with one free day in the middle of the period. Scholarship funds will be available to support some participants. Updated information and online applications are available at the school web site:   http://cern.ch/hcpss

The deadline for applications and reference letters is February 21st, 2009.

Please circulate this announcement to whomever could be interested to participate in this school.

Best Regards,

[Local organizing committee]

One can look at previous schools to get a feel for the content of the lectures. Note that this does appear to conflict with TASI09 and part of the SUSY09 conference.

By the way, the deadline for the Spring School on Superstring Theory is in a month. Those of you of the stringy persuasion might want to consider applying since there appear to be no other major string schools in 2009. (We’re still waiting to hear whether Perimeter will be hosting a summer school this year.)

This morning brought about another suggestive (if I may be so bold as to say so) experimental hint of new physics in the leptonic sector, in the form of a paper from the MiniBooNE collaboration: “Unexplained Excess of Electron-Like Events From a 1-GeV Neutrino Beam” (arXiv:0812.2243).

Recall that the past two months have also brought us a speculative “multi-muon anomaly” at CDF (arXiv:0810.5357, see also Tommaso’s summary), the publication of the PAMELA cosmic-ray positron excess (arXiv:0810.4995), and related publications by ATIC (Nature) and HESS (arXiv:0811.3894) on the electron/positron spectrum. Apparently the leptonic sector has decided to be kind (if coy) to model-builders in light of LHC delays. Now, MiniBooNE joins in on the fun.


MiniBooNE neutrino low-energy excess. Image from arXiv:0812.2243.

For an excellent summary of the MiniBooNE experiment, see Heather Ray’s post on Cosmic Variance. (Unfortunately their TeX didn’t transfer over well since they moved to Discover… hopefully someone over there will fix up all the LaTeX tags that are now garbled?)

As I’m writing this Symmetry Breaking has published a post on the result that summarizes the recent news. Here’s my own quick-and-dirty summary as I understand it:

In April 2007, MiniBooNE published results that showed no signs of the LSND anomaly (hep-ex/0104049), leading many model-builders to immediately jump off the neutrino band-wagon (see Jester’s theory report). They noted, however, a curious excess in their data at lower energies, in an energy region that was not (at least on face value) related to the unrequited LSND hint for new physics. This was left for further investigation and data analysis.

Now, after more than a year of said investigation and analysis, the excess is still there. (See image above.) What’s even more interesting is that the bump does not appear as pronounced in the antineutrino sector, according to a recent report (see image below). LSND and the fresh-on-the-arXiv MiniBooNE paper were analyses based on neutrinos. It’s a bit surprising that the MiniBooNE antineutrino analysis doesn’t show a similar feature.

MiniBooNE antineutrino data showing a much weaker signal at low energies compared to the neutrino data. Image from Fermilab.

I hope to spend some time reading up on this over the holidays; I should then be able to give a more coherent summary.

These are some notes on arXiv:hep-th/0701050 by Denef, Douglas and Kachru.

Flux compactification is an ominous term that often scares people away. Here are some notes I came across that give a simple idea of how to do moduli stabilization using flux compactification in 6 dimensions. We choose 6 dimensions because it equals 4 (the Minkowski spacetime we live in) plus 2 (e.g. a torus, the simplest low-dimensional Calabi-Yau we can find).

The idea is that we start with the Einstein-Hilbert action L = \int d^6x \sqrt{-g}\, M_6^4 \mathcal{R} and dimensionally reduce it to 4d spacetime.

A compact 2d internal space can be characterized by its genus g (the number of holes in the “donut”: g=0 is a sphere, g=1 is a torus, …). An ansatz for the 6d metric can be taken to be ds^2 = g_{\mu\nu}dx^{\mu}dx^{\nu} + R^2\tilde{g}_{mn}dy^m dy^n, where R^2 sets the volume of the 2d manifold M_g. Then the action can be written as M_6^4\int d^4x \sqrt{-g}\left(\int d^2y\,\sqrt{\tilde{g}}\,\mathcal{R}_2 + R^2\mathcal{R}_4\right) + \ldots

We realize that \int d^2y\,\sqrt{\tilde{g}}\,\mathcal{R}_2 is a topological constant, equal to \chi(M_g) = 2-2g up to a numerical factor, and rescale to the Einstein frame (where we have the canonical Einstein-Hilbert action \int \sqrt{-g}\,\mathcal{R}) via g \rightarrow h = R^2 g. We find the 4d Lagrangian to be M_4^2\int d^4x\,\sqrt{-h}\,(\mathcal{R}_h - V(R)), where M_4^2 = M_6^4 R^2 and V(R) \sim (2g-2)\frac{1}{R^4}. Clearly this one term alone is not enough to stabilize the volume modulus R(x).

Let’s add a new ingredient: suppose there are n units of magnetic flux on the 2d internal space M_g, i.e. \int_{M_g} F = n. Then the term \int d^6x \sqrt{-g}\,|F|^2 in the 6d action gives a contribution proportional to \frac{1}{R^2}, via \int_{M_g}|F|^2 = R^2\times(\frac{n}{R^2})^2; after rescaling to the Einstein frame, we obtain V(R) \sim (2g-2)\frac{1}{R^4} + \frac{n^2}{R^6}. Now if g=0, it is easy to see that the two terms can compete with each other and stabilize R.
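
As a quick sanity check of that claim, keeping only the parametric dependence written above and dropping all order-one factors: for g=0,

V(R) \sim -\frac{2}{R^4} + \frac{n^2}{R^6}, \qquad V'(R_*) = 0 \;\Rightarrow\; R_*^2 \sim n^2,

so more units of flux push the minimum to larger internal volume. (In this crude parametrization the minimum sits at V(R_*) < 0, i.e. an AdS-like vacuum.)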

If, furthermore, we add m O-planes (an ingredient with negative tension), we get one more term in the potential, -m\frac{1}{R^4}, which can stabilize the modulus even when g=1, i.e. for a torus.

In 10 dimensional string theory, we adopt the same idea: inclusion of fluxes and branes and planes will give us a potential that eventually can stabilize all the moduli.

Today a new paper by Dreiner, Haber, and Martin has me really excited: “Two-component spinor techniques and Feynman rules for quantum field theory and supersymmetry,” arXiv:0812.1594. It appears to be a comprehensive (246 page) guide to working with two-component Weyl fermions.

Most quantum field theory texts of the 90s (e.g. Peskin and Schroeder) used four-component Dirac spinors. The primary motivation for this is that massive fermions naturally assemble into Dirac spinors. Weyl spinors, however, are mathematically the fundamental representations of the universal cover of the Lorentz group. (See our previous post on spinors.) The fermionic generators of supersymmetry, for example, are Weyl spinors.

More practically, two-component Weyl spinors are easier to work with than four-component Dirac spinors… by a factor of two—see how I did some tricky math, there? Instead of working with \gamma-matrices in some representation, one can work with \sigma-matrices (Pauli matrices) which have a single standard representation. The “gamma gymnastics” of calculations thus becomes simpler.
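
For instance, in the chiral (Weyl) basis, and with one common choice of conventions, the four-component machinery is just the two-component machinery stacked into blocks,

\gamma^\mu = \begin{pmatrix} 0 & \sigma^\mu \\ \bar{\sigma}^\mu & 0 \end{pmatrix}, \qquad \sigma^\mu = (1, \vec{\sigma}), \quad \bar{\sigma}^\mu = (1, -\vec{\sigma}),

so every trace over \gamma-matrices reduces to traces over 2\times 2 Pauli matrices.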

Most importantly, each Weyl fermion corresponds to a chiral state. This is arguably (i.e. for everyone but experimentalists) the most sensible representation because the Standard Model is a chiral theory. (We’ve got a previous post on this point, too.) Thus working with two-component fermions gives a better handle on the actual physics at hand; i.e. less confusion about chirality versus helicity, no more need for chiral projectors in Feynman rules, etc.

Of course, this all comes at a cost. One has to relearn the spinor formalism. Feynman rules have to be rewritten with different conventions for arrows and mass terms. (Recall that the mass terms are what mix the left- and right-chiral Weyl spinors into a propagating Dirac spinor.) Fortunately, the guide has ample worked examples to help get students up-and-running.
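
Schematically (conventions for signs and index placement vary from text to text), a Dirac fermion of mass m is a pair of left-handed Weyl spinors \chi and \xi with a mass term that marries them,

\mathcal{L} \supset i\chi^\dagger\bar{\sigma}^\mu\partial_\mu\chi + i\xi^\dagger\bar{\sigma}^\mu\partial_\mu\xi - m\left(\chi\xi + \chi^\dagger\xi^\dagger\right),

and the new arrow-and-mass-insertion bookkeeping in the Feynman rules is essentially the diagrammatic version of this statement.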

I’ve often heard more senior physicists bemoan the fact that we teach QFT using four-component spinors just because that’s what everyone else uses. They quote the absence of a thorough and unambiguous treatment of QFT using two-component spinors to once-and-for-all encourage the community to make a phase transition to a new vacuum. Well, I’m not sure if this will be the treatment that leads the revolution, but I know that I’ll certainly keep it handy.

The guide comes with a webpage that contains an alternate version using the GR/East-coast/mostly-plus metric. (Because if particle physicists are finally breaking free of Dirac spinors, we might as well get rid of our metric as well…?) Stephen Martin has also updated his famous SUSY Primer to utilize Weyl spinors and the ‘subversive’ metric, along with a few content updates.

Boy, the department printer is sure going to get a workout this afternoon.

“Imagine the cow is a sphere…” is the punchline for many versions of a popular allometric physics joke. Allometry, by the way, is the study of how organisms scale. The canonical example is the 1950s horror film Them!, where giant mutant ants threaten a New Mexico town. The real horror is that the film writers didn’t understand that ants could not possibly grow to the size of Shaquille O’Neal in the off-season.

Imagine an ant as a set of spheres...

Why? Consider the heuristic ant above, which we imagine is composed of three roughly spherical sections with rod-like connections. Now what happens when we double the scale \ell of the ant? The mass of the ant goes as its volume, i.e. m \propto \ell^3. Most of this mass is concentrated in the head, thorax, and gaster (the three round sections), which are held together by rod-like connections (neck and petiole). The shear strength of these rod-like bits, which hold up the massive parts, goes as their cross-sectional area, s \propto \ell^2. Or, said another way, the ant’s exoskeleton scales roughly as the area.
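
Comparing the two powers makes the problem clear: the stress on the supporting bits goes as

\sigma \sim \frac{mg}{s} \propto \frac{\ell^3}{\ell^2} = \ell,

growing linearly with size, while the maximum stress the exoskeleton material can sustain is fixed.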

Thus at some scale \ell_0 the ant becomes too massive for its support structure to keep it together. Lawrence Krauss opens his book Fear of Physics with this parable, explaining that (1) one cannot expect to grow arbitrarily large cows for uber-milk efficiency and that (2) this is why brontosauruses had such small heads relative to their bodies.

This kind of analysis is known to physicists as dimensional analysis. While one might think that dimensional analysis is only useful for making back-of-the-envelope estimates, we will see in a subsequent post how it can be used rigorously to understand the renormalization group. Undergrads will already be familiar with `rigorous’ dimensional analysis, however, in the context of mechanics, through the use of similarity transformations. As a quick reminder, we can take the force law:

m \frac{d^2\mathbf{r}}{dt^2} = - \frac {\partial U}{\partial \mathbf{r}},

and note that if we scale t \rightarrow t'=\alpha t and m \rightarrow m' = \alpha^2 m then the above equation is still true. Thus we can conclude that the velocity of a particle’s motion in a central force field is halved when its mass is quadrupled. (Update, 22 Dec: this, of course, only holds when U is independent of m!)
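
Spelling the check out: since U is independent of m (and of t), under t \rightarrow t' = \alpha t and m \rightarrow m' = \alpha^2 m we have

m'\frac{d^2\mathbf{r}}{dt'^2} = \alpha^2 m \cdot \frac{1}{\alpha^2}\frac{d^2\mathbf{r}}{dt^2} = -\frac{\partial U}{\partial \mathbf{r}},

so the same path \mathbf{r} solves the scaled equation, only traversed with velocity \mathbf{v}' = d\mathbf{r}/dt' = \mathbf{v}/\alpha. Taking \alpha = 2 quadruples the mass and halves the velocity.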

The above example comes from chapter 2.11 of Mathematical Methods of Classical Mechanics by Arnold, which contains three of the neatest problems to be found in any physics textbook; I reproduce them here. (It’s worth noting that Arnold cites Smith’s Mathematical Ideas in Biology for these problems.)

1. A desert animal has to cover great distances between sources of water. How does the maximal time the animal can run depend on the size L of the animal?

To be explicit, one can imagine the animal is a sphere (though the scaling holds even if it weren’t a sphere). The animal fills up on water at one lake and must run to the next lake, which is just at the limiting distance that it can run before dehydrating itself. The amount of water the animal can store goes as its volume, W = \alpha L^3, while the rate at which water is perspired away is proportional to the animal’s surface area, R = \rho L^2. We assume that the rate of perspiration \rho is constant, and hence over a time t the animal perspires a volume Rt = \rho L^2 t. Setting this equal to the volume of water stored, W, we see that the maximum time t_m goes as its length L. To be exact, t_m = \frac{\alpha}{\rho}L, but we’re not interested in overall constants.

2. How does the running velocity of an animal on level ground and uphill depend on the size L of the animal?

The trick here is to think about the power (energy per time) used. On level ground, the main resistance to motion is air resistance, which goes as the cross sectional area times the squared velocity F \propto v^2L^2. The power is obtained by multiplying by velocity, so that we have P \propto v^3 L^2. What is the power output of the animal? This goes as the heat output, which is proportional to the animal’s surface area, so that P\propto L^2. Setting these two equal we see that v \propto L^0. The running velocity of an animal on level ground does not depend on its size! (Well, it can be proportional to the logarithm of its size… as anyone who has done “Naive Dimensional Analysis” on divergent integrals would remind you.)

In the uphill case, however, the main resistance comes from gravity: climbing a height h costs energy mgh. Taking the slope of the hill to be constant, the power is then P \propto mgv \propto L^3 v. Setting this equal to P \propto L^2 from the surface-area argument above, we get v \propto L^{-1}. Arnold notes that a dog will easily run up a hill while a horse will slow its pace.

3. How does the height of an animal’s jump depend on its size?

The energy required to jump to a height h is mgh, and thus has the proportionality E \propto L^3 h. Muscles can produce a force F \propto L^2 (e.g. the strength of bones is proportional to their cross section), while the work accomplished by this force goes as W \propto FL \propto L^3. Thus setting the energy equal to the work, we find h \propto L^0. Arnold notes that “a jerboa and a kangaroo can jump to approximately the same height.” (Tall and short basketball players have roughly the same leaping ability, but being taller makes it easier to dunk.)

I hope you enjoyed that as much as I did.

One of the difficulties for chalk board jockeys (i.e. most theorists) is transferring our diagrams into our tex files. One of the top priorities is effectively drawing Feynman diagrams. There are two schools of thought on this:

  1. TeX purist: Generate Feynman diagrams within LaTeX using TeX packages and some software front end.
  2. Pragmatist: Generate diagrams any way you want, then import them as eps or pdf images into a TeX document.

The first option has the benefit of elegance and portability. A good — if not comprehensive — list of options is available at InsectNation. (I prefer JaxoDraw myself.) The trade-offs are that these options usually have a bit of a learning curve and tend to be a bit limited when you want to do something “outside of the box.”
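
To give a flavor of the `TeX purist’ route, here is roughly what a simple tree-level scattering diagram looks like with the feynmp package (one of the usual suspects on such lists). This is a from-memory sketch rather than a tested snippet, and the file name `tchannel’ is just a placeholder; check the package documentation before relying on the details.

% in the preamble: \usepackage{feynmp}
% compile with latex, run mpost on tchannel.mp, then latex again
\begin{fmffile}{tchannel}
  \begin{fmfgraph*}(120,80)
    \fmfleft{i1,i2}      % incoming particles on the left
    \fmfright{o1,o2}     % outgoing particles on the right
    \fmf{fermion}{i1,v1,o1}
    \fmf{fermion}{i2,v2,o2}
    \fmf{photon}{v1,v2}  % exchanged boson between the two vertices
  \end{fmfgraph*}
\end{fmffile}

Graphical front ends essentially generate source like this for you, one package or another.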

The second option allows one to use a full-fledged graphics program, but it’s tedious to make some Feynman components with very general tools [1]. Luckily, there’s a fantastic pair of how-to videos by AjabberWok for using Adobe Illustrator to create Feynman diagrams:

The handy feature is that one defines Illustrator brushes to implement the particular type of propagator: scalar, vector, gluon, etc. Thus all one has to do is draw the topology of the diagram and apply the appropriate brushes.

But Illustrator is ‘high end’ graphic design software. What is a student to do? If you’re really lucky, your adviser will give it to you [2].  In a pinch, many universities have site licenses for Adobe software for use in computer labs. A third option that you might not be aware of, however, is student licensing.

I recently discovered that the “academic discount” software that my university’s bookstore sells off its shelves is not the best deal students can get. Student-licensed software typically comes as a boxed set with no fancy packaging or manuals, but at an even steeper discount. One might also be able to get further discounts on old versions of this software. I was able to get a copy of the full Adobe Design Premium CS 3.3 suite for the cost of two or three hardback textbooks.

Now I can draw funky Feynman diagrams with brane-localized fields. 🙂

Notes

[1] This is like trying to use a lock picking kit to open a locked door: you have many tools that do many different things, but it’s more complicated than having a particular key for the particular door. This analogy, in turn, reminds me of an old barometer joke.

[2] I know of one lucky PhD student who got a copy of Illustrator from his adviser. I tried asking my adviser for a copy and he laughed at me.
