Today I’ll be reviewing P.M. Stevenson, “Dimensional Analysis in Field Theory,” Annals of Physics 132, 383 (1981). It’s a cute paper that provides some insight into the renormalization group.

A theory is a black box

A theory is a black box that we can shake to make predictions of physical observables.

We’ve already said a few cursory words on dimensional analysis and renormalization. It turns out that we can use simple dimensional analysis to gain some insight into the nature of the renormalization group without having to invoke the technical `heavy machinery’ required to do actual calculations.

First let us define a theory as a black box that is characterized by a Lagrangian and its corresponding parameters: coupling constants, masses, fields, etc. All these things, however, are contained within the black box and are in some sense abstract objects. One can ask the black box to predict physical observables, which can then be measured experimentally. Such observables could be cross sections, ratios of cross sections, or potentials, as shown in the image above.

Let’s now restrict ourselves to the case of a `naively-scale-invariant’ or `naively-dimensionless’ theory, i.e. one with no dimensionful couplings, such as massless \lambda\phi^4 theory or massless QCD. We shall further restrict ourselves to dimensionless observables, such as ratios of cross sections. Let’s call a general observable \rho(Q), where we have inserted a dependence on the energy Q with the foresight that such quantities renormalize with energy scale.

Dimensional Analysis

But one can immediately take a step back and realize that this is ridiculous. How could a dimensionless observable from a dimensionless theory have a nontrivial dependence on a dimensionful quantity, Q? Stevenson makes this more explicit by quoting a theorem of dimensional analysis:

Thm. A function f(x,y) which depends only on two massive variables x,y and which is

  1. dimensionless
  2. uniquely defined
  3. defined without any dimensionful constants

must then be a function of the ratio x/y only, f(x,y)=f(x/y).

Cor. If f(x,y) is independent of y, then f(x,y) is constant.

Then by the corollary, \rho(Q) must be constant. This is a problem, since our experiments show a Q dependence.

Evading Dimensional Analysis

The answer is that the theorem doesn’t apply: the theory inside the black box is not `uniquely defined,’ violating condition 2. This is what we meant by the stuff inside the black box being `abstract’: the Lagrangian actually describes a one-parameter family of theories with different bare couplings. That is to say, the black box is defined only up to a freedom in the renormalization conditions.

Now that we see that it is possible to have Q-dependence, it’s a bit of a curiosity how our dimensionless theory manages to define a dimensionful dependence of \rho without any dimensionful quantities to draw upon. The simplest way to do this is to have the theory define the first derivative:

\frac {d\rho}{dQ} = \frac{1}{Q} \beta(\rho),

where \beta is the usual beta function calculated in perturbation theory. It is dimensionless and is uniquely defined by the theory. Another way one can define Q dependence is to do so recursively; one can read Stevenson’s paper to see that this is equivalent to defining the \beta function.
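To see concretely how specifying only the first derivative determines \rho(Q) once a boundary condition is supplied, here is a small numerical sketch. The beta function and all numbers below are toy assumptions of mine (a one-loop-style \beta(\rho) = -b\rho^2), not anything taken from Stevenson’s paper:

```python
import math

# Toy beta function (an illustrative assumption, not from the paper):
# a one-loop-style beta(rho) = -b * rho^2.
B = 1.0

def beta(rho):
    return -B * rho**2

def run_rho(rho_mu, mu, Q, steps=10000):
    """Integrate d(rho)/d(log Q) = beta(rho) from scale mu up to Q with RK4."""
    h = (math.log(Q) - math.log(mu)) / steps
    rho = rho_mu
    for _ in range(steps):
        k1 = beta(rho)
        k2 = beta(rho + 0.5 * h * k1)
        k3 = beta(rho + 0.5 * h * k2)
        k4 = beta(rho + h * k3)
        rho += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return rho

# The beta function alone does not produce a number: we must also supply a
# renormalization condition, e.g. rho = 0.3 at mu = 91 (arbitrary units).
# That condition is precisely the one-parameter ambiguity discussed above.
print(run_rho(0.3, mu=91.0, Q=1000.0))
print(0.3 / (1.0 + B * 0.3 * math.log(1000.0 / 91.0)))  # exact solution, for comparison
```

Note that the black box (the beta function) fixes only how \rho changes with Q; the starting value at some reference scale has to come from outside, i.e. from experiment.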

One can integrate the equation for \beta to write,

\log Q + constant = \int^\rho_{-\infty} \frac{d\rho'}{\beta(\rho')} \equiv K(\rho).

The constant of integration now characterizes the one-parameter ambiguity of the theory. (The ambiguity can be mapped onto the lack of a boundary condition.) We may parameterize this ambiguity by writing

constant = K_0 - \log \mu,

for some arbitrary \mu of mass dimension 1. (This form is necessary to get a dimensionless logarithm on the left-hand side.) The appearance of this massive constant is something of a `virgin birth’ for the naively-dimensionless theory and is called dimensional transmutation. By setting Q=\mu we see that K_0 = K(\rho(\mu)). Thus we see finally that the integral of the \beta equation is

\log(Q/\mu) + K(\rho(\mu)) = K(\rho(Q)).

All of the one-parameter ambiguity of the theory is now packaged into the massive parameter \mu. K is an integral that comes from the \beta function, which is in turn specified by the Lagrangian of the theory. On the left-hand side we have quantities which depend on the arbitrary scale \mu while the right-hand side contains only quantities that depend on the energy scale Q.

If K(\rho(\mu)) vanishes at some scale \mu=\Lambda, then we can write our observable in terms of this scale,

\rho(Q) = K^{-1}(\log(Q/\Lambda)).
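For the toy choice \beta(\rho) = -\rho^2 the integral can be done explicitly: K(\rho) = 1/\rho, so \rho(Q) = 1/\log(Q/\Lambda). The sketch below (all numbers are illustrative assumptions) checks that \Lambda comes out the same no matter which renormalization point (\mu, \rho(\mu)) on the trajectory we use to compute it:

```python
import math

# Toy example (my assumption, not Stevenson's specific model): take
# beta(rho) = -rho^2, for which K(rho) = Int d(rho')/beta(rho') = 1/rho,
# choosing the integration constant so K has no additive piece.

def K(rho):
    return 1.0 / rho

def K_inv(x):
    return 1.0 / x

def Lambda_of(mu, rho_mu):
    # Lambda is the scale at which K(rho(mu)) vanishes:
    # log(Lambda/mu) + K(rho(mu)) = 0, so Lambda = mu * exp(-K(rho(mu))).
    return mu * math.exp(-K(rho_mu))

# Pick a renormalization point, then evolve to a second point on the
# same trajectory using log(Q/mu) + K(rho(mu)) = K(rho(Q)):
mu1, rho1 = 91.0, 0.30
mu2 = 500.0
rho2 = K_inv(math.log(mu2 / mu1) + K(rho1))

# Lambda is an RG invariant: either point gives the same value.
print(Lambda_of(mu1, rho1))
print(Lambda_of(mu2, rho2))

# The observable then depends only on Q/Lambda: rho(Q) = K^{-1}(log(Q/Lambda)).
Q = 2000.0
print(K_inv(math.log(Q / Lambda_of(mu1, rho1))))
```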

Note that \mu is arbitrary, while \Lambda is fixed for a particular theory. This latter quantity is rather interesting: even though it is an intrinsic property of the black box, it is not predicted by the black box; it must be fixed by explicit measurement of an observable.

I recently put up my first paper on the arXiv and have been dealing with the torrent of e-mails asking for citations. This is normal and part of the publication process, though I’ve been amused by some of the e-mails I’ve been getting…

  • One person decided to e-mail my adviser even though it was my e-mail address that was associated with the paper. There is a reason why my e-mail is the one associated with the paper: the senior collaborators don’t want to have to deal with your “please cite me” e-mails! Don’t worry, I discuss everything with my collaborators, but let’s keep things organized, yes?
  • E-mails that start with “I read your paper with great interest…” This is a very nice thing to say, but of course when you send it just 30 minutes after the paper is made public, then I know that you really mean: “I quickly searched your bibliography for my name… with great interest.”
  • All these e-mails make me wonder if anything I’ve done is original at all.
  • There is something to be said about being a competent writer. When I skim some of these papers begging for citations, it is clear why they’re not part of the ‘standard’ set of cited papers: they’re unreadable. Yes, being a native English speaker is a huge advantage here and yes, that’s unfair for those who aren’t native speakers, but that’s the way it is.
  • I’ll cite papers even though they don’t really have to be cited. This is partly to avoid confrontation, but also because I can sympathize with other grad students who keep an eye on their citations on SPIRES.

Since we appear to be in the business of summer school announcements, it is perhaps worth passing along the date and title for next year’s SLAC Summer Institute:

Revolutions on the Horizon
3 – 14 August 2009

The SLAC Summer Institutes are only half the length of TASI and tend to be at a more introductory level, but they are a great way to dive into a research area. One can get a sample by viewing some of their recorded lectures. (To the best of my knowledge they were the first summer school to regularly do this.)

The school website does not yet exist, but the following blurb is posted:

The topic is a study of all the upcoming experiments turning on soon and the big discoveries they will make. Website coming soon.

`Upcoming experiments’ certainly includes the LHC. But what else could be included to differentiate the school from their 2006 LHC program? Given SLAC’s shift towards cosmology, GLAST is also a likely `upcoming experiment.’ And then perhaps some flavour physics to round out the discussion?

The TASI 2009 website is up and the program looks very good. Sorry string theorists, this year will be phenomenology (for the second consecutive year), with lots of focus on the LHC, and cosmology. [Perhaps reflecting the apparent hiring shift toward astro-particle physics?]

Particle Physics:

  • Hsin-Chia Cheng  (Davis) – Introduction to extra dimensions
  • Roberto Contino (CERN) – The Higgs as a Pseudo-Goldstone boson
  • Patrick Fox (Fermilab) – Supersymmetry and the MSSM
  • Tony Gherghetta (Melbourne) – Warped extra dimensions and AdS/CFT
  • Eva Halkiadakis (Rutgers) – Introduction to the LHC experiments
  • Patrick Meade (IAS) – Gauge mediation of supersymmetry breaking
  • Maxim Perelstein (Cornell) – Introduction to collider physics
  • Gilad Perez (Weizmann Inst.) – Flavor physics
  • David Shih (IAS) – Dynamical supersymmetry breaking
  • Witold Skiba (Yale) – Effective theories and electroweak precision constraints
  • Kathryn Zurek (Fermilab) – Unexpected signals at the LHC

Cosmology:

  • Rachel Bean (Cornell) – Dark Energy
  • Daniel Baumann (Harvard) – Inflation
  • Manoj Kaplinghat (Irvine) – Large Scale Structure
  • Elena Pierpaoli (USC) – Cosmic Microwave Background
  • Richard Schnee (Syracuse) – Dark Matter Experiment
  • Michael Turner (Chicago) – Introduction to Cosmology
  • Neal Weiner (NYU) – Dark Matter Theory

The speakers appear to have been chosen to represent the `next generation’ of young faculty who have already started to shape physics in the ever-extended pre-LHC era. A few especially hot topics include Neal Weiner speaking on Dark Matter theory, Patrick Meade on [general] gauge mediation, and Tony Gherghetta on AdS/CFT “for phenomenologists.”

TASI is one of the ‘big’ summer schools in particle physics. Its primary clientele are later-stage PhD students who can take advantage of relatively broad programs to improve their breadth in physics. It is a fantastic way to get to know many of the up-and-coming people in one’s field.

With a little luck TASI will continue their recent trend of providing video lectures for those who are unable to attend.

A couple of days ago I found that my arXiv RSS feeds were a bit wonky — the author list disappeared! The arXiv feeds had been having trouble with properly displaying the author list for some time, but having it removed annoyed me so much that I e-mailed the good folks at the arXiv.

They responded and told me that the RSS 2.0 arXiv feed has everything fixed. Indeed, I’d been using an older version of the feed. The new version, I’m happy to announce, works beautifully. Here are the RSS links for hep-ph, hep-th, and hep-ex:

I personally use Google Reader. For more information about the arXiv feeds, see

For various reasons I’ve been having fun thinking about renormalization, so I thought I’d try to put together a post about renormalization in words (rather than equations).

The standard canon that field theory students learn is that when a calculation diverges, one has to (1) regularize the divergence and then (2) renormalize the theory. That is, one first parameterizes the divergence so as to work with explicitly finite quantities that diverge only in some limit of the parameter, and then recasts the entire theory for self-consistency with respect to this regularization.
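As a cartoon of the two steps, consider a toy logarithmically divergent integral, \int_0^\Lambda k\,dk/(k^2+m^2). (This is a stand-in of my own choosing, not a real Feynman amplitude.) Regularization means keeping the cutoff finite; renormalization, in this cartoon, amounts to subtracting the value at a reference mass:

```python
import math

def loop_integral(m, cutoff):
    """Closed form of the cutoff-regularized toy 'loop integral'
       Int_0^cutoff k dk / (k^2 + m^2) = (1/2) log(1 + cutoff^2/m^2).
    A toy stand-in for a log-divergent Feynman integral, not a real QFT amplitude."""
    return 0.5 * math.log(1.0 + (cutoff / m) ** 2)

# Step (1), regularization: the integral is finite at any fixed cutoff,
# but grows without bound as the cutoff is removed:
for cut in (1e2, 1e4, 1e6):
    print(loop_integral(1.0, cut))

# Step (2), renormalization (cartoon version): subtract the value at a
# reference mass -- a renormalization condition. The subtracted quantity
# has a finite cutoff -> infinity limit, namely log(2.0/1.0):
for cut in (1e2, 1e4, 1e6):
    print(loop_integral(1.0, cut) - loop_integral(2.0, cut))
```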

While there are some subtleties involved in picking a regularization scheme, we shall not worry about this much and will instead focus on what I think is really interesting: renormalization, i.e. the [surprising] behavior of theories as one changes scale.

The details of the general regularization-renormalization procedure can be found in every self-respecting quantum field theory textbook, but it can often be daunting to understand what’s going on physically rather than just technically. This is what I’d like to try to explore a bit.

First of all, renormalization is something that is intrinsically woven into quantum field theory rather than a `trick’ to get sensible results (as was often said in the 1960s). One way of looking at this is to say that we do not renormalize because our calculations find infinities, but rather because of the nature of quantum corrections in an interacting theory.

Recall the Lehmann-Symanzik-Zimmerman (LSZ) reduction procedure. Ha ha! Just kidding, nobody remembers the LSZ reduction formalism unless they find themselves in the unenviable position of teaching it.

Here’s what’s important: we understand the properties of free fields because their Lagrangian is quadratic and the path integral can be solved explicitly. But non-interacting theories are boring, so we usually play with interacting theories as perturbations on free theories. When we do this, however, things get a little out-of-whack.

Statements about free-field propagators, for example, are no longer strictly true because of the new field interactions. The two-point Green’s function is no longer the simple propagator of a field from one point to another, but now takes into account self-interactions of the field along the way. This leads one to the Lehmann-Källén form of the propagator and the spectral density function, which encodes intermediate bound states.

You can go back and read about those things in your favorite QFT text, but the point is this: we like to use “nice” properties of the free theory to work with our interacting theory. In order to maintain these “nice” properties we are required to rescale (renormalize) our fields and couplings. For example, we would like to maintain that a field’s propagator has a pole of unit residue at its physical mass, that the field operator annihilates the vacuum, and that the field is properly normalized. Assuming these properties, the LSZ reduction procedure tells us that we can calculate S-matrix elements in the usual way.

Suppose we start with a model, represented by some Lagrangian. We call this the bare Lagrangian. This is just something some theorist wrote down. The bare Lagrangian has parameters (masses, couplings), but they’re “just variables” — i.e. they needn’t be `directly’ related to measurable quantities. We rescale fields and shift couplings to fit the criteria of LSZ,

\phi = Z^{-1/2}\phi_{bare}
g = g_{bare} + \delta g.

We refer to these as the renormalized field and renormalized couplings. These quantities are finite and can be connected to experiments.

When we do calculations and find divergences, we can (usually) absorb them into the bare fields and couplings. Thus the counterterms \delta g and the field-strength renormalization Z are also formally divergent, but in just the right way to cancel the divergences of the loop calculation, leaving the renormalized quantities finite.
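Here is a toy numerical illustration of this bookkeeping. The relation between bare and renormalized coupling below, 1/g_bare = 1/g - b log(cutoff/\mu), is an assumed one-loop-style toy of mine, not any real theory’s renormalization:

```python
import math

b = 1.0  # toy beta-function coefficient (an assumption, not a real theory's value)

def g_bare(g_ren, mu, cutoff):
    """Toy 'bare' coupling: 1/g_bare = 1/g_ren - b*log(cutoff/mu).
    It depends strongly on the cutoff, blowing up as the cutoff grows."""
    return 1.0 / (1.0 / g_ren - b * math.log(cutoff / mu))

def g_physical(g_ren, mu, cutoff, Q):
    """Toy 'measured' coupling at scale Q, computed through the bare coupling:
    1/g_phys = 1/g_bare + b*log(cutoff/Q)."""
    gb = g_bare(g_ren, mu, cutoff)
    return 1.0 / (1.0 / gb + b * math.log(cutoff / Q))

# The bare coupling is wildly cutoff-dependent...
for cut in (1e3, 1e6):
    print(g_bare(0.1, 91.0, cut))

# ...but the physical prediction is not: the cutoff cancels exactly,
# leaving 1/g_phys = 1/g_ren + b*log(mu/Q), with no cutoff in sight.
for cut in (1e3, 1e6, 1e9):
    print(g_physical(0.1, 91.0, cut, Q=500.0))
```

The cancellation between the divergent bare coupling and the divergent loop logarithm is the arithmetic content of “absorbing divergences into counterterms.”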

That sets everything up for us. We haven’t really done anything, mind you, just set up all of the clockwork. In fact, the real beauty is seeing what happens when we let go and see what the machine does (the renormalization group). I’ll get to this in a future post.

Further reading: For beginning field theorists, I strongly recommend the heuristic description of renormalization in Zee’s QFT text. A good discussion of LSZ and the Lehmann-Källén form is found in the textbooks by Srednicki and Ticciati. Finally, for one of the best discussions of renormalization, the Les Houches lectures “Methods in Field Theory” (a paperback version is available for a reasonable price) are fantastic.

For those who might be interested, here is the e-mail announcement:

Announcing the 4th CERN-Fermilab Hadron Collider Physics Summer School

Dear Colleague

The 4th CERN-Fermilab Hadron Collider Physics Summer School will be held at CERN from June 8-17 2009. The CERN-Fermilab Hadron Collider Physics Summer School is targeted particularly at young postdocs in experimental High Energy Physics (HEP), as well as senior PhD students in HEP phenomenology, working towards the completion of their thesis project.

The School will include ten days of lectures and discussions, with one free day in the middle of the period. Scholarship funds will be available to support some participants. Updated information and online applications are available at the school web site:

The deadline for applications and reference letters is February 21st, 2009.

Please circulate this announcement to whomever could be interested to participate in this school.

Best Regards,

[Local organizing committee]

One can look at previous schools to get a feel for the content of the lectures. Note that this does appear to conflict with TASI09 and part of the SUSY09 conference.

By the way, the deadline for the Spring School on Superstring Theory is in a month. Those of you of the stringy persuasion might want to consider applying, since there appear to be no other major string schools in 2009. (We’re still waiting to hear whether Perimeter will be hosting a summer school this year.)