December 22, 2008
Posted by Sparticles under Announcements
SSI 2009
Since we appear to be in the business of summer school announcements, it is perhaps worth passing along the date and title for next year’s SLAC Summer Institute:
Revolutions on the Horizon
3 – 14 August 2009
The SLAC Summer Institutes are only half the length of TASI and tend to be at a more introductory level, but they are great ways to dive into a research area. One can get a sample by viewing some of their recorded lectures. (To the best of my knowledge they were the first summer school to regularly do this.)
The school website does not yet exist, but the following blurb is posted:
The topic is a study of all the upcoming experiments turning on soon and the big discoveries they will make. Website coming soon.
“Upcoming experiments” certainly includes the LHC. But what else could be included to differentiate the school from its 2006 LHC program? Given SLAC’s shift towards cosmology, GLAST is also a likely “upcoming experiment.” And then perhaps some flavour physics to round out the discussion?
December 20, 2008
Posted by Sparticles under Announcements
The TASI 2009 website is up and the program looks very good. Sorry, string theorists: this year will be phenomenology (for the second consecutive year), with lots of focus on the LHC and on cosmology. [Perhaps reflecting the apparent hiring shift toward astro-particle physics?]
- Hsin-Chia Cheng (Davis) – Introduction to extra dimensions
- Roberto Contino (CERN) – The Higgs as a Pseudo-Goldstone boson
- Patrick Fox (Fermilab) – Supersymmetry and the MSSM
- Tony Gherghetta (Melbourne) – Warped extra dimensions and AdS/CFT
- Eva Halkiadakis (Rutgers) – Introduction to the LHC experiments
- Patrick Meade (IAS) – Gauge mediation of supersymmetry breaking
- Maxim Perelstein (Cornell) – Introduction to collider physics
- Gilad Perez (Weizmann Inst.) – Flavor physics
- David Shih (IAS) – Dynamical supersymmetry breaking
- Witold Skiba (Yale) – Effective theories and electroweak precision constraints
- Kathryn Zurek (Fermilab) – Unexpected signals at the LHC
- Rachel Bean (Cornell) – Dark Energy
- Daniel Baumann (Harvard) – Inflation
- Manoj Kaplinghat (Irvine) – Large Scale Structure
- Elena Pierpaoli (USC) – Cosmic Microwave Background
- Richard Schnee (Syracuse) – Dark Matter Experiment
- Michael Turner (Chicago) – Introduction to Cosmology
- Neal Weiner (NYU) – Dark Matter Theory
The speakers appear to have been chosen to represent the “next generation” of young faculty who have already started to shape physics in the ever-extended pre-LHC era. A few especially hot topics include Neal Weiner speaking on Dark Matter theory, Patrick Meade on [general] gauge mediation, and Tony Gherghetta on AdS/CFT “for phenomenologists.”
TASI is one of the ‘big’ summer schools in particle physics. Its primary clientele are later-stage PhD students, who can take advantage of its relatively broad program to round out their background in physics. It is a fantastic way to get to know many of the up-and-coming people in one’s field.
With a little luck TASI will continue its recent trend of providing video lectures for those who are unable to attend.
December 19, 2008
Posted by Sparticles under Announcements
New arXiv RSS 2.0 feeds
A couple of days ago I found that my arXiv RSS feeds were a bit wonky — the author list disappeared! The arXiv feeds had been having trouble with properly displaying the author list for some time, but having it removed annoyed me so much that I e-mailed the good folks at the arXiv.
They responded and told me that everything is fixed in the RSS 2.0 arXiv feeds. Indeed, I had been using an older version of the feed. The new version, I’m happy to announce, works beautifully. Here are the RSS links for hep-ph, hep-th, and hep-ex:
I personally use Google Reader. For more information about the arXiv feeds, see http://arxiv.org/help/rss.
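For the programmatically inclined, here is a minimal sketch of how one might parse an RSS 2.0 feed of this type with the Python standard library. The sample XML below is made up to mimic the arXiv feed format (the real feed URLs and exact format are documented on the arXiv RSS help page linked above):

```python
import xml.etree.ElementTree as ET

# Hypothetical sample imitating the arXiv RSS 2.0 format; in practice you
# would fetch the real feed for hep-ph, hep-th, or hep-ex from the arXiv.
SAMPLE = """<?xml version="1.0"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>hep-ph updates</title>
    <item>
      <title>Some paper title (arXiv:0812.9999)</title>
      <dc:creator>A. Author, B. Author</dc:creator>
    </item>
  </channel>
</rss>"""

# RSS 2.0 carries the author list in the Dublin Core namespace.
DC = "{http://purl.org/dc/elements/1.1/}"

def parse_feed(xml_text):
    """Return a list of (title, authors) pairs from an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        authors = item.findtext(DC + "creator", default="")
        items.append((title, authors))
    return items

for title, authors in parse_feed(SAMPLE):
    print(title, "--", authors)
```

The point of using the new feed version is precisely that the `dc:creator` field is populated again, so the author list shows up in one’s reader.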
December 19, 2008
Posted by Sparticles under Field Theory
For various reasons I’ve been having fun thinking about renormalization, so I thought I’d try to put together a post about renormalization in words (rather than equations).
The standard canon that field theory students learn is that when a calculation diverges, one has to (1) regularize the divergence and then (2) renormalize the theory. That is, one first parameterizes the divergence so that one can work with explicitly finite quantities that diverge only in some limit of the parameter. Next, one recasts the entire theory self-consistently with respect to this regularization.
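For instance, in dimensional regularization (one common scheme), a logarithmically divergent loop integral is computed in $d$ dimensions, where it converges, and the divergence reappears as a pole at $d = 4$:

```latex
\[
\int \frac{d^d k}{(2\pi)^d}\,
  \frac{1}{\left(k^2 - \Delta\right)^2}
  = \frac{i}{(4\pi)^{d/2}}\,
    \Gamma\!\left(2 - \tfrac{d}{2}\right)\,
    \Delta^{\frac{d}{2}-2} ,
\]
```

which blows up as $\Gamma(2-d/2) \sim 2/(4-d)$ when $d \to 4$. The parameter $\epsilon = 4 - d$ plays the role of the regulator in step (1).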
While there are some subtleties involved in picking a regularization scheme, we shall not worry about this much and will instead focus on what I think is really interesting: renormalization, i.e. the [surprising] behavior of theories as one changes scale.
The details of the general regularization-renormalization procedure can be found in every self-respecting quantum field theory textbook, but it can often be daunting to understand what’s going on physically rather than just technically. This is what I’d like to try to explore a bit.
First of all, renormalization is something that is intrinsically woven into quantum field theory rather than a `trick’ to get sensible results (as was often said in the 1960s). One way of looking at this is to say that we do not renormalize because our calculations find infinities, but rather because of the nature of quantum corrections in an interacting theory.
Recall the Lehmann-Symanzik-Zimmerman (LSZ) reduction procedure. Ha ha! Just kidding, nobody remembers the LSZ reduction formalism unless they find themselves in the unenviable position of teaching it.
Here’s what’s important: we understand the properties of free fields because their Lagrangian is quadratic and the path integral can be solved explicitly. But non-interacting theories are boring, so we usually play with interacting theories as perturbations on free theories. When we do this, however, things get a little out-of-whack.
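To be concrete about what “solved explicitly” means: the Gaussian path integral gives the free two-point function in closed form. In the mostly-minus metric convention,

```latex
\[
\langle 0 |\, T\,\phi(x)\phi(y)\, | 0 \rangle_{\rm free}
  = \int \frac{d^4p}{(2\pi)^4}\,
    \frac{i\, e^{-ip\cdot(x-y)}}{p^2 - m^2 + i\epsilon} .
\]
```

It is precisely this kind of exact statement that interactions spoil.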
Statements about free-field propagators, for example, are no longer strictly true because of the new field interactions. The two-point Green’s function is no longer the simple propagator of a field from one point to another, but now takes into account self-interactions of the field along the way. This leads one to the Lehmann-Kallen form of the propagator and the spectral density function, which encodes the intermediate states.
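Schematically, the Lehmann-Kallen (spectral) form of the exact propagator reads (conventions roughly as in Peskin & Schroeder):

```latex
\[
\int d^4x\, e^{ip\cdot x}\,
  \langle \Omega |\, T\,\phi(x)\phi(0)\, | \Omega \rangle
  = \frac{i Z}{p^2 - m^2 + i\epsilon}
  + \int_{\sim 4m^2}^{\infty} \frac{ds}{2\pi}\,
      \rho(s)\, \frac{i}{p^2 - s + i\epsilon} ,
\]
```

with an isolated pole at the physical mass $m$ (residue $Z$) and a spectral density $\rho(s)$ encoding the multi-particle continuum.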
You can go back and read about those things in your favorite QFT text, but the point is this: we like to use “nice” properties of the free theory to work with our interacting theory. In order to maintain these nice properties we are required to rescale (renormalize) our fields and couplings. For example, we would like to maintain that a field’s propagator has a pole with unit residue at its physical mass, that the field operator annihilates the vacuum, and that the field is properly normalized. Assuming these properties, the LSZ reduction procedure tells us that we can calculate S-matrix elements in the usual way.
Suppose we start with a model, represented by some Lagrangian. We call this the bare Lagrangian; it is just something some theorist wrote down. The bare Lagrangian has parameters (masses, couplings), but they are “just variables”: they needn’t be directly related to measurable quantities. We rescale the fields and shift the couplings to fit the criteria of LSZ.
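Schematically (precise forms are theory- and scheme-dependent), the rescalings look like

```latex
\[
\phi_0 = Z^{1/2}\,\phi , \qquad
m_0^2 = m^2 + \delta m^2 , \qquad
g_0 = Z_g\, g ,
\]
```

where the subscript 0 denotes the bare quantities of the previous paragraph.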
We refer to these as the renormalized field and renormalized couplings. These quantities are finite and can be connected to experiments.
When we do calculations and find divergences, we can (usually) absorb them into the bare fields and couplings. Thus the counterterms and the field-strength renormalization are also formally divergent, but in a way that cancels the divergences of the bare quantities, leaving the renormalized quantities finite.
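As a concrete (and schematic, with coefficients depending on conventions) example: in $\phi^4$ theory with a momentum cutoff $\Lambda$, the one-loop self-energy shifts the mass quadratically,

```latex
\[
m_{\rm phys}^2 \;=\; m_0^2 \;+\; \frac{\lambda}{32\pi^2}\,\Lambda^2 \;+\; \cdots ,
\]
```

so keeping $m_{\rm phys}$ finite as $\Lambda \to \infty$ forces the bare mass $m_0^2$ to contain a compensating, formally divergent piece.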
That sets everything up for us. We haven’t really done anything, mind you, just set up all of the clockwork. In fact, the real beauty is seeing what happens when we let go and see what the machine does (the renormalization group). I’ll get to this in a future post.
Further reading: for beginning field theorists, I strongly recommend the heuristic description of renormalization in Zee’s QFT text. A good discussion of LSZ and the Lehmann-Kallen form is found in the textbooks by Srednicki and Ticciati. Finally, for one of the best discussions of renormalization, the Les Houches lectures “Methods in Field Theory” (a paperback version is available for a reasonable price) are fantastic.
December 16, 2008
For those who might be interested, here is the e-mail announcement:
Announcing the 4th CERN-Fermilab Hadron Collider Physics Summer School
The 4th CERN-Fermilab Hadron Collider Physics Summer School will be held at CERN from June 8-17 2009. The CERN-Fermilab Hadron Collider Physics Summer School is targeted particularly at young postdocs in experimental High Energy Physics (HEP), as well as senior PhD students in HEP phenomenology, working towards the completion of their thesis project.
The School will include ten days of lectures and discussions, with one free day in the middle of the period. Scholarship funds will be available to support some participants. Updated information and online applications are available at the school web site: http://cern.ch/hcpss
The deadline for applications and reference letters is February 21st, 2009.
Please circulate this announcement to whomever could be interested to participate in this school.
[Local organizing committee]
One can look at previous schools to get a feel for the content of the lectures. Note that this does appear to conflict with TASI09 and part of the SUSY09 conference.
By the way, the deadline for the Spring School on Superstring Theory is in a month. Those of you of the stringy persuasion might want to consider applying, since there appear to be no other major string schools in 2009. (We’re still waiting to hear whether Perimeter will be hosting a summer school this year.)
December 15, 2008
Posted by Sparticles under BSM
This morning brought another suggestive (if I may be so bold as to say so) experimental hint of new physics in the leptonic sector, in the form of a paper from the MiniBooNE collaboration: “Unexplained Excess of Electron-Like Events From a 1-GeV Neutrino Beam” (arXiv:0812.2243).
Recall that the past two months have also brought us a speculative “multi-muon anomaly” at CDF (arXiv:0810.5357, see also Tommaso’s summary), the publication of the PAMELA cosmic-ray positron excess (arXiv:0810.4995), and related publications by ATIC (Nature) and HESS (arXiv:0811.3894) on the electron/positron spectrum. Apparently the leptonic sector has decided to be kind (if coy) to model-builders in light of LHC delays. Now MiniBooNE joins in on the fun.
MiniBooNE neutrino low-energy excess. Image from arXiv:0812.2243.
For an excellent summary of the MiniBooNE experiment, see Heather Ray’s post on Cosmic Variance. (Unfortunately their TeX didn’t transfer over well since they moved to Discover… hopefully someone over there will fix up all the LaTeX tags that are now garbled?)
As I’m writing this Symmetry Breaking has published a post on the result that summarizes the recent news. Here’s my own quick-and-dirty summary as I understand it:
In April 2007, MiniBooNE published results that showed no signs of the LSND anomaly (hep-ex/0104049), leading many model-builders to immediately jump off the neutrino bandwagon (see Jester’s theory report). The collaboration noted, however, a curious excess in their data at lower energies, in an energy region that was not (at least on face value) related to the unrequited LSND hint of new physics. This was left for further investigation and data analysis.
Now, after more than a year of said investigation and analysis, the excess is still there (see image above). What’s even more interesting is that the bump does not appear as pronounced in the antineutrino data, according to a recent report (see image below). LSND and the fresh-on-the-arXiv MiniBooNE paper were analyses based on neutrinos, so it’s a bit surprising that the MiniBooNE antineutrino analysis doesn’t show a similar feature.
MiniBooNE antineutrino data showing a much weaker signal at low energies compared to the neutrino data. Image from Fermilab.
I hope to spend some time reading up on this over the holidays; I should then be able to give a more coherent summary.
December 11, 2008
These are some notes on arXiv:hep-th/0701050 by Denef, Douglas and Kachru.
Flux compactification is an ominous term that often scares people away. Here are some notes I came across that give a simple idea of how to do moduli stabilization using flux compactification in 6 dimensions. We choose 6 dimensions because it equals 4 (the Minkowski spacetime we live in) plus 2 (like a torus, the simplest low-dimensional Calabi-Yau we can find).
The idea is that we start with the Einstein-Hilbert action and dimensionally reduce it to a 4d spacetime.
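Concretely, the starting point is the 6d Einstein-Hilbert action (with $M$ the 6d gravitational mass scale; the power $M^4$ follows from dimensional analysis, since the gravitational coupling in $D$ dimensions carries mass dimension $D-2$):

```latex
\[
S_6 = M^4 \int d^6x\, \sqrt{-G}\; \mathcal{R}_6 .
\]
```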
A compact 2d internal space can be characterized by its genus $g$ ($g$ is the number of holes in the “donut”: $g=0$ is the sphere, $g=1$ the torus, …). An ansatz for the 6d metric can be taken to be $ds^2 = g_{\mu\nu}(x)\,dx^\mu dx^\nu + R^2(x)\,\tilde g_{mn}(y)\,dy^m dy^n$, where $R^2$ sets the volume of the 2d manifold $\Sigma_g$ (with $\tilde g_{mn}$ a fixed fiducial metric of unit volume). Then the action can be written as $S = M^4 \int d^4x\,\sqrt{-g}\,\big( R^2\,\mathcal{R}_4 + \int_{\Sigma_g}\sqrt{\tilde g}\,\tilde{\mathcal{R}}_2 + \cdots \big)$.
We realize that $\int_{\Sigma_g}\sqrt{\tilde g}\,\tilde{\mathcal{R}}_2$ is a topological constant which equals $4\pi\chi = 4\pi(2-2g)$ (by Gauss–Bonnet), and rescale to the Einstein frame (where we have the canonical Einstein-Hilbert action $M_{pl}^2\sqrt{-g_E}\,\mathcal{R}_E$): $g_{\mu\nu} = R^{-2}\,g^E_{\mu\nu}$. We find the 4d Lagrangian to be $\mathcal{L} = \sqrt{-g_E}\big( M_{pl}^2\,\mathcal{R}_E - V_{\rm curv}(R) \big)$, where $M_{pl}^2 = M^4$ (with $M$ the 6d gravitational scale) and $V_{\rm curv}(R) = 4\pi(2g-2)\,M^4/R^4$. Apparently this one term is not enough to help us stabilize the volume modulus $R$.
Let’s add new ingredients: suppose there are $n$ units of magnetic flux threading the 2d internal space, i.e. $\int_{\Sigma_g} F_2 = n$. Then the $|F_2|^2$ term in the 6d action gives a contribution proportional to $n^2$: since $F_2 \sim n/R^2$, one finds $\sqrt{-G}\,|F_2|^2 \sim \sqrt{-g}\,n^2/R^2$, and after rescaling to the Einstein frame we obtain $V_{\rm flux}(R) \propto n^2 M^4/R^6$. Now if $g=0$ (a sphere, for which the curvature contribution is negative), it is easy to see that the two terms can compete with each other and stabilize $R$.
If furthermore we add $m$ O-planes (an ingredient with negative tension), we have one more term in the potential, $V_O(R) \propto -\,m\,M^4/R^4$, which can stabilize the modulus even when we have $g=1$, i.e. a torus (where the curvature term vanishes).
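Putting the pieces together, the schematic potential for the volume modulus is (suppressing positive $O(1)$ coefficients, here called $c_O$ and $c_F$, with $M$ the 6d gravitational scale):

```latex
\[
V(R) = \frac{4\pi(2g-2)\,M^4}{R^4}
     \;-\; \frac{c_O\, m\, M^4}{R^4}
     \;+\; \frac{c_F\, n^2\, M^4}{R^6} .
\]
```

For the torus ($g=1$) the curvature term vanishes, and setting $V'(R_*) = 0$ gives $R_*^2 = 3 c_F n^2 / (2 c_O m)$: a genuine minimum with $V(R_*) < 0$ (an AdS$_4$ vacuum), with larger flux $n$ pushing the stabilized volume to larger values.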
In 10-dimensional string theory we adopt the same idea: the inclusion of fluxes, branes, and O-planes gives us a potential that can eventually stabilize all the moduli.