Journal of the
WASHINGTON
ACADEMY OF SCIENCES
Volume 97
Number 1
Spring 2011
Editor's Comments J. Maffucci i
Instructions to Authors iii
Affiliated Institutions iv
Grain, Pulses and Olives: Diet in Ancient Rome M. Brown 1
The Cosmic Microwave Background S. Howard 25
Gender and International Collaborations L. Frehill and K. Zippel 49
Minutes: Science is Murder R. Hietala 71
Membership Application 83
ISSN 0043-0439
Issued Quarterly at Washington DC
Washington Academy of Sciences
Founded in 1898
Board of Managers
Elected Officers
President
Mark Holland
President Elect
Gerard Christman
Treasurer
Larry Millstein
Secretary
James Cole
Vice President, Administration
Lisa Frehill
Vice President, Membership
Sethanne Howard
Vice President, Junior Academy
Dick Davies
Vice President, Affiliated Societies
E. Eugene Williams
Members at Large
Denise Ingram
Terrell Erickson
Frank Haig, S.J.
Alianna Maren
Daryl Chubin
Michael Cohen
Past President: Kiki Ikossi
Affiliated Society Delegates:
Shown on back cover
Editor of the Journal
Jacqueline Maffucci
Associate Editor:
Sethanne Howard
Academy Office
Washington Academy of Sciences
6th Floor
1200 New York Ave NW
Washington, DC 20005
Phone: 202/326-8975
The Journal of the Washington Academy of
Sciences
The Journal is the official organ of the Academy.
It publishes articles on science policy, the history
of science, critical reviews, original science
research, proceedings of scholarly meetings of
its Affiliated Societies, and other items of interest
to its members. It is published quarterly. The last
issue of the year contains a directory of the
current membership of the Academy.
Subscription Rates
Members, fellows, and life members in good
standing receive the Journal free of charge.
Subscriptions are available on a calendar year
basis, payable in advance. Payment must be
made in U.S. currency at the following rates.
US and Canada $30.00
Other Countries $35.00
Single Copies (when available) $15.00
Claims for Missing Issues
Claims must be received within 65 days of
mailing. Claims will not be allowed if non-
delivery was the result of failure to notify the
Academy of a change of address.
Notification of Change of Address
Address changes should be sent promptly to
the Academy Office. Notification should
contain both old and new addresses and zip
codes.
POSTMASTER:
Send address changes to WAS, 6th Floor,
1200 New York Ave. NW
Washington, DC 20005
Journal of the Washington Academy of
Sciences (ISSN 0043-0439)
Published by the Washington Academy of
Sciences 202/326-8975
email: was@washacadsci.org
website: www.washacadsci.org
Editor’s Comments
Having been editor for this Journal for over a year now, one thing that I
can’t get over is the diversity of topics that cross my desk. As a scientist
with a specific specialty field, I find that it’s often easy to take for granted
how ubiquitous science is. But it is exactly for this reason that I was first
attracted to science.
Recently I’ve been seeing more advertisements for citizen science
projects. I have to say that I love this idea. What better way to get the
public involved and interested in science than by making it accessible and
asking for their participation? This is particularly important for engaging
the younger generations.
As the summer months approach, I challenge all of our members to try to
engage the public, and particularly younger students, in science and
research. This can be accomplished through informal talks, blog posts,
newspaper articles, mentoring programs, science fairs, and the list goes on.
If you are looking for some good citizen science projects, I recommend
visiting www.scienceforcitizens.net, which offers a fairly comprehensive
list of research projects that are recruiting citizens to assist. This could be a
great way to engage students during their summer break.
I encourage you to get creative in engaging your students, family
members, friends, and neighbors. This is an opportunity to share the
passion that we all have for the science world.
This issue of the Journal is a great demonstration of the diverse nature of
scientific research. The first article, Grain, Pulses and Olives: An Attempt
toward a Quantitative Approach to Diet in Ancient Rome, authored by
Madeline Brown, takes a comprehensive approach to determining the diets
of Romans in classical antiquity. Ms. Brown conducted this research
during a ten-week internship at the National Museum of Natural History
following her graduation from Brown University. Following this is an
article by Sethanne Howard exploring those 'wrinkles in space-time'
referred to as the Cosmic Microwave Background. The Cosmic Microwave
Background: Songs in the Universe explains CMB experiments and what
they tell us about how the universe came to be. Lisa Frehill and Kathrin
Zippel then present Gender and International Collaborations of Academic
Scientists and Engineers: Findings from the Survey of Doctorate
Recipients, 2006, where they focus on gender differences among doctoral
scientists and engineers when examining the extent to which they
collaborate internationally. Finally, we present the minutes from the WAS
2nd annual Science is Murder event, during which authors Lawrence
Goldstone, Ellen Crosby, Louis Bayard, and Dana Cameron talked about
their use of science and incorporation of research into their murder
mysteries.
Enjoy!
Jacqueline Maffucci, PhD
Editor, Journal of the Washington Academy of Sciences
INSTRUCTIONS TO AUTHORS
1. Manuscripts should be in Word (Office 03/07) and not PDF.
2. They should be 6,000 words or fewer (exceptions may be made by
the Editor). If there are 7 or more graphics, reduce the number of
words.
3. Graphics (photographs, drawings, figures, tables) must be in
graytone only (no color accepted), and be easily resizable by the
editors to fit the Journal’s page size. Do not wrap text around the
graphics.
4. References (and bibliography, if included) may be in the format
generally acceptable for the disciplinary or professional field
represented by the manuscript. They must be accurate, complete,
and consistent in format throughout the paper.
5. Include both an e-mail address and a postal address for the author
(or primary author) including title and institutional affiliation if
any.
6. Papers are peer reviewed.
7. Send Manuscripts by e-mail as an attachment, or on a CD, to
Journal@washacadsci.org or directly to the editor, Dr. Jacqueline
Maffucci (jamaffucci@gmail.com). Hard copy cannot be accepted.
Manuscripts can be accepted by any of the Board of Discipline
Editors.
Emanuela Appetiti - anthropology at eappetiti@hotmail.com
Elizabeth Corona - systems science at elizabethcorona@gmail.com
Jim Eigenreider - science education at jim@deepwater.org
Terrell Erickson - environmental natural sciences at
terrell.erickson1@wdc.usda.gov
Mark Holland - botany at maholland@salisbury.edu
Kiki Ikossi - engineering at ikossi@ieee.org
Carol Lacampagne - mathematics at clacampagne@earthlink.net
Raj Madhaven - engineering at raj.madhaven@nist.gov
Kent Miller - computer sciences at kent.l.miller@alumni.cmu.edu
Jean Mielczarek - physics and biology at mielczar@physics.gmu.edu
Robin Stombler - health at rstombler@auburnstrat.com
Alain Touwaide - history of medicine at atouwaide@hotmail.com
Steve Tracton - atmospheric studies at straction@hotmail.com
AFFILIATED INSTITUTIONS
The National Institute For Standards and Technology
Meadowlark Botanical Gardens
The John W. Kluge Center of the Library of Congress
Potomac Overlook Regional Park
Koshland Science Museum
American Registry of Pathology
Living Oceans Foundation
Grain, Pulses and Olives: An Attempt toward a
Quantitative Approach to Diet in Ancient Rome
Madeline Brown
Smithsonian Institution, NHRE Summer 2010 Intern
Foreword
The paper that follows is the first article by Madeline Brown. In May
2010, she obtained a degree in Anthropology from Brown University and,
exactly two days later, she was at the National Museum of Natural History
of the Smithsonian Institution. With seventeen other college students, she
had been selected for a ten week NHRE Internship (Natural History
Research Experience Internship). Based on her major in Botany and
Ethnobotany, and her interest in Classical Culture (including a class on
Roman food), Madeline Brown was directed to my unit, which specializes
in the study of medicine, botany, and medicinal plants in the
Mediterranean world from the most remote antiquity to the dawn of
modern science. I suggested that she explore Roman diet as a possible
source of the Mediterranean alimentary tradition. The present article is a
presentation of her research, which was entitled “What did ancient
Romans eat, and why?” This is the first essay by a freshly graduated
student who will probably become a member of the future scientific
community. It results from ten weeks of hard work, often late in the
evening in the empty US National Herbarium, taking advantage of the
Historia Plantarum collection and the documentation and knowledge
accumulated in the Institute for the Preservation of Medical Traditions.
However limited it may be judged, this essay opens the way
for future research and is the first announcement of a scientist in the
making.
I am grateful to the Washington Academy of Sciences and the
Smithsonian Institution, and also to the Institute for the Preservation of
Medical Traditions, for opening their doors to the next generation of
scientists and offering them an opportunity to communicate their work,
their ideas, and their enthusiasm for the scientific enterprise.
Alain Touwaide
Smithsonian Institution
Introduction
The dietary habits of Romans in Classical Antiquity have been
discussed and qualitatively reconstructed in a number of previous studies,
but none of these prior efforts has approached the diet's nutritional and
biological properties in a comprehensive or systematic way. In addition,
modern dieticians and scholars alike have been interested in the
contemporary Mediterranean diet ever since Ancel Keys began his
landmark research in the 1950s on the potential health benefits
experienced by those who eat traditional Mediterranean diets (Keys,
1970). Yet despite popular interest in this “traditional” Mediterranean diet,
little work has been conducted that attempts to uncover the true origins or
the cultural, biological, and nutritional properties of this professed
traditional diet. The aforementioned paucity of nutritional and biological
quantification in previous studies on the ancient Roman diet may, in part,
be responsible for the fact that few connections have been made between
the diets of the ancient Romans and those of contemporary Mediterranean
people. This pilot study suggests that research examining the actual
biological and nutritional properties of the ancient Roman diet may enable
us to better understand both the origins of the contemporary
Mediterranean diet as well as how this diet has changed over the past two
millennia.
In this study, the ancient Roman diet is defined as the range of
foodstuffs likely eaten in the Mediterranean region from the second
century BC up through the fifth century AD. These ranges of time, region,
and foodstuffs have been determined by the time period, geographic
coverage, and food items that can be traced through the seven primary
texts referred to throughout this study. In addition, while the Roman
Empire spanned a wide range of areas in Europe, Africa, and the Middle
East, this pilot study focuses on the Mediterranean parts of the Roman
Empire, specifically Rome and greater Italy. Regional variations in the
types and quantities of available foodstuffs undoubtedly occurred
throughout the Roman Empire and warrant examination by further studies
on the quantification of ancient nutrition.
Prior studies on the nutritional properties of the ancient Roman diet
have tended to focus on the few food groups, such as cereals and vegetable
oils, for which there is fairly concrete textual evidence regarding the
extent of their dietary contributions. Garnsey (1998) has made perhaps the
most thorough attempt thus far to quantify the caloric and nutritional
properties of Roman diets. Based on classical literary references regarding
the different amounts of grain allotted to various members of Roman
society, he calculated the caloric contribution of cereals to the diets of
members of each stratum of Roman society (Garnsey, 1998). For example,
Garnsey concludes that the approximately 33 kg, or five modii (the modius
being an ancient Roman unit of measure, with one modius equal to about
6.6 kg) of grain per person per month provided under the Roman state's
frumentatio (the allotment of grain provided to Roman citizens by the
state) during the 1st century AD would have been enough grain to provide
about 3,700 kcal per person per day. Garnsey (1998: 229-230) notes that
this amounts to almost twice the daily energy needs of an average human
being. It must be noted however, that much of this grain would have been
both inedible (due to prolonged storage and slow distribution) and
contaminated with rocks, dirt or other heavy debris, which would have to
be removed from the grain during the winnowing and cleaning process
before it could be consumed. Therefore, it can be assumed that ancient
Roman citizens receiving the frumentatio would not have actually had
access to the full amount of grain (and therefore calories) in their 33 kg
allotment.
In addition, Garnsey (1998: 236-237) found that Cato's
recommended three modii (c. 19.8 kg) of grain per month for shepherds
and estate domestic staff would provide c. 2,200 kcal per day, while
the four modii (c. 26.4 kg) for agricultural laborers (or slaves) provide c.
2,960 kcal per day. In another notable study, Foxhall and Forbes (1982)
suggest that the ancient Romans obtained around 75% of their daily
caloric needs from cereals. Other scholars maintain, however, that this
calculation is merely an estimate, and probably a high one (Garnsey, 1998;
see also Schneider, 2006: 916).
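The arithmetic behind these caloric estimates is easy to reproduce. The short Python sketch below, included purely as an illustration, converts a monthly allotment in modii into kilocalories per day using the figures already quoted (one modius of wheat at roughly 6.6 kg; durum wheat at about 339 kcal per 100 g, per the USDA data cited later in this paper); the function name and the 30-day month are assumptions of the sketch, not part of Garnsey's or Schneider's calculations.

```python
# Reproduce the rough caloric estimates for Roman monthly grain allotments
# quoted from Garnsey (1998) and Schneider (2006).

MODIUS_KG = 6.6        # approximate mass of one modius of wheat
KCAL_PER_100G = 339    # durum wheat, USDA Nutrient Data Laboratory
DAYS_PER_MONTH = 30    # simplifying assumption for the conversion

def kcal_per_day(modii_per_month: float) -> float:
    """Daily calories supplied by a monthly grain allotment given in modii."""
    grams_per_day = modii_per_month * MODIUS_KG * 1000 / DAYS_PER_MONTH
    return grams_per_day * KCAL_PER_100G / 100

for label, modii in [("frumentatio (5 modii)", 5),
                     ("shepherds (3 modii)", 3),
                     ("agricultural laborers (4 modii)", 4)]:
    print(f"{label}: about {kcal_per_day(modii):.0f} kcal per day")
# Prints roughly 3,729, 2,237, and 2,983 kcal per day, in line with the
# approximate figures cited above.
```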
While these previous studies provide useful frameworks for
thinking about the role of grain as a source of food energy for the Roman
people, none of them take their quantification methods further in an effort
to understand the nutritional properties of the Roman diet as a whole. This
study attempts to reconstruct the diet of early imperial period Romans, that
is, the inhabitants of Rome and Italy from around the second century BC
to the fifth century AD, by expanding on the quantitative methods from
previous studies on cereal consumption in ancient Rome and instead
quantifying not only the nutritional properties of its cereal components,
but also the nutritional properties of the Roman diet as a whole. This is
accomplished by determining both the individual and combined nutritional
properties of all of the foods that were likely part of the Roman menu.
Necessarily, this research relies on literary, archaeological, and botanical
data in order to both develop a comprehensive list of plant and animal
species that are likely to have been eaten by the ancient Romans as well as
to determine how these specific foods contributed to the overall health of
the Roman people.
Moreover, by combining modern Mediterranean dietary data with
what little is known about consumption patterns in ancient Rome, this
pilot study employs a more comprehensive method of quantifying the
general nutritional properties of ancient diets than has been used in the
past. The results of this analysis suggest that the high quantities of cereals
(namely wheat and barley) eaten by ancient Romans may have been
sufficient to provide the majority of their nutritive needs, with the
exception of vitamins A, C, and D, which they instead must have obtained
from fruits, vegetables, and exposure to the sun. This pilot study provides
a model for one method of quantifying the nutritional and biological
properties of the Roman diets and will hopefully help begin to lay the
groundwork for introducing further quantification and more holistic
evaluation methods into studies on a variety of ancient diets.
Methods
Primary Sources
This study relies on a review of both primary sources and
secondary literature discussing the Roman diet, from which a preliminary
list of 321 different foods that the ancient Romans likely consumed was
developed. Seven primary sources were surveyed to obtain quantitative
data on Roman literary references to food (see Table 1). While this cross-
section of Roman literature covers only a small percentage of the total
classical texts available, it represents a useful and informative selection
that is ideally situated for this initial quantitative investigation into the
Roman diet. These texts were selected for their focus on ancient Roman
food, agriculture, and dietetics, with the acknowledgment that they are a
limited selection and of the potential for expansion in both the number and
types of literary and other (archaeological, inscriptions, artistic) lines of
evidence in future studies. This study drew methodological inspiration
from Alain Touwaide’s database of medicinal plants found in classical
literature. Following in the methodological footsteps of the Touwaide
database, this study expanded upon the preexisting database by
determining and focusing on the various food plants and animals (rather
than strictly medicinal plants) that are mentioned by ancient Roman
authors. Specifically, I referred to the indices and concordances for each
of these texts (both in their original languages and in translation for clarity)
and documented how frequently the texts mentioned each of the 321 food
items that were likely eaten by the ancient Romans (Briggs, 1983;
Dioscorides, 2005; Striegan-Keuntje, 1992).
Table 1: Primary Sources Surveyed
In addition, numerous secondary literature texts were surveyed in
order to gain a more informed perspective on the state of research on the
dietary properties of the ancient Roman diet (Couplan, 1994; Toynbee,
1973). Several particularly informative texts from this survey are
highlighted in Table 2.
In order to accurately analyze the number of times each ancient
author mentions a particular food source, the total number of food names
analyzed in the dataset corresponds to the total number of unique Latin or
Greek food names, rather than to the number of different biological
species that these names may represent. This prevents the over-
representation of certain foods such as acorns, for which there are four
possible species, but only a couple of more generic names in both Latin
(glans) and Greek (balanos or drus), which may refer to any of the four
different oak species. Therefore, the number of times that the word glans
appears in the primary sources is recorded as only a single data point (for
“acorns”) rather than four (one for each oak species) in order to prevent an
overrepresentation of acorns in the literature analysis.
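To make this counting rule concrete, the brief Python sketch below tallies mentions by unique Latin or Greek food name rather than by the candidate species a name might denote; the sample records and the name-to-species map are hypothetical stand-ins for the concordance data actually consulted.

```python
from collections import Counter

# Toy mention records: (food name as written by the ancient author, text).
mentions = [
    ("glans", "Cato"), ("glans", "Pliny"), ("balanos", "Dioscorides"),
    ("triticum", "Columella"), ("triticum", "Cato"),
]

# A generic name may map onto several possible species, but each mention
# is still counted once, keyed by the name itself.
candidate_species = {
    "glans": ["Quercus robur", "Quercus ilex",
              "Quercus pubescens", "Quercus cerris"],
}

counts_by_name = Counter(name for name, _text in mentions)
print(counts_by_name)
# Counter({'glans': 2, 'triticum': 2, 'balanos': 1})
print(len(candidate_species["glans"]))
# 4 possible oaks, yet 'glans' remains a single data point per mention
```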
Table 2: Notable Contributions from Secondary Literature
Biological Species Verification
Floras of Italy, botanical texts and archaeological evidence from
Pompeii were used to identify the plant species found in the Roman diet
according to contemporary Linnaean taxonomy (Pignatti, 1982; Van Wyk,
2005; Jashemski, 2002). Determining the scientific names of the otherwise
generically named food items was a crucial step in ensuring that the
nutritional data used in this study were drawn from geographically
appropriate species whenever possible. In addition, verifying the
biological identities of foods in the Roman diet allowed us to determine
where the plant food items would have originated. Binomial designations
of plants have been verified for accuracy using the Flora Europaea (Tutin
et al., 1980).
Nutritional Information
Quantitative nutritional data for the plant food items included in
this analysis were gathered from the “USDA National Nutrient Database
for Standard Reference,” which draws its information from the “Nutrient
Data Laboratory” of the USDA’s Agricultural Research Service.
Nutritional information on the amount of calories, iron, sodium, protein,
fat, vitamin A, vitamin D, vitamin C, calcium and sugar per 100 g was
compiled for 309 different foods that were likely eaten by the ancient
Romans.
In gathering these data, however, it was not always possible to find
the exact nutritional information for each species. In these cases,
nutritional data from a closely related species was included instead as a
substitute for the missing information according to a procedure that was
also frequently used in antiquity (that is, substituting one species for
another based on availability). For instance, since there is no publicly
available quantitative nutritional data for wild radish (Raphanus
raphanistrum L.), the nutritional data from the closely related cultivated
radish species (Raphanus sativus L.) was substituted as a nutritional
analog. In general, most substitutions of plant or animal species were
limited to other species within the same genus (or in some cases, the same
family) as the desired species. In those instances when no nutritional
information was available for any closely related species of a given food,
data from a species with similar growth habits, exploited anatomical parts,
or secondary metabolites to those of the species with missing nutritional
data were inserted instead. This substitution method was employed
primarily in constructing the original dataset, and does not alter the
presentation of this study’s results.
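A minimal sketch of this substitution step is shown below, assuming a simple dictionary of per-100 g nutrient records keyed by scientific name and a hand-maintained map of analog species; the specific names and values are illustrative placeholders rather than the study's actual dataset.

```python
# Per-100 g nutrient records (toy values), keyed by scientific name.
nutrients = {
    "Raphanus sativus": {"kcal": 16, "vitamin_c_mg": 14.8},
    # "Raphanus raphanistrum" (wild radish) has no public record here.
}

# Hand-curated analogs: species with missing data -> closely related
# species (same genus, or same family / similar growth habit).
analog_of = {
    "Raphanus raphanistrum": "Raphanus sativus",
}

def lookup(species: str) -> dict:
    """Return nutrient data for `species`, falling back to its analog."""
    if species in nutrients:
        return nutrients[species]
    substitute = analog_of.get(species)
    if substitute and substitute in nutrients:
        return nutrients[substitute]
    raise KeyError(f"no nutritional data or analog for {species}")

print(lookup("Raphanus raphanistrum"))  # uses the cultivated radish record
```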
Nutritional information for the meats was taken from the USDA
data on those meat products that most closely resemble the form in which
the meat came off of the animal. For instance, the USDA’s nutritional data
for “Pork, fresh, carcass, separable lean and fat, raw” was chosen over
other potential nutrient datasets such as “Pork, cured, ham, center slice,
country-style, separable lean only, raw” or “Pork, ground, 96% lean / 4%
fat, raw,” as the nutritional information from a less processed meat
product is more likely to resemble the nutritional profile of the meats that
were available to the ancient Romans.
In addition, grass-fed or wild varieties of animals and plants were
used whenever possible in an attempt to further mimic the plant and
animal varieties found in ancient Rome, which were undoubtedly both less
domesticated and less intensively bred or genetically modified compared to most
of our contemporary commodity crops and livestock.
Foods that are composed of a variety of ingredients (such as cake,
bread, beer and aphye) are not included in the nutritional analysis because
the exact ingredients, processing methods, and proportions of ingredients
for each ancient recipe are currently unknown, and therefore the more
general nutritional properties of these food items are also largely
unknown. In addition, wine was not considered in this study, as the
classical texts give no clear indication of its frequency of consumption by
the ancient Romans, and therefore no indication of its possible nutritional
role. Future studies expanding upon this initial quantitative analysis of the
Roman diet could significantly improve our understanding of ancient
Roman nutrition by considering the dietary role of these additional
processed foods such as beer, wine, cakes, and garum.
Nutrition Guidelines
The nutritional analysis in this study focuses on 10 of the most
important and essential nutrients required for proper human nutrition:
calories/energy, protein, fat, sugars, calcium, iron, sodium, vitamin C,
vitamin A, and vitamin D.
This study referred to the World Health Organization (WHO) and
the Food and Agriculture Organization (FAO) of the United Nations’ joint
recommended daily intake data for vitamins and minerals as the baseline
recommended daily amount for the following nutrients: calcium, iron,
vitamin C, vitamin A, and vitamin D (WHO and FAO, 2004). In order to
compare the Roman diet’s nutritional adequacy for both men and women,
our analysis included information on the daily intake required for both
adult women (19-50) and adult men (19-65). In calculating the daily
recommended amount of iron based on the WHO’s data, the
recommended amounts for the middle values of iron bioavailability (12%
and 10%) were averaged, as information on the bioavailability of the types
of iron in the Roman food sources was unavailable.
Besides using FAO and WHO guidelines, this study referred to the
recommended daily intake values for protein, fat, sugars, sodium, and
calories given by the Confederation of the Food and Drink Industries of
the European Union’s (CIAA) Guideline Daily Amounts (GDA) (2010).
Nutritional Analysis of Diet
Because definitive quantitative data on the amount of food
consumed by ancient Romans is unavailable, studies on Roman nutrition
must instead rely on the information that is known, such as the amount of
grain provisioned to slaves or soldiers, and relate it to data about
contemporary food consumption patterns. Along these lines, this study
analyzed the ancient Roman diet using data from the FAO about the
amount and proportions of foodstuffs annually consumed by contemporary
Mediterranean people. Because the FAO divides diets into categories such
as cereals, meats, and vegetable oils, it was possible to estimate the
relative amount of nutrients that ancient Romans would have gained from
each of these food groups by inserting food items from ancient diets rather
than those from modern diets into the appropriate culinary categories.
Specifically, this study referred to the FAO’s data regarding the
total amount of food available for consumption in Spain, Italy, Greece,
and Turkey during the year 1961 (which is the earliest year in which the
FAO collected food consumption data). This information was further
broken down into the total amount of cereals, starchy roots, sugar crops,
sugar and sweeteners, pulses, tree nuts, oil crops, vegetable oils,
vegetables, fruits, stimulants, spices, alcoholic beverages, meat, offal,
animal fats, eggs, milk, fish and seafood, other aquatic products, and
miscellaneous foods. The total kilograms of food available per person per
year for each of these countries was then averaged in order to get a general
idea of the amount and types of foods consumed by people in the greater
Mediterranean. This contemporary information on food consumption
provides a baseline idea of the total quantity of food consumed by a single
person in a year, as well as how much of each type of food (cereal, meat,
fruit, etc.) people in the Mediterranean consume relative to other types of
food.
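The aggregation described here amounts to averaging the FAO per-capita food-supply figures for the four countries within each food category. A hedged Python sketch of that step follows; the country figures are invented placeholders rather than actual FAOSTAT values.

```python
# Average 1961 per-capita food supply (kg/person/year) across four
# Mediterranean countries, by FAO food category. Values are placeholders.
fao_1961 = {
    "Italy":  {"cereals": 180.0, "vegetables": 140.0, "pulses": 5.0},
    "Spain":  {"cereals": 160.0, "vegetables": 120.0, "pulses": 7.0},
    "Greece": {"cereals": 170.0, "vegetables": 150.0, "pulses": 8.0},
    "Turkey": {"cereals": 200.0, "vegetables": 110.0, "pulses": 6.0},
}

categories = {cat for country in fao_1961.values() for cat in country}
mediterranean_avg = {
    cat: sum(country.get(cat, 0.0) for country in fao_1961.values())
         / len(fao_1961)
    for cat in categories
}

print(mediterranean_avg)
# e.g. {'cereals': 177.5, 'vegetables': 130.0, 'pulses': 6.5}
```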
There are several key problems inherent in analyzing ancient diets
using modern dietary data. While human diets exhibit strong cultural
traditions and resilience in their basic components, they also are known to
rapidly change to incorporate new foodstuffs made possible by new
technology, trade, or other factors. In addition, during the last century, and
particularly the last fifty years, human diets have undergone remarkably
drastic changes as a result of the increased industrialization of food
production and innovative food technologies, as well as unprecedented
levels of global trade in agricultural commodities. Therefore, few people
today continue to eat the “traditional” foodstuffs that they did several
hundred years or even several decades ago. A second difficulty arises from
the differences between the dietary emphases of different food categories
(such as meat, dairy, fruits, etc.) of modern and ancient peoples. For
instance, contemporary people tend to eat more meat and sugar than those
of the past, since, unlike in ancient times, both of these food items are no
longer expensive, rare, or considered to be luxury foods.
These considerations had to be taken into account while conducting this
analysis of ancient nutrition using contemporary data about consumption
levels.
Developing a Model Diet: Grain Consumption as a Baseline
Fortunately, the dietary shifts that have occurred since antiquity
can be corrected for using the few known quantities of foods that were
likely consumed by the ancient Romans, such as the amount of grain. The
baseline amount of cereals consumed in the modern Mediterranean diet
(according to FAO statistics) was adjusted in order to match the levels of
ancient Roman grain consumption suggested by previous studies
(Schneider, 2006; Garnsey, 1998; Foxhall and Forbes, 1982). As grain
consumption is the only food category with clear ancient textual evidence
of its consumption levels, this known quantity of ancient grain
consumption provides the foundation upon which this study builds the rest
of its nutritional reconstruction of the ancient Roman diet.
Based on past analyses of the Roman diet using Cato’s
aforementioned writings (Hooper and Ash, 1934) on grain allotments, it
has been suggested that the amount of grain allocated to a typical Roman
soldier or slave probably fell somewhere between 230 - 330 kg per year
(Schneider, 2006: 916). These known quantities of high and low limits of
grain consumption from antiquity were incorporated into the
contemporary food-intake dataset as the high and low estimates of the
relative contribution of grains to the Roman diet.
While these amounts of grain may seem high compared to modern
levels of consumption, they must be considered within the context of the
heavily grain-based ancient diet. According to Pearson (1997:15), during
the later Carolingian era, one Anglo-Saxon guideline suggested providing
1.5-2 kg of bread per day, as well as additional meat and vegetables. These
amounts of grain rations are far higher than the intake amounts suggested
by classical literature (Cato; Varro; Columella; Pliny; Apicius; Palladius;
and Anthimus) and yet, there is clear documentation from throughout the
early Middle Ages of a variety of monastic and lay rations of bread
allotments ranging from 330 g per day up to 1,700 g per day (Pearson,
1997). These medieval bread rations may help to better contextualize the
low and high amounts of ancient Roman grain consumption suggested in
this study. Two-hundred and thirty kilograms of wheat per year averages
out to about 630 g per day, while 330 kg per year averages out to 870 g
per day. According to the USDA Nutrient Data Laboratory, 100 g of
durum wheat contains 339 kcal. Therefore, these high and low amounts of
grain consumption would provide between about 2136 kcal and 2929 kcal
per day, which are acceptable amounts of caloric intake for active males
relying on a cereal-based diet.
Since the ancient Roman diet contained a higher proportion of
cereal than the modern Mediterranean diet, this substitution increases, and
thereby skews the total mass of food consumed per person per year. To
correct for this, the amounts of all non-cereal food items were decreased
proportionally so that the total kilograms of food consumed per year
remained consistent with the original pre-adjustment amounts. In addition,
after making this correction, the proportions of all non-cereal foods in the
model diet remained the same as in the original pre-corrected model. This
created a nutritional model for the ancient Roman diet based on a realistic
amount of total food consumed per year in the contemporary
Mediterranean diet. In addition, this method conserved realistic
proportions between the different types of non-cereal foods while also
enabling the simulation of a cereal-heavy diet that more closely resembles
that of the ancient Romans (rather than the modern Mediterranean people).
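The adjustment described in this paragraph is a proportional rescaling: cereals are pinned to the ancient figure, and every non-cereal category is scaled down so the annual total is unchanged. A hedged Python sketch of that step follows, with placeholder quantities; the dictionary of category amounts stands in for the averaged FAO data described above.

```python
def substitute_cereals(diet_kg: dict[str, float],
                       ancient_cereal_kg: float) -> dict[str, float]:
    """Pin cereals at the ancient level and rescale the other categories
    so the total kg of food per person per year is preserved."""
    total = sum(diet_kg.values())
    non_cereal_total = total - diet_kg["cereals"]
    remaining = total - ancient_cereal_kg   # mass left for non-cereal foods
    scale = remaining / non_cereal_total
    return {cat: (ancient_cereal_kg if cat == "cereals" else kg * scale)
            for cat, kg in diet_kg.items()}

# Placeholder modern Mediterranean averages (kg/person/year).
modern = {"cereals": 180.0, "vegetables": 200.0, "fruit": 120.0,
          "milk": 100.0, "meat": 40.0, "pulses": 7.0, "fish": 18.0,
          "vegetable oils": 15.0}

ancient_high = substitute_cereals(modern, ancient_cereal_kg=330.0)
print(ancient_high)
# Same total mass as `modern`; non-cereal categories keep their
# relative proportions while cereals rise to the ancient level.
```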
An analysis of our results after these initial calculations revealed
that the introduction of higher amounts of cereals (which are calorie-dense
foods) actually altered the dataset to be unrealistically high in calories. To
adjust for this, the total amount of food consumed per person per year was
recalculated, this time including only those items that are known from
primary sources and archaeological evidence to have been commonly
consumed in the ancient Roman diet. This left us with a dataset containing
information on the annual consumption of the "core" food categories in
ancient Roman diets, that is, cereals, pulses, tree nuts, oil crops (olives,
sesame seeds, etc.), vegetable oil, vegetables and starchy roots, fish and
seafood, spices, and eggs. Notably, milk, meat and fruit were taken out of
the “core” dataset, as the classical texts (Cato; Varro; Columella; Pliny;
Apicius; Palladius; and Anthimus) suggest that the ancient Romans
consumed these foods only on an irregular or seasonal
basis. In addition, while it is well known that all Roman citizens,
regardless of class, had access to meat during public festivals throughout
the year, it is not well known exactly how many animals were slaughtered
or how the meat was distributed at each of these festivals, and therefore
what the nutritional role of meat would have been for the Roman people.
Therefore, this initial study does not consider meat as being a significant
part of the Roman diet, as it is yet unclear how regularly or in what
quantities meat was actually consumed.
Once the core categories of food in the Roman diet were
established, the amount of food likely consumed by a Roman person from
each of these categories in an average year was calculated. By combining
this information (on total mass of food consumed) with information on the
average nutritional properties (that is the amount of calcium, fat, etc. per
100 grams) of each food, it was possible to create a general model of the
dietary profiles of people who consume the types of foods that the Romans
ate in the same relative amounts and proportions (as determined using the
previously discussed model). In addition, the high and low bounds of grain
consumption also provide analogs for the nutritional profiles of both the
lower and middle-to-upper classes in Roman society, as lower class
individuals would have relied more heavily on grain, while those of upper
and middle classes likely enjoyed a more varied diet.
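To make this final step explicit, the Python sketch below multiplies each core category's modeled annual mass by a representative per-100 g nutrient profile and converts the totals to daily amounts, which can then be compared against the WHO/FAO and CIAA guideline values; all numbers in the sketch are placeholders rather than the study's compiled USDA averages or results.

```python
# Representative nutrients per 100 g for each core food category
# (placeholder values, not the study's compiled USDA averages).
per_100g = {
    "cereals":    {"kcal": 339, "protein_g": 13.7, "iron_mg": 3.5},
    "pulses":     {"kcal": 340, "protein_g": 22.0, "iron_mg": 6.0},
    "vegetables": {"kcal": 30,  "protein_g": 1.5,  "iron_mg": 0.8},
}

# Modeled annual consumption (kg/person/year) after the cereal adjustment.
annual_kg = {"cereals": 330.0, "pulses": 6.0, "vegetables": 140.0}

def daily_profile(annual_kg, per_100g):
    """Sum nutrients over all categories and convert to per-day amounts."""
    totals = {}
    for cat, kg in annual_kg.items():
        portions = kg * 1000 / 100          # number of 100 g portions per year
        for nutrient, amount in per_100g[cat].items():
            totals[nutrient] = totals.get(nutrient, 0.0) + amount * portions
    return {nutrient: value / 365 for nutrient, value in totals.items()}

profile = daily_profile(annual_kg, per_100g)
print(profile)   # daily kcal, protein (g), and iron (mg) for the model diet
# These daily totals can then be compared with the WHO/FAO and CIAA
# guideline daily amounts for adult men and women.
```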
Results
Foods Mentioned in Primary Sources
Based on this survey of seven primary sources that discuss food
and agriculture in the Roman Empire, it is clear that ancient Roman
authors mentioned some types of food far more frequently than others.
This information has been essential for determining and confirming the
types of foods eaten by the ancient Romans, and thus the types of foods
included in this study’s nutritional analysis.
Table 3: Mentions of Unique Foods in Primary Sources
(listed chronologically)
The total number of different foods mentioned by the classical
Roman authors surveyed in this study varies greatly (see Table 3).
Altogether they mention 225 different foods, though as previously
mentioned, some of these Greek and Latin names of foods may correspond
to multiple biological species. The top ten foods mentioned in the classical
texts that were surveyed in this study are indicated in Table 4. The wide
variation in the frequency of literary references per food is apparent even
in these top ten items. Furthermore, only 26 of the 225 different food
items found in the ancient texts are mentioned over 100 times; by contrast,
44 different foods are mentioned five times or fewer. This indicates that
while ancient authors clearly had knowledge of a wide repertoire of
culinary options, they in fact focused their discussion of food and dietetics
on a more limited number of foods.
Table 4: Top 10 Food Items Mentioned in Seven Classical Texts
Nutrition
Using both high and low estimates of ancient Roman cereal
consumption (230 and 330 kg per year respectively), this study generated
two different profiles of the nutritional properties of ancient Roman diets
depending on whether one assumes it contained higher or lower
proportions of cereal compared to the overall amount of food consumed
(Table 5). Assuming the Romans consumed the higher level of wheat
intake (330 kg per year), and using the quantitative methods previously
outlined, this study found that the total mass of food in the Roman diet
following this model would have been 65% cereals, 28% vegetables and
starchy roots, 2% fish and seafood, 1.8% vegetable oils, 1.1% pulses,
0.9% eggs, 0.8% tree nuts, 0.4% oil crops, and 0.1% spices. Assuming a
lower level of wheat consumption (230 kg per year), these values shift so
that the dietary mass is 50.4% cereals, 39.6% vegetables and starchy roots,
2.8% fish and seafood, 2.5% vegetable oils, 1.6% pulses, 1.2% tree nuts,
1.2% eggs, 0.6% oil crops, and 0.2% spices.
Table 5: Amount of Nutrients in Roman Diet Based on Daily
*Low and High designations refer to models created based on the low and high estimates of the
contribution of wheat in the Roman diet, 230kg and 330kg per year respectively.
Discussion
Composition of Roman Diet
The Romans primarily ate cereals and legumes, often
supplemented with vegetables, cheese, or meat and covered with sauces
made out of fermented fish, vinegar, honey, and various herbs and spices
(Schneider, 2006: 919). While they had some refrigeration, much of their
diet depended on which foods were locally and seasonally available. Meat
and fish were luxuries primarily reserved for the upper and upper-middle
classes, although lower class Romans sometimes obtained low-quality
meat from either public sacrifices or urban cookhouses (Herz, 2006).
These results are visually reflected in pyramids (Figures 1a and 1b, see
Appendix), which show the relative importance and nutritional
contributions of various foods to the ancient Roman diet (the most
frequently consumed foods are shown at the bottom of each pyramid,
while the least frequently consumed foods are shown near the top). Since
meats and fish were more often eaten as luxuries than as everyday foods,
the protein from pulses played a fairly significant nutritional role for the
ancient Romans. Garnsey (1998) even suggests that lentils, broad beans,
and chickpeas provided most of the non-cereal calories and protein for the
Roman people.
While the classical texts do not clearly indicate the amount of meat
commonly consumed in ancient Rome, archaeological evidence suggests
that Roman people across most socioeconomic classes consistently
consumed at least a limited quantity of meat. Cucina and colleagues
analyzed 77 skeletons from the Necropolis of Vallerano, a suburb of Rome
and found a low frequency of oral pathologies, which they correlate with a
diet that includes meat and “low-calorific” crops, but which is “seemingly
low in refined carbohydrates” (Cucina et al, 2006: 104). In addition,
Cucina and colleagues suggest that based on the presumed social classes
of these skeletons, “their diet mainly consisted of cereals and low-cost
goods,” while the more valuable foods (such as meat, dairy, spices, etc.)
were instead reserved to sell at the nearby markets (Cucina et al, 2006:
106). In conclusion, these authors suggest that the lack of oral pathologies
in the skeletons examined at Vallerano is not inconsistent with a primarily
vegetarian diet, as long as grains made up a lower proportion of the diet
compared to other vegetable foods (Cucina et al, 2006: 115).
Results from Cucina and colleagues (2006) rely on osteological
data to support the supposition that populations in Roman suburbs, which
primarily grew perishable crops rather than grain, would have eaten less
grain than their urban and rural counterparts. Although Cucina and
colleagues’ (2006) osteological study provides important evidence of the
nutritional profiles of one community of Roman people, their results
cannot be generalized to explain Roman nutrition as a whole. As this pilot
study attempts to quantify the nutritional properties of Roman diets in
general, it relies less on specific osteological case studies and instead more
on both classical literature and contemporary nutritional information.
Future studies on ancient nutrition should incorporate both archaeological
and osteological evidence in order to more clearly examine how the
nutritional profiles of ancient Romans described in classical texts differ
from those suggested by alternative lines of evidence.
In addition to the staple foodstuffs of grains, pulses, and
occasionally meats, the ancient Romans also enjoyed a wide variety of
fruits, vegetables, and exotic spices and condiments. While fresh fruits
were probably only seasonally available in ancient Rome, Columella
speaks to the important nutritional role of dried fruits (such as apples, figs,
and pears) in rural people’s winter diets (Columella Book 12, XIV; noted
in Brothwell and Brothwell, 1998: 146). These dried fruits likely provided
both essential vitamins and small amounts of sugar for the Roman people
who had access to them. Contrary to Schneider’s (2006: 918) assertion
that Cato provided his slaves with both figs and preserved olives, Cato
does not in fact mention that figs were given to his slaves. Instead, he
clearly discusses providing cereals and preserved olives to his slaves (Cato
56, 58), and limits his remarks on figs to methods of cultivation,
harvesting, and preserving them, rather than to their role in the diets of
different members of Roman society (see esp. Cato 99, 143, 8.1, 94). Cato
leaves us unsure about whether or not ancient Roman landowners
provided figs for their slaves.
While fruits may have been nutritionally important to certain
classes of ancient Roman people, the results of this study’s classical
literature review support Schneider’s (2006: 918) assertion that neither
wild animals nor wild plants are likely to have played large roles in the
nutritional status of the ancient Romans. Cultivated tree nuts however,
may have played a small but significant role in the Roman diet. This is
because while they were usually only eaten as condiments or as dessert,
they are exceptionally high in calories, fat, and protein, meaning that they
can still affect human nutrition even when consumed in small amounts
(Brothwell and Brothwell 1998: 149; USDA Nutrition Database).
A variety of sodium-filled condiments may also have had a
significant impact on ancient Roman nutrition. The ancient Romans seem
to have enjoyed covering their foods in complex sauces, made with many
different ingredients and oftentimes possessing fairly distinct flavors. One
sauce for oysters and shellfish from Apicius for example, calls for
“pepper, lovage, parsley, dry mint, bay leaf, malabathrum [leaves of
Cinnamomum tamala], plenty of cumin, honey, vinegar, and liquamen
[fermented fish sauce]” (Brothwell and Brothwell 1998: 66). In general
however, the main sauce enjoyed by the ancient Romans was garum, a
fermented and salted fish sauce, which they applied liberally to savory and
sweet dishes alike. Smriga and colleagues (2010) analyzed the nutritional
properties of garum found in residues left on pots from the “Garum Shop”
in Pompeii. They found that the ancient garum residue contained amino
acids in amounts comparable to those found in modem Southeast Asian
and southern Italian fish sauces, with free glutamate, glycine, and alanine
providing most of the flavor and amino acid content of the sauce (Smriga
et al., 2010: 442). While the high levels of amino acids and salt in garum
undeniably played an important role in Roman nutrition and digestive
processes, classical texts do not clearly discuss the quantities in which
garum was consumed by ancient Roman peoples, which therefore prevents
us from quantifying its nutritional role in this initial study.
Nutrition
This study outlines nutritional properties of a Roman diet in which
individuals maintain a continually high level of consumption of both
cereals and supplementary non-cereal foods. It is unlikely, however, that
all (or even most) Romans had access to this wide variety of food items in
such quantities at all times of year (Schneider 2006: 917). Furthermore,
Garnsey (1998: 240) suggests that while most lower class Romans could
expect to supplement their grains with some poor quality wine, legumes
and olive oil, and even sometimes with vegetables, fish, or fish-sauce,
other animal products such as meat, eggs and dairy were rarely obtainable
for these people. In addition, while Table 6 includes information on the
nutritional intake for both men and women, these numbers assume that
men and women had access to the same dietary quality and quantities. Due
to various cultural considerations and ancient medical theories however, it
is likely that ancient Roman women would not have eaten as much food as
men on a daily basis. Finally, this study does not take longevity into
consideration as a factor in and byproduct of ancient nutritional intake.
Neither lifespan nor differences in nutritional intake across the various
phases of life are indicated in the classical texts, and thus the relationship
between nutrition and longevity could not be taken into account in this
study. Future studies that examine osteological evidence in conjunction
with ancient literary sources may be able to better understand and examine
these issues.
The high proportion of cereals and legumes in the ancient Roman
diet provided the Romans with substantial amounts of calories, protein, calcium,
and iron. Because their staple foods are deficient in vitamins A, C, and D,
the Romans likely obtained these nutrients from seasonally available fruits
and vegetables, although spices may also have played a role in providing
these nutrients. This analysis indicates that their diet was fairly low in
vitamin D, sodium, and sugar. It is to be expected, however, that the
ancient Roman people's high sun exposure and proximity to the sea also
had positive health effects, conferring vitamin D and iodine,
respectively.
One unanticipated conclusion of this study is that in the
quantitative model, pulses make up a much lower percentage of the
ancient Roman diet than might otherwise be expected based on qualitative
information from primary literary sources. This disparity may be
attributable to the fact that concrete evidence regarding the actual amount
of legumes consumed by the ancient Romans is largely uncertain and
unavailable. Therefore, this initial study instead relied on the known
amount of legumes consumed in modern Mediterranean diets in order to
calculate the nutritional role of legumes in the ancient Roman diet. This
difference may have created the disparity in this study’s conclusions, as
legumes were likely not eaten in the same dietary proportions by both the
ancient Romans and modern Mediterranean people. As quantification
methods continue to be explored and improved upon in future studies,
disparities such as this one seen with legumes will hopefully be more
clearly resolved and understood.
Despite the aforementioned problems associated with using
contemporary food consumption data as a model for ancient diets, this
quantitative method of inquiry can be effectively employed to create a
general model of the core nutritional profile of ancient Roman diets. The
data gathered using this type of quantitative modeling should be
considered only while keeping these conceptual issues in mind and
carefully considering the available qualitative information regarding both
ancient and contemporary diets.
Conclusions
As has been suggested by previous studies, the overall nutritional
properties of the Roman diet are largely dependent on its three main
components: grains, wine, and olives (or olive oil) (Schneider, 2006: 919).
This study takes our prior knowledge of the three basic components in the
Roman diet and broadens our analytical focus to encompass the biological
and nutritional properties of all of the possible food items consumed by
ancient Romans. This method has not been free of problems, however, as
other than quantities for the amount of grain that was given to slaves and
soldiers per year, there is little quantitative data from ancient Rome on
how much of a given food item people tended to consume in a given time
period (Garnsey, 1998). While quantitative methods for investigating
ancient nutrition may still be somewhat imprecise and rely on various
estimates and assumptions about ancient food consumption, they do
provide us with a useful overview and general profile of the probable
nutritional and biological properties of the Roman diet as a whole.
This study found that the core constituents of the Roman diet
(cereals and legumes) meet many of the daily nutritional needs of men and
women when consumed in the amounts presumably eaten in ancient
Rome. The Romans likely met their additional nutritional needs using a
wide variety of more sporadically consumed foods such as meats, fruits,
and spices. The nutritional impacts of meat, wine, garum and other
specialty food items in the Roman diet should be further explored in future
studies, perhaps by incorporating additional archaeological and
osteological evidence. In addition, further examining how ancient diets
varied throughout the Roman Empire could perhaps lead us to better
understand both the origins and spread of Mediterranean dietary traditions
as well as how nutritional profiles may have varied throughout the empire.
This investigation also found that neither the nutritional properties
nor the frequency of consumption for a given food seem to have been the
determining factor for how often it was mentioned by classical authors.
This initial finding of a lack of correlation between these variables is
intriguing and warrants further investigation into uncovering why the
ancient Roman authors mentioned different food items in such varying
frequencies.
Although we may never be able to know exactly what the ancient
Romans ate, we should continue to attempt to quantify and better
reconstruct the nutritional properties and composition of their diet through
further investigations. This initial study provides one model for
introducing such quantification methods into research on ancient diets.
References
Andre, Jacques. Apicius, L'art culinaire. Texte etabli, traduit et commente
- (Collection des Universites de France). Paris: Les Belles Lettres,
1987.
Andre, Jacques. L 'alimentation et la cuisine a Rome. Paris: Les Belles
Lettres, 1981.
Andre, Jacques. Les noms de plantes dans la Rome antique. Paris: Les
Belles Lettres, 1985.
Briggs, W.W. Jr. Concordantia in Varronis Libros De Re Rustica.
Hildesheim: Georg Olms Verlag, 1983.
Brothwell, Don and Patricia Brothwell. Food in Antiquity: A Survey of the
Diet of Early Peoples. Expanded edition. Baltimore: Johns
Hopkins University Press, 1998.
Confederation of the food and drink industries of the EU (CIAA).
Guideline Daily Amounts (GDAs) - GDAs Explained, 2010.
http://gda.ciaa.eu/asp2/gdas_portions_rationale.asp?doc_id=127.
Couplan, Francois and Eva Styner. Guide des plantes sauvages
comestibles et toxiques. Paris: Delachaux et Niestle, 1994.
Cucina, Andrea, Rita Vargiu, Domenico Mancinelli, R. Ricci, Elena
Santandrea, Paola Catalano, and Alfredo Coppa. “The Necropolis
of Vallerano (Rome, 2nd-3rd Century AD): An Anthropological
Perspective on the Ancient Romans in the Suburbium." in
International Journal of Osteoarchaeology 16: 104-117, 2006.
Dalby, Andrew. Food in the Ancient World: From A to Z. London:
Routledge, 2003.
Dioscorides, Pedanius. De materia medica. Translated by Lily Y. Beck.
Hildesheim: Olms-Weidmann, 2005.
Food and Agriculture Organization of the United Nations (FAO).
FAOSTAT Food Balance Sheets (Italy, Turkey, Greece, and
Spain). http://faostat.fao.org/site/368/default.aspx#ancor.
Foxhall, Lin and Forbes, Hamish A. "Sitometreia: the role of grain as a
staple food in classical antiquity.” in Chiron 12: 41-90, 1982.
Garnsey, Peter. Cities, Peasants and Food in Classical Antiquity: Essays
in Social and Economic History. Walter Scheidel, ed. Cambridge:
Cambridge University Press, 1998.
Herz, Peter. “Meat, consumption of,” in Hubert Cancik and Helmuth
Schneider (eds.). Brill's New Pauly. Encyclopedia of the Ancient
World, vol. 8. Leiden and Boston: Brill, 2006, cols. 535-537.
Hooper, William Davis. Marcus Porcius Cato, On Agriculture. Marcus
Terentius Varro, On Agriculture. With an English Translation.
Revised by Harrison Boyd Ash (Loeb Classical Library 283).
Cambridge MA: Harvard University Press, and London: William
Heinemann, 1934.
Jashemski, Wilhelmina and Frederick Meyer eds. The Natural History of
Pompeii. Cambridge: Cambridge University Press, 2002.
Keys, Ancel. Coronary heart disease in seven countries. Circulation 41
(suppl. 1): 1-211, 1970.
Paolucci, Paula. Anthimi epistulae de observatione ciborum ad
Theodoricum regem Francorum Concordantiae. Hildesheim: Olms-
Weidmann, 2003.
Pearson, Kathy. “Nutrition and the Early-Medieval Diet.” in Speculum
72(1): 1-32, 1997.
Pignatti, Sandro. Flora d'Italia. Bologna: Edagricole, 1982.
Rackham, Harris, William Henry Samuel Jones, and David Eichholz,
Pliny, Natural History, with an English Translation 10 vols. (Loeb
Classical Library 330, 352-353, 370-371, 392-394, 418-419).
Cambridge MA and London: Harvard University Press, 1938-
1962.
Rodgers, Robert Howard. Columellae res rustica, Incerti auctoris, Liber
de arboribus. Recognovit brevique adnotatione critica instruxit -
(Scriptorum classicorum bibliotheca Oxoniensis). Oxonii: E
Typographeo Clarendoniano, 2010.
Rodgers, Robert Howard. Palladius, Opus agriculturae, de veterinaria
medicina, de insitione. Edidit - (Bibliotheca Scriptorum
Graecorum et Latinorum Teubneriana). Leipzig: B. G. Teubner,
1975.
Schneider, Helmuth. “Nutrition,” in Hubert Cancik and Helmuth
Schneider (eds.), Brill's New Pauly. Encyclopedia of the Ancient
World, vol. 9. Leiden and Boston: Brill, 2006, cols. 914-921.
Smriga, Miro, Toshimi Mizukoshi, Daigo Iwahata, Sachise Eto, Hiroshi
Miyano, Takeshi Kimura, Robert I. Curtis. "Amino acids and
minerals in ancient remnants of fish sauce (garum) sampled in the
“Garum Shop” in Pompeii, Italy.” in Journal of Food Composition
and Analysis 23: 442-446, 2010.
Striegan-Keuntje, Ilona. Concordantia et Index in Apicium. Hildesheim:
Olms-Weidmann, 1992.
Thompson, D’Arcy Wentworth. A Glossary of Greek Fishes. London:
Oxford University Press, 1947.
Touwaide, Alain. Medicinal Plants of Antiquity. Unpublished
computerized database.
Toynbee, Jocelyn M.C. Animals in Roman Life and Art. Ithaca, New York:
Cornell University Press, 1973.
Tutin, Thomas Gaskell, Vernon Hilton Heywood, Norman Alan Burges,
David M. Moore, David Henriques Valentine, Stuart Max Walters,
and David Allardice Webb. Flora Europaea. Cambridge:
Cambridge University Press, 1980.
USDA Agricultural Research Service, Nutrient Data Laboratory. USDA
National Nutrient Database for Standard Reference.
http://www.nal.usda.gov/fnic/foodcomp/search/.
Van Wyk, Ben-Erik. Food plants of the world: an illustrated guide.
Portland, Oregon: Timber Press, 2005.
Varro, Marcus Terentius. On Agriculture, with an English translation by
William Davis Hooper, revised by Harrison Boyd Ash, [Loeb
Classical Library No. 283] Cambridge MA: Harvard University
Press, 1935, pp. 159-529.
World Health Organization (WHO) and FAO. Vitamin and mineral
requirements in human nutrition: Second edition, 2004.
http://whqlibdoc.who.int/publications/2004/9241546123.pdf.
Acknowledgments
I wish to thank the Smithsonian Institution, National Museum of Natural
History, for my selection in the first session of Natural History Research
Experience (NHRE) Interns during the summer of 2010. I am grateful to
my supervisor, Dr. Alain Touwaide, Historian of Sciences at the National
Museum of Natural History, for having suggested that I do research on the
history of food in antiquity, and for his scientific direction, ongoing advice,
patience, and support during both the research and writing phases of this
study. His constant assistance and expertise have helped develop this
study into its final form. In addition, Emanuela Appetiti, Scientific
Program Specialist at the National Museum of Natural History,
contributed to making my stay at the Smithsonian productive and enjoyable,
and provided stimulating views on my research, including the writing of
this article. Finally, I express my gratitude to the anonymous reviewers for
their constructive suggestions and thoughtful comments, which have
enabled me to address earlier shortcomings and have helped me better
develop some components of an earlier version of this article. As always,
any remaining imperfections are mine.
Appendix:
Figure 1a: Suggested Relative Role of Foods in Roman Diet by Food Category
[Pie chart; the categories shown are Cereals; Vegetables and Starchy Roots; Fruit; Pulses; Meat, Eggs & Dairy; Tree Nuts; Herbs & Spices; and Mulsum, Wine, Must, Honey, Vinegar]
Figure 1b: Suggested Relative Role of Foods in Roman Diet by Name
[Pie chart naming the individual foods within each category: cereals (durum wheat, barley, sorghum, einkorn, emmer wheat, millet, oats, rice, rye, and others), pulses (fava bean, cowpea, field pea, chickpea, lentil, lupin, several vetches, and others), numerous vegetables and starchy roots, fruits, meats, eggs, milk and cheese, and condiments such as wine, must, honey, and vinegar]
The Cosmic Microwave Background
Songs in the Universe
Sethanne Howard
USNO, retired
Abstract
The Cosmic Microwave Background gives astronomers a wealth of
information about our early universe and helps explain why it has structure.
Yet the details are a bit esoteric. Reviewed here are the basic concepts
and discoveries of the Cosmic Microwave Background from an
astronomer’s point of view.
Introduction
The Cosmic Microwave Background (CMB) is a hot (so to speak)
topic in astronomy. Sometimes called ‘wrinkles in space-time’, the CMB
tells us about the birth and evolution of the universe we see. It maps the
earliest moments of our universe. Recent CMB experiments have firmly
established the Big Bang Model as the leading theory in cosmology. Yet,
despite its fascination, the CMB can be tricky to decode. So let us look at
some concepts that will help us understand these experiments and what
they tell us.
We call the electromagnetic radiation that still lingers from the
very early universe the cosmic microwave background radiation - cosmic
for its origin in the very early universe, microwave because it shows up in
the microwave section (1.9 mm) of the electromagnetic spectrum, and
background because it fills all of space.
Several people had predicted the existence of the CMB as early as
the 1940s. In 1941 the chemist Gerhard Herzberg almost discovered it in a
stellar spectrum. The CMB’s serendipitous discovery in 1964 by radio
astronomers Arno Penzias and Robert Wilson earned them the 1978 Nobel
Prize. They were using a radio telescope (antenna) to scan for signals
bouncing off the Echo balloon satellites. They went to great lengths to track
down a ubiquitous 1.9 mm ‘noise’ in their data. They even removed the
“white dielectric material” left in the antenna horn by nesting pigeons.
They also removed the pigeons.
These CMB photons are all around us. They are so weak, however,
that it takes very sensitive microwave detectors to detect and measure
them. Optical telescopes will not do the job. The space between stars and
galaxies (the background) is dark to an optical telescope. But a sufficiently
sensitive radio telescope shows a faint background glow, almost exactly
the same in all directions, that is not associated with any star, galaxy, or
other object. This glow is strongest in the microwave region of the radio
spectrum (that 1.9 mm wavelength).
The CMB fills the universe and can be detected everywhere we
look. In fact, if we could see microwaves, the entire sky would glow with
a brightness that is astonishingly uniform in every direction. The
temperature (associated with the peak in the energy distribution at 1.9
mm) of this background is uniform to better than one part in a hundred
thousand (i.e., the temperature variation on different angular patches of the
sky is less than 1 part in 100,000). This uniformity is one compelling
reason to interpret the radiation as remnant heat from the Big Bang; it is
very difficult to imagine a local source of radiation that is this uniform.
A perfectly uniform background distribution will not explain why
our universe looks the way it does. The universe contains clumps of matter
of all sizes from atoms to galaxies. Such rich structure could not form
from a perfectly smooth background. We need to have some wrinkles in
the otherwise smooth background. If we have small wrinkles (bumps,
ripples) or hills and valleys early in the universe, matter will tend to fall
into the valleys, eventually producing dense regions that become the sites
of galaxies. Matter attracts matter, so these ‘valleys’ get denser, eventually
coalescing into galaxies and clusters of galaxies. Figure 1 (an idealization)
indicates the concept.
Figure 1. The top figure shows conceptual hills and valleys. The bottom figure shows the
top view of the same thing where the grey scale coding refers to the density of matter
(dark regions have more matter, light regions less).
As we look at results from CMB experiments, then, we will expect
to see the observed smooth distribution and hope for some tiny amount of
rippling that will resemble the bottom of Figure 1.
To interpret these CMB experiments properly, we need to look at
the origin of the CMB. The action begins at the very early universe. The
Big Bang Model predicts (trust me on this - the actual theory is really
messy) that the CMB originates from a time just a mere 380,000 years
after the Big Bang. Let us just say that everything starts with the Big
Bang (of course). The vast majority of astronomers use the Big Bang
Model to describe the origin and evolution of our universe. To learn how
the CMB came to be, we, like the astronomers, need to start with the Big
Bang.
The Big Bang
The Big Bang Model rests on two theoretical pillars:
1. The General Theory of Relativity. In 1916 Einstein proposed his
General Theory of Relativity (GTR) as a new theory of gravity.
Gravity became, not the gravitational field of Isaac Newton, but a
distortion of space and time itself. Physicist John Wheeler put it
well when he said "Matter tells space how to warp, and warped
space tells matter how to move." The theory continues to pass a
series of ever more rigorous tests.
2. The Cosmological Principle. We assume matter in the universe is
homogeneous and isotropic when averaged over very large scales.
This assumption is called the Cosmological Principle. It is the
simplest assumption to make - that if you viewed the contents of
the universe with sufficiently poor vision, they will appear roughly
the same everywhere and in every direction. A homogeneous
universe contains the same stuff regardless in which direction you
look. It is well mixed (aka homogenized milk). The physical
conditions are the same at every place. An isotropic universe looks
the same regardless of which direction you look. You cannot
distinguish one direction from another. If the universe looks flat
over there then it must look flat over here. This assumption is
tested continuously as we observe the distribution of galaxies on
ever larger scales.
When we look at the universe with galaxy sized eyes we see a
clumpy universe with galaxies scattered about and clustered into groups.
On smaller scales we see individual stars, some that cluster into groups.
and some that stand alone. And, of course, on even smaller scales we see
individual people. It is only when we look at the universe as a total system
that we can assume homogeneity and isotropy.
After the introduction of GTR a number of scientists, including
Einstein, applied the new gravitational dynamics to the universe as a
whole. In 1927 Georges Lemaitre used GTR to develop the Big Bang
Model. This model predicts that the universe, originally in an extremely
hot and dense state, has since cooled by expanding to the present diluted
state, and continues to expand today. Two years later Edwin Hubble made
one of the profound observations of the early 20th century - the universe is
expanding - supporting the Big Bang prediction of an expanding
universe. To interpret the expansion properly requires an assumption
about how the matter in the universe is distributed, hence the cosmological
principle. The cosmological principle and GTR form the basis for Big
Bang cosmology and lead to very specific predictions for observable
properties of the universe.
For example, given the assumption that the matter in the universe
is homogeneous and isotropic it can be shown (again, trust me) that the
corresponding distortion of space-time (due to the gravitational effects of
this matter) can have one of only three forms, shown schematically in
Figure 2. It can be positively curved like the surface of a ball and finite in
extent; it can be negatively curved like a saddle and infinite in extent; or it
can be flat and infinite in extent - our ordinary conception of space. A
limitation of Figure 2 is that we can only portray the curvature of a 2-
dimensional plane of an actual 3-dimensional space. Note that in a closed
universe you could start a journey off in one direction and, if allowed
enough time, ultimately return to your starting point; in an infinite
universe, you would never return. Which form the universe adopted will
await experiments.
Of course there is a problem with the cosmological principle -
called the horizon problem - the puzzle that the universe looks the same
on opposite sides of the sky even though there has not been enough time
since the Big Bang for light (or anything else) to signal across the universe
and back. Light can only travel so fast. So how do the opposite horizons
“know” how to keep in step with each other, to maintain isotropy? In other
words, the universe is too big for its horizons. This cosmological problem
has a solution. It is called the era of inflation. We shall meet the era of
inflation in the next section when we travel backward in time.
[Figure 2 panels, labeled by the density parameter Ω₀: Ω₀ > 1 (top), Ω₀ < 1 (middle), Ω₀ = 1 (bottom)]
Figure 2. The top image shows a positively curved universe; the middle image a
negatively curved universe; and the bottom image is a flat universe.
The Big Bang had to begin some time/where. If the universe had a
beginning, when/where was it? We know the ‘when’. Several ingenious
astronomical experiments have measured the ‘when’ and they all agree
with an age of about 13.7 billion years, with an error of about ±200 million
years. Because the universe has a finite age we can only see a finite
distance out into space (~13.7 billion light years). This is our horizon. The
Big Bang Model does not attempt to describe the region of space
significantly beyond our horizon, nor can optical telescopes see the
horizon.
We also know the 'where' - it was everywhere. The Big Bang did
not occur at a single point in space as an "explosion" of matter moving
outward to fill an empty universe. It is better thought of as the
simultaneous appearance of space everywhere in the universe. The region
of space within our present horizon was indeed no bigger than a minuscule
point in the far distant past; however, there is no "center of expansion" -
no point from which the universe is expanding. If we picture the universe
as the surface of a ball, then the radius of the ball grows as the universe
expands, but all points on the surface of the ball (the universe) recede from
each other in an identical fashion. The interior of the ball is not part of the
universe.
By definition, the universe encompasses all of space and time as
we know it, so it is beyond the realm of the Big Bang Model to postulate
what the universe is expanding into. In either the open or closed universe,
the only “edge” to space-time occurs at the Big Bang, so it is not logically
necessary (or sensible) to consider this question. Likewise it is beyond the
realm of the Big Bang Model to say what gave rise to the Big Bang.
The question then becomes can we look far enough back to reach
the origins of the CMB?
When we look out in the sky, we are actually looking backwards in
time. Light from distant objects takes longer to reach us than the light
from nearby objects, and thus we are observing now how they appeared in
the past. The light from galaxies, however distant, permits us to see back
only a few billion years, not 13.7 billion. This few billion years is our
look-back time. There probably were no galaxies 13.7 billion years ago, so
it is not surprising that galaxy light is insufficient to show us the CMB.
Optical telescopes will not do the trick. We need to get creative to see that
distant horizon.
If we can somehow reach the epoch of the CMB, then we can use
the CMB to tell us how the universe developed to become what we see
today. Let us try a trip back in time to meet the early universe, where we
might detect those CMB 'wrinkles in space-time’.
Moving Backward in Spacetime in Search of the CMB
From Here-and-Now to the Epoch of Last Scattering
We start the trip with the physical conditions of the here-and-now.
Our current temperature is just about three cold degrees above absolute
zero. There are about 400 photons per cubic centimeter. The total mass in
our observable universe is about 10^53 kg (give or take a few powers of
ten), which is equivalent to about 10^80 hydrogen atoms. That sounds like a
lot until you fold in the volume. We can estimate the volume (V = (4/3)πr^3)
because we know the radius, r, and the speed of light, c. We know the
radius because we have Hubble's Law, which gives us H_0, the Hubble
constant: r = c/H_0, where r is the radius of the Hubble sphere. The mass
density of the universe, then, is only about 10^-30 g per cubic centimeter,
or about one hydrogen atom per cubic meter. This means we live in a
rarefied universe. Fortunately for us there are locally dense spots like our
planet.
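For readers who want to check the arithmetic, here is a minimal Python sketch of this estimate. The Hubble constant value (70 km/s per megaparsec) and the round 10^53 kg mass are assumptions chosen for illustration, not measured inputs reported in this article.

    # Back-of-the-envelope: Hubble radius, Hubble volume, and mean density.
    # H0 = 70 km/s/Mpc and M = 1e53 kg are assumed round numbers.
    import math

    c_kms = 2.998e5          # speed of light, km/s
    H0 = 70.0                # Hubble constant, km/s per Mpc (assumed)
    Mpc_in_m = 3.086e22      # metres per megaparsec

    r = (c_kms / H0) * Mpc_in_m            # Hubble radius r = c/H0, ~1.3e26 m
    V = (4.0 / 3.0) * math.pi * r**3       # volume of the Hubble sphere, m^3

    M = 1e53                               # rough mass of the observable universe, kg
    rho = M / V                            # mean density, kg/m^3
    m_H = 1.67e-27                         # mass of a hydrogen atom, kg

    print(rho * 1e-3, "g/cm^3")            # ~1e-29 g/cm^3
    print(rho / m_H, "H atoms per m^3")    # roughly half a dozen atoms per cubic metre

With these round inputs the density comes out near 10^-29 g per cubic centimeter and a few hydrogen atoms per cubic meter, within an order of magnitude of the figures quoted above; that is as close as "give or take a few powers of ten" inputs allow.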
Hubble’s observations showed that space itself expands with time
everywhere and increases the physical distance between two co-moving
points. If the universe is getting bigger now, then it had to be smaller,
denser, and hotter in the past. Let’s go backwards and see what we can
find.
Stepping backwards in space-time we watch (from our privileged
imaginary position) the universe shrink down in size and heat up in
temperature. See Figure 3 for a conceptual timeline. That 10^53 kg of matter
must squeeze into ever decreasing volumes. When the visible universe
decreases to half its present size, the density of matter increases until it is
eight times higher and the temperature is twice as hot as it is today. Recall
that when one compresses an object or a gas (or the universe), the object
heats. For example, the hand operated air pump one uses to pump up a flat
tire gets hotter as one uses it. The universe does the same thing as it
contracts.
When the visible universe reaches one hundredth of its present
size, its temperature is a hundred times hotter (273 K or 32°F, the
temperature at which water freezes on the Earth’s surface).
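The scaling behind those statements is simple: density goes as the inverse cube of the relative size, and temperature as the inverse of the relative size. A short sketch in Python (using today's CMB temperature of 2.725 K) reproduces the numbers on the Figure 3 timeline:

    # Density scales as 1/a^3 and temperature as 1/a, where a is the relative
    # size of the universe (a = 1 today); compare the Figure 3 timeline.
    T0 = 2.725  # K, the CMB temperature today
    for a in (1.0, 0.5, 0.01, 1 / 1100):
        print(f"size {a:.4g}: density x{(1 / a) ** 3:.3g}, temperature {T0 / a:.4g} K")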
We are moving closer to the Big Bang event. When we are a mere
380,000 years away from the Big Bang the universe is about one eleven
hundredth its present size. The temperature is about 3,000 K. We have
arrived at the important epoch of last scattering or time of recombination.
[Figure 3 scales: relative size 1/1, 1/2, 1/100, 1/1100, 1/10^8 above the timeline; corresponding temperature 3, 6, 273, 3000, 273 x 10^6 K below it]
Figure 3. A conceptual timeline, time increases right to left from the Big Bang to now.
The range in size appears above the timeline. The matching range in temperature in
degrees Kelvin appears below the timeline.
From Time Zero to the Epoch of Last Scattering
Things now get complicated (yes, I know, but stay with me).
Instead of continuing to move down in time from the epoch of last
scattering, let us leap over that epoch to reach the Big Bang, turn around,
and move outward in time to meet that epoch of last scattering from the
other side.
We cannot discuss the Big Bang itself - we do not know the
physics. So we pick up the story a mere whisper after the Big Bang (of
order 10^-32 seconds away), when the era of inflation has just ended. The era of
inflation starts just after the Big Bang and ends roughly 10^-32 seconds later, when
the temperature has dropped to a still-whopping 10^27 K or so. The era of inflation is
the exponential expansion of space-time by an enormous factor of 10^78 in
volume. Our entire observable universe originates in a small, causally
connected region. Before inflation, it is small enough to “know” what
happens at each horizon. Then it inflates drastically maintaining this initial
knowledge. Miniscule when it began inflating, at the end of the era of
inflation there is a universe that will grow to be the one we see. Inflation
answers the problem I mentioned earlier. It answers the horizon problem
and the origin of the large-scale structure of the cosmos. Quantum
fluctuations in the microscopic inflationary region, magnified to cosmic
size, become the seeds for the growth of structure in the universe. Alan
Guth developed this approach in 1980.
After the era of inflation the universe continues to expand, but not
exponentially. About a minute and a half after the Big Bang, there are no
atoms; however, all the protons and neutrons have formed.
Continuing outward, when the visible universe is only one hundred
millionth its present size, its temperature has dropped to 273 million
degrees above absolute zero and the density of material is comparable to
the density of air at the Earth’s surface. At these high temperatures,
hydrogen is completely ionized into free protons and electrons, although
neutrons have combined with protons to form the deuterium and helium
nuclei in a process called Big Bang nucleosynthesis. The result gives a
ratio of about 12 hydrogen nuclei to 1 helium nucleus.
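A quick way to see what that ratio means by mass: hydrogen nuclei have mass of roughly 1 and helium nuclei roughly 4 (in atomic mass units), so 12 hydrogens for every helium gives a helium mass fraction of 4/(12 + 4) = 0.25, the familiar figure of about 25 percent helium by mass that Big Bang nucleosynthesis predicts.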
At this size and temperature everything is coupled together into a
hot, opaque, dense primordial soup (plasma). This hot soup has a smooth
consistency and consists of fundamental particles like electrons, protons,
helium nuclei, deuterium nuclei, neutrinos, and of course, photons. The
opaqueness is important. If we were there we would not ‘see’ anything.
Radiation and matter are coupled together. There are no atoms as separate
objects. What we think of as matter does not exist.
In this very hot dense soup the photons easily scatter off the free
electrons. The photons are densely packed enough to respond like
bouncing molecules in a gas. Thus, photons wandered through the soup of
the early universe (bouncing off electrons), just as optical light wanders
through a dense fog. This process of multiple scattering produces what is
called a thermal or blackbody spectrum of photons. It essentially
randomizes the photons.
The universe continues to expand and cool. Figure 4 gives a
conceptual view of those earliest moments.
[Figure 4 labels: "PRESENT, 13.7 billion years after the Big Bang." The cosmic microwave background radiation's "surface of last scatter" is analogous to the light coming through the clouds to our eye on a cloudy day; we can only see the surface of the cloud where light was last scattered.]
Figure 4. Two views of the universe - the left shows details of the early universe; the
right shows what we can observe.
About 380,000 years after the Big Bang the temperature has fallen
to around 3,000 K. It is now that the electrons and nuclei are able to
combine into atoms (mostly hydrogen). Once atoms could form as
separate objects, then radiation and matter could decouple, and radiation
could move through space largely unimpeded. The soup clarifies. The
universe becomes transparent. Now we can ‘see’. With electrons locked
up in atoms, radiation (photons) can no longer scatter off free electrons.
Photons can move freely through space. We have arrived back at the
epoch of last scattering. We call the radiation (those randomized photons)
from this epoch the Cosmic Microwave Background. The CMB is a
snapshot of that last scattering epoch, i.e. it is an image of that moment
when matter and photons decoupled. This epoch is the barrier to our
observations of the early universe; the epochs behind this barrier
are not directly accessible to us.
At The CMB
Whew! Now that we have crisscrossed the universe to find the
origins of the CMB let us concentrate on the CMB itself.
Just as the universe heated as we compressed it, it will cool as it
expands. As the universe expands, the temperature of the CMB photons
drops; today, the CMB radiation is very cold and invisible to the naked
eye. Now down to 2.725 K, the temperature will continue to fall as the
universe continues to expand. For comparison, human beings radiate
around 310 K (98.6 °F) in the infrared - this is why infrared night goggles
work.
The CMB photons kept scattering until the epoch of last scattering
- clever, this is why it is called the epoch of last scattering. After that they
move freely maintaining their thermal form through the transparent
universe we see today. Thus, we expect their distribution of energy (the
spectrum) to maintain its blackbody shape. Figure 5 plots CMB frequency
(wavelength) versus CMB intensity. It matches rather well to the
theoretical blackbody curve.
Figure 5. The spectrum of the CMB - this is what the blackbody curve looks like at the
temperature (2.725 K) of the CMB; the spectrum peaks in the microwave range, at a
frequency of 160.2 GHz, corresponding to a 1.9 mm wavelength.
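Those peak numbers are easy to verify. The short Python sketch below evaluates the Planck blackbody formula at 2.725 K and locates its peak in frequency; it is a numerical illustration only, not the COBE analysis.

    # Evaluate the Planck blackbody spectrum B_nu(T) at T = 2.725 K and find the
    # frequency at which it peaks; it should land near 160 GHz (~1.9 mm).
    import numpy as np

    h = 6.626e-34    # Planck constant, J s
    k = 1.381e-23    # Boltzmann constant, J/K
    c = 2.998e8      # speed of light, m/s
    T = 2.725        # CMB temperature, K

    nu = np.linspace(1e9, 800e9, 100_000)                    # 1 to 800 GHz
    B = (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))  # W m^-2 Hz^-1 sr^-1

    nu_peak = nu[np.argmax(B)]
    print(nu_peak / 1e9, "GHz")       # ~160 GHz
    print(c / nu_peak * 1e3, "mm")    # ~1.9 mm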
To study the CMB astronomers will look through much of the
universe (as if it were clear air) back to when it becomes opaque: a view
back to 380,000 years after the Big Bang. This is the “wall of light” or the
epoch of last scattering. Maps of the temperature of the CMB are maps of
this surface of last scattering, and astronomers hope to see the collection
of spots (wrinkles) in space at which the decoupling (recombination)
occurred.
Recall that the universe ‘cleared’ when hydrogen atoms first
formed. This is usually called “time of recombination.” Thus the
temperature of the CMB at any given spot on the sky is a relic of this time.
During the 1960s the interpretation of the CMB was controversial.
Some proponents of the steady state theory of the universe (no Big Bang)
argued that the microwave background was the result of scattered starlight
from distant galaxies. However, during the 1970s the consensus grew that
the CMB was a remnant of the Big Bang. This was largely because
measurements at a range of frequencies showed that the spectrum was
probably a thermal, blackbody spectrum, a result that the steady state
model was unable to reproduce. It was time for a real experiment.
The Cosmic Background Explorer
Many astronomers predicted the blackbody spectrum and the
wrinkles (anisotropies) in the CMB, but it took the work of two
astronomers, George Smoot and John Mather, to clinch the issue. They
designed and launched the satellite called the Cosmic Background
Explorer (COBE) to study the CMB. They received the 2006 Nobel Prize
for their efforts.
COBE was launched on November 18, 1989, and carried three
instruments, one to search for the cosmic infrared background radiation,
one to map the cosmic radiation sensitively, and one to compare the
spectrum of the cosmic microwave background radiation with a precise
blackbody. Launched into a near-Earth orbit, the satellite had an altitude
of 900 km, making 14 orbits per day.
COBE showed that the CMB spectrum is that of a nearly perfect
blackbody with a temperature of 2.725 ± 0.002 K. COBE measured the
spectrum at 34 equally spaced points along the blackbody curve. The error
bars on the data points are so small that they cannot be seen under the
predicted curve in the figure (Figure 5)! The CMB spectrum by the COBE
satellite is the most precisely measured blackbody spectrum in nature.
When the COBE team presented their results at a 1992 meeting of the
American Astronomical Society they received a rare standing ovation.
Even Stephen Hawking gave it a rave review in 1992 when he said it was
the discovery of the century, if not of all time.
As good a match as it was, an exact match to the curve would be
worrisome because if the background radiation is absolutely smooth, then
how do we get galaxies to form? We need wrinkles, a tiny anisotropy, in
the smooth background to form the condensations of matter (seeds) from
which galaxies could grow - those hills and valleys of Figure 1.
Fortunately, COBE also showed that the CMB has a tiny intrinsic
anisotropy (ripples) at a level of one part in 100,000: the rms temperature
variations are only 18 µK or, put another way, δT/T ≈ 10^-5, where T is
temperature. This means the CMB is not perfectly isotropic; it has ripples
(wrinkles). It is in these ripples that the structures we see today will form.
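(As a quick check on those numbers, 18 µK divided by 2.725 K is about 7 × 10^-6, just under one part in 100,000.)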
How did the COBE team find the miniscule ripples in their data? It
took work. They had to remove interfering ripples in the data before the
very small remaining variations could be seen. One rather large interfering
wrinkle is the dipole anisotropy (discovered in 1969). The CMB appears
slightly warmer in the direction of one’s movement than in the opposite
direction. Photons arriving from the direction of motion are boosted in
energy; those from behind lose energy. Known as the great cosine in the
sky, the dipole anisotropy is the motion of the Earth relative to the CMB,
measured as the 24 hour anisotropy in the background. The relative
velocity of the Earth will result in a temperature distribution across the
sky:
T(θ) = T_0 [1 + (v/c) cos θ]

hence the term 'great cosine'. T_0 is the average temperature, v is velocity, c is the speed
of light, and θ is the angle of view across the sky. COBE finally pinned
this down to the 6σ level and determined that our Local Group of galaxies
(the galaxy cluster that includes our Milky Way) appears to move
relative to the CMB at 627 ± 22 km/s towards the constellation of Virgo.
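Plugging the quoted speed into the great-cosine formula shows how big the dipole is for an observer moving with the Local Group. This is a minimal sketch; the dipole actually measured from Earth is somewhat smaller because the Sun's own velocity relative to the CMB is roughly 370 km/s.

    # Amplitude of the CMB dipole, dT = T0 * (v/c), for the Local Group
    # velocity quoted in the text.
    T0 = 2.725        # K, CMB temperature
    v = 627.0         # km/s, Local Group velocity relative to the CMB
    c = 299792.458    # km/s, speed of light
    dT = T0 * v / c
    print(dT * 1e3, "mK")   # ~5.7 mK peak deviation from the mean temperature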
Why the motion? A proposed answer is that it is due to the
gravitational pull of a “Great Attractor” (a source of gravitational pull)
that was postulated to explain our motion. A search was started, and
eventually it was found that such large clumps of matter as the
hypothesized Great Attractor occur regularly throughout the universe. It
is now believed that by summing over the mass in our Milky Way
neighborhood (within about 100 million light years) one finds the net
unbalanced attraction that explains our motion. The CMB is then the
standard frame of reference for cosmology work.
Astronomers illustrate the CMB using a special type of map - an
equal-area Mollweide projection. To understand these maps first consider
how the Earth appears in this projection (Figure 6). We can see the
continents and oceans seen looking down from above. Astronomers use
the same type of projection of the sky only looking up instead of down.
Figure 6. A map showing the Earth spread out in an equal-area Mollweide projection.
Figure 7 shows three data cuts from COBE. The maps show the
entire sky as seen in Galactic coordinates (similar to looking at the Earth
from above in Figure 6, only we are looking out towards the sky). The
orientation of the maps is such that the plane of the Milky Way runs
horizontally across the center of each image. The Milky Way appears as a
thin strip. The top map shows all of the data - it looks very smooth.
Hidden in these data are very small variations. The middle map shows the
dipole anisotropy - what is left after the smooth part is removed. The
bottom map shows what is left after the top and middle pieces are
removed. The grey streak through the equator of the bottom map is the
Milky Way signal.
There are two sources for the remaining ripples seen in the bottom
map of Figure 7: (1) Emission from the Milky Way (not the CMB)
dominates the equator of the map but is quite small at areas away from the
equator. We understand this and need to remove it. (2) Ripples in the
CMB from the edge of the visible universe dominate the regions away
from the equator. We need to keep this source because this is what we are
looking for. When all the interfering items are removed one gets Figure 8.
There is also residual noise in the maps from the satellite
instruments themselves, but this noise is quite small compared to the
signals in these maps. To get a glimpse of the errors, consider what the
Earth looks like when COBE’s instrumental errors are added to Figure 6 to
produce Figure 9. The underlying continental structure can still be seen.
DMR 53 GHz Maps
Figure 7. The top image shows the temperature of the microwave sky in a scale in which
grey represents the completely uniform temperature on this scale. The actual temperature
of the cosmic microwave background is 2.725 Kelvin. The middle image shows the same
map displayed in a scale such that dark patches correspond to 2.721 Kelvin and light
patches to 2.729 Kelvin. The “yin-yang” pattern is the dipole anisotropy that results from
the motion of the Sun relative to the rest frame of the CMB. The bottom image shows the
microwave sky after the dipole anisotropy has been subtracted from the map. This
removal eliminates most of the fluctuations in the map: the ones that remain are thirty
times smaller. On this map, the hot regions, shown in light patches, are 0.0002 Kelvin
hotter than the cold regions, shown in dark patches. The grey streak through the middle
represents the Milky Way.
DMR’s Two Year CMB Anisotropy Result
Figure 8. Following subtraction of the dipole anisotropy and components of the detected
emission arising from dust (thermal emission), hot gas (free-free emission), and charged
particles interacting with magnetic fields (synchrotron emission) in the Milky Way, the
CMB anisotropy can be seen as a mottling (rippling) in the COBE data.
Figure 9. We can faintly see the outlines of the continents of Figure 6 underneath the
added noise.
You can see what a complex data reduction process the COBE
team had to use. The detailed analysis of CMB data to produce maps, an
angular power spectrum, and ultimately cosmological parameters, is a
complicated, computationally difficult problem. The COBE team did not
release their data until they had finished many months of intensive error
analyses.
After COBE
Astronomers divide the sky into angular degrees, so that 90° is the
distance from the observer’s horizon to the zenith. COBE, with its slightly
fuzzy vision, measured temperature ripples in the 10° to 90° range, which
means COBE could measure no smaller than the equivalent of an angular
distance twice the distance from the Earth to the Sun. Those hills and
valleys (Figure 1) of the universe are shallow but quite large. COBE was
not able to resolve spots as small as clusters or even superclusters of
galaxies. Hence COBE saw the initial conditions of the universe. Maps
such as that shown in Figure 8 are amazing pictures of the early universe.
The small CMB temperature fluctuations trace real wrinkles in the
density of matter in the early universe as they were imprinted shortly after
the Big Bang. Thus, they can reveal a great deal about the early universe,
the origin of galaxies, and large scale structure in the universe.
Of course, once astronomers had the COBE data, they wanted to
see more and finer detail. Astronomers today are interested in small scale
fluctuations; i.e., they need to find the one degree sized wrinkles in which
seeds gather to grow structure. To do this astronomers add another type of
representation to measure these tiny wrinkles, one analogous to sound.
The Sound of Wrinkles
What astronomers detect on angular scales (the wrinkles) is
actually ‘sound’. Photons, if they are packed densely enough (as they are
in the plasma of the early universe - that hot primordial soup), can behave
as a gas just as air molecules do. Ordinary sound waves are just travelling
compressions and rarefactions of the air which we hear as sound as they
strike our ear drum (Figure 10). The hot photons of the early universe
carry their version of sound waves due to gravity acting to compress the
photon gas and radiation pressure acting to resist it. This battle
between compression and rarefaction produces acoustical ringing (like the
ding-dong of a struck bell). The reason why we see it rather than hear it is
that when we compress the photon gas it becomes hotter (that air pump
thing). We see the photon sound waves as hot and cold spots on the sky -
ripples in the universe. The theory of inflation predicts that there should be
as many hot spots as cold spots. These are conceptually the bottom figure
in Figure 1. What sets off the battle? It is seeded by random quantum
fluctuations from the very beginning of the Big Bang.
[Figure 10 panels: a tuning fork with its arms spread apart (compression) and with its arms pushed together (rarefaction)]
Figure 10. Sound from a tuning fork showing compression (hot spot) and rarefaction
(cool spot) of air molecules
Actually, this is not the kind of sound wave we hear on Earth. The
CMB wavelength is very long, on the order of 1 to 1000 mega-parsecs,
and its medium is not the air but hot plasma with a mixture of photons and
other elementary particles. One parsec is a distance equal to 3.08568025 x
10^13 km, so the wavelength is v-e-r-y long.
The patches in Figure 8 are due to this acoustical ringing, the size
of the patch measured in angles. As I said, COBE’s patches are in the 10°
to 90° range.
In music, the pattern of overtones (ringing) helps us distinguish
one instrument from another; it is a kind of signature of the instrument that
makes the sound. In the same way, the pattern of overtones in the sound
spectrum of the CMB ripples follows a pure harmonic series with
frequency ratios of 1:2:3. Astronomers expect to see a series of acoustic
peaks (overtones) on top of the smooth CMB spectrum. If astronomers can
compare the CMB oscillations with the distribution of galaxies at different
stages of the universe’s history, they can measure the rate of the expansion
of the universe.
The First Overtone Peak of the CMB
Many ground based and balloon experiments have now measured
the CMB on the one degree scale. What CMB experimentalists do is take a
power spectrum of the temperature maps (Figure 8), much as one would if
one wanted to measure background noise. In essence, they take the
temperature of angular patches of the universe. The peaks of the acoustic
oscillations represent regions that were slightly denser than the rest of the
universe.
The details (which you can ignore and skip to the next paragraph if
you choose) are that they use a spherical harmonic expansion of the CMB
sky, where T is temperature and the Y_ℓm are the usual spherical harmonics:

T(θ, φ) = Σ_(ℓ=0 to ∞) Σ_(m=-ℓ to +ℓ) a_ℓm Y_ℓm(θ, φ)
For a given signal, a power spectrum gives a plot of the portion of a
signal’s power (energy per unit time) falling within given frequency bins.
Essentially, the power spectrum is a plot of the amount of temperature
fluctuation against the angular size. The fluctuation is the difference in the
two temperature measurements at the corresponding points. The angular
wavenumber, called a multipole ℓ, of the power spectrum is related to the
inverse of the angular scale.
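To give a flavor of the computation (this is not the actual COBE or WMAP pipeline), the sketch below uses the open-source healpy package to turn a pixelized temperature map into an angular power spectrum C_ℓ. The map here is just Gaussian noise with an 18 µK rms, standing in for real data.

    # Toy angular power spectrum from a HEALPix map of pure Gaussian noise
    # (18 microkelvin rms), standing in for a real CMB temperature map.
    import numpy as np
    import healpy as hp

    nside = 128                                  # map resolution
    npix = hp.nside2npix(nside)
    t_map = np.random.normal(0.0, 18e-6, npix)   # temperature fluctuations, K

    cl = hp.anafast(t_map, lmax=300)             # angular power spectrum C_l
    ell = np.arange(cl.size)
    dl = ell * (ell + 1) * cl / (2 * np.pi)      # conventional l(l+1)C_l / 2pi

    # Rough rule of thumb: angular scale ~ 180 degrees / l, so l ~ 100
    # corresponds to patches roughly a degree or two across.
    print(dl[100])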
The overtone value ℓ = 100 (the ℓ of the above equation)
corresponds to approximately one degree on the sky. Recent experiments
(Figure 11) have shown that the overtone power spectrum exhibits a sharp
peak of exactly the right form to be the ringing or acoustic phenomenon
long awaited by cosmologists. The section in Figure 11 marked 'initial
conditions' represents the smooth blackbody curve. Then at ℓ = 100 the
first peak appears where it was expected.
Figure 11. The multipole expansion of the wrinkles in the early universe. ℓ runs along
the logarithmic x axis, and ΔT in µK runs along the y axis. The first peak is indicated by
the arrow. There are faint error bars from the various experiments.
The rich structure in the plot after the initial conditions is the direct
consequence of the acoustic oscillation driven by repulsive radiation
pressure and attractive gravity (that battle I mentioned). The main peak is
the oscillatory mode that went through 1/4 of a period (reaching maximal
compression) at the time of recombination (when electrons and protons
formed neutral atoms). The lower peaks correspond to the harmonic series
of the main peak frequency (those ringing overtones). An additional effect
comes from geometrical projection such that the angular position of the
peaks is sensitive to the spatial curvature of the universe.
A series of ground and balloon-based experiments have measured
CMB anisotropies on smaller angular scales. The primary goal of these
experiments was to measure the scale of the first acoustic peak, which
COBE did not have sufficient resolution to resolve. The first peak in the
anisotropy was tentatively detected by the Toco experiment (a ground-based
instrument at Cerro Toco in Chile), and the result was confirmed by the BOOMERanG and
MAXIMA balloon experiments. BOOMERanG (Figure 12) flew out of
the South Pole at an altitude of 42,000 meters. Its last flight was in 2003.
Figure 12. BOOMERanG ready for launch
When I first saw the BOOMERanG results from 1998 my jaw
dropped as I realized I was looking at actual spatial fluctuations in the
early universe. I could see ripples ringing across time. These
measurements demonstrated that the geometry of the universe is
approximately flat, rather than curved. They ruled out cosmic strings as a
major component of cosmic structure formation and suggested that
inflation was the right theory of structure formation.
Two of the greatest successes of the Big Bang theory are its
prediction of the CMB's almost perfect blackbody spectrum and its detailed
prediction of the anisotropies in the cosmic microwave background. The
recent Wilkinson Microwave Anisotropy Probe (WMAP — another
satellite) has precisely measured these anisotropies over the whole sky
down to angular scales of 0.2 degrees. Launched 30 June 2001 into a
distant orbit, the WMAP satellite ended science observations on 20
August 2010. Figure 13 shows the total of WMAP data after all known
effects are removed.
Figure 13. Seven years of CMB data from WMAP - the rippling (hot and cold spots) is clear.
[Figure 14 axes: angular scale (90°, 2°, 0.5°, 0.2°) across the top; multipole moment ℓ along the bottom]
Figure 14. Acoustical data from several experiments - The power spectrum of the cosmic
microwave background radiation temperature anisotropy in terms of the angular scale (or
multipole moment). The data shown come from the WMAP (2006), Acbar (2004),
Boomerang (2005), CBI (2004), and VSA (2004) instruments. Also shown is a
theoretical model (solid line).
And More Overtones Ring In
The second acoustical peak was tentatively detected by several
experiments before being definitively detected by WMAP, which has also
tentatively detected the third peak. As of 2010, several experiments to
improve measurements of the polarization and the microwave background
on small angular scales are ongoing. These include DASI, QUaD, Planck
spacecraft, Atacama Cosmology Telescope, South Pole Telescope, and the
QUIET telescope. Figure 14 shows the acoustical spectrum from many
experiments. The primary peak is definitive. The secondary peak is clear.
The errors increase for the third peak.
The Future
Where do we go from here? Peak by peak the experiments beat
down the errors. Part of this is due to improvements in the techniques for
such calculations that make them much easier, and part has been
motivated by a desire to know how experiments will complement each
other. No one experiment is equally sensitive to all cosmological
parameters^'", and that there are combinations of certain parameters that
will yield statistically identical observations for a given experiment. This
is known as parameter degeneracy.
To leave you with a taste of what is coming - in the case of the
CMB, there is a tremendous degeneracy between matter and the
cosmological constant, Λ (what is left over after we account for all the
matter in the universe). The position of the first acoustic peak in the CMB
power spectrum is sensitive to their sum, but doesn’t tell us much about
either one independent of the other. Fortunately, there are other
experiments that don't share this degeneracy (e.g., high-redshift
supernovae). This degeneracy can also be broken by looking at the
gravitational lensing of the CMB.
Gravitational lensing is what happens when the signal from some
object (or CMB) encounters a lot of matter between it and you, the
observer. The signal is lensed by the gravitational effects of the
intervening matter just as the outside world is lensed by the plastic in your
eyeglasses. However, as opposed to glasses, gravity lenses can produce
multiple images. Figure 15 shows the effects of a real gravity lens.
Since the CMB photons that are coming to us from the epoch of
last scattering have to travel through all manner of intervening matter to
get to us, we would expect that there is going to be some lensing. We are
going to be looking for distortions in the CMB power spectrum that would
be indicative of lensing. This is a tiny effect, however, only a 3% (10 µK)
effect on the main acoustic peaks. If we can detect it, then we can gather
information on the large scale structure of the universe and gather clues to
the current cosmological enigma - dark energy. Dark energy is that
hypothetical energy that fills the universe and tends to increase the rate of
expansion of the universe.
Figure 15. A gravitational lens - the arc like structures are the images of the very distant
source. It is distorted by the matter between you and it.
In the meantime, people are actively measuring the Sunyaev-
Zel'dovich effect (abbreviated as the SZ effect), which is the result of
high-energy electrons distorting the CMB spectrum (in which the low-energy
CMB photons receive energy boosts during collisions with the high-energy
electrons). When a CMB photon interacts with hot gas in a galaxy
cluster it experiences a slight increase in energy. The SZ effect causes a
change in the apparent brightness of the CMB radiation when looking
towards a cluster of galaxies or any other reservoir of hot plasma. Inverse
Compton scattering by the hot gas will boost the energy of the CMB
photons and shift the spectrum. The effect is redshift-independent, and so
provides a unique probe of the structure of the universe on the largest
scales.
Are we headed for a Big Crunch or a lingering fadeout? There are
exciting times ahead.
Endnotes
Wien's Law relates the temperature to the wavelength of the peak of the energy distribution: λ_max = b/T, where b ≈ 2.9 mm·K is Wien's displacement constant.
" Fred Hoyle coined the term Big Bang during a BBC 1949 radio broadcast.
*" Misner, Charles W. Thome, Kip. S.; Wheeler, John A. (1973), Gravitation, W. H.
Freeman, ISBN 0-7167-0344-0 - trust me, this is not a book to read lightly.
This is Hubble's Law, which relates redshift to distance: recession velocity ∝ distance
(v = H_0 d), where H_0 is the Hubble constant, the constant of proportionality.
Actually, determinations vary over several powers of ten depending on the assumptions;
assuming a flat universe near the critical density one gets roughly 10^53 kg.
Based on the assumption that the universe is approximately flat.
Remember that c is the fastest any signal can go.
Planck's law produces a blackbody spectrum - a blackbody is a perfect absorber. The
radiation is homogeneous and isotropic:

B_ν(T) = 2hν^3 / [c^2 (e^(hν/kT) - 1)]
http://lambda.gsfc.nasa.gov/ is the resource for COBE and other NASA CMB missions.
The London Times, 25 April 1992.
The Milky Way is the Galaxy that contains our solar system.
GTR predicts that light (photons) is affected by gravity.
Outward pressure due to electromagnetic radiation - the pressure against a surface
exposed in a space traversed by radiation uniformly in all directions is equal to one
third of the total radiant energy per unit volume within that space.
http://www.astro.caltech.edu/~lgg/boomerang/boomerang_front.htm is the resource for
BOOMERanG.
Parameters like the amount of matter, the rate of expansion, and the critical density.
Rashid Sunyaev and Yakov Zel'dovich predicted the effect in the 1970s.
Gender and International Collaborations of Academic Scientists
and Engineers*: Findings from the Survey of Doctorate
Recipients, 2006
Lisa M. Frehill and Kathrin Zippel
Energetics Technology Center & Northeastern University
National Action Council for Minorities in Engineering, Inc.
Introduction
Science and engineering, as with many other enterprises in today’s world,
have become increasingly global. Companies conduct business in multiple nations
and, in the past couple of decades, have expanded research facilities outside the
United States to take advantage of a globally diverse workforce. Labor markets
for scientists and engineers are increasingly less geographically bounded; talented
scientists and engineers are recruited by employers without regard for their
citizenship. Anecdotal evidence reported by engineers at various U.S. firms, for
example, highlights the “shrinking” globe, as project teams in some companies
have been collaborating on projects across international borders for 20 years or
more. Consistent with the globalization of science and engineering, collaboration
across international borders has become more common too.
In academic settings, international experience has been growing in
importance. Just as corporations have become multinational enterprises, so too are
many universities becoming global, establishing campuses and recruitment offices
outside of the United States to educate international students both here and
abroad. Further, U.S. graduate programs in the sciences and engineering continue
to attract substantial numbers of international students. U.S. students, too, are
encouraged to study abroad in the traditional areas of languages and the
* Acknowledgements: This research was supported by a grant from the National Science Foundation, OISE
0936970. Additional work was supported by Frehill Advanced Research, LLC. Any findings, conclusions or
recommendations are those of the authors and do not reflect the opinions of the National Science Foundation.
Earlier drafts were reviewed by John Tsapogas and Sorina Vlaicu: the authors are grateful for their
comments, as well as those of two anonymous reviewers and the editor at the Journal of the Washington
Academy of Sciences, which substantially improved the work. Lisa M. Frehill is a Senior Analyst at
Energetics Technology Center (lfrehill@etcmd.com) and Director of Research, Evaluation and Policy at the
National Action Council for Minorities in Engineering, Inc. Kathrin Zippel is an Associate Professor of
Sociology at Northeastern University (k.zippel@neu.edu).
humanities, and increasingly do so in the sciences and engineering. Finally, for
faculty, an international reputation is becoming an increasingly important
criterion in post-tenure academic advancement decisions.
However, there are many ways in which men and women face different
worlds with respect to working outside of the United States. In the corporate
sector, in some cases, companies claim to protect women by not sending them to
locations that might be considered too dangerous, or presume that women would
not wish to go to these locations due to safety concerns. Additionally, if not to
protect women from potential harm, some companies have prevented women
from traveling abroad due to presumed discrimination that the women would
experience in the foreign country. As more women have become heads of state
or traveled as diplomatic envoys, even nations that, from the outside, appear to
have very strict rules regulating women’s behavior, have shown that these rules
can be flexible when necessary.
National Science Foundation data are presented in Figure 1 to show the
extent to which those who hold U.S. doctoral degrees in science and engineering
collaborate internationally. Two important findings from these data are: (1)
women, regardless of employment sector, lag men within that sector in
international collaboration and (2) those in educational institutions lag scientists
and engineers employed in government and business/industry in international
collaboration. Indeed, women in business and industry settings report a level of
international collaboration that reveals a wider sex gap than in any other sector.
Yet, the 27 percent of women scientists and engineers in business and industry
who did collaborate internationally was similar to the representation of men who
collaborated internationally in educational institutions.
Figure 2 shows that doctoral degree field and sex interact to produce
different outcomes with respect to international collaboration. Within each of the
five science and engineering fields, within the educational employment sector,
women’s reported rate of international collaboration lags that of men - but the sex
gap across the fields varies greatly. The widest gaps are in those fields in which
women have made greater inroads in relative participation in recent years: life and
related sciences and social and related sciences, as well as in the physical and
related sciences. The gap is only 5 percent or less for women and men in
computer and mathematical sciences and engineering. These disciplinary
differences may be at the heart of work by Melkers and Kiopa, who found that
even though there was no difference in the likelihood of women and men
collaborating internationally, the nature of the support men and women received
differed. Women report that they received benefits such as paper review and
assistance with grant proposals, while men were more likely to obtain access-
based rewards like nominations for awards.^
Figure 1. International Collaboration by Employment Sector and Sex, U.S. Doctoral-Degreed
Scientists and Engineers, 2006
Source: Author's weighted analysis of National Science Foundation Survey of Doctorate
Recipients, Restricted-use file. The use of NSF data does not imply NSF approval of the
research, research methods or conclusions.
Note: * indicates statistical significance at alpha = 0.01.
Figure 2. International Collaboration, U.S. Doctoral-Degreed Scientists and Engineering at
Academic Institutions, by Sex and Broad Field 2006
[Bar chart, y axis up to 35 percent; fields along the x axis: Computer science and mathematics, Life and related sciences, Physical and related sciences, Social and related sciences, Engineering]
Source: Author's weighted analysis of National Science Foundation Survey of Doctorate Recipients, Restricted-use file. The
use of NSF data does not imply NSF approval of the research, research methods or conclusions.
Note: * indicates statistical significance at alpha =0.01.
This paper makes use of nationally-representative data from the Survey of
Doctorate Recipients (SDR) to answer a series of questions about international
collaboration by U.S. academics with doctoral degrees to better understand the
differences shown in the two figures above:
• Among those who collaborate internationally, to what extent are men
and women similar or different in terms of the travel they do for these
collaborations?
• To what extent does international collaboration vary by race/ethnicity
and sex?
• How do tenure status and rank impact international collaborations?
• In what ways are international collaborations impacted by family
status issues (such as marital status and children)?
• To what extent does international collaboration differ by citizenship?
Data and Methods
Data
The SDR is a nationally-representative survey with data collected by the
National Opinion Research Center under contract to the National Science
Foundation every two to three years with both longitudinal and cross-sectional
features. Data are longitudinal in that, once impaneled, respondents are tracked
and complete the survey in each administration after earning their doctoral degree.
New respondents are added to the program via a sample from the Doctorate
Records File, which includes information on graduates of research doctorate
programs at U.S. colleges and universities. For this paper we used only one year
of data in the SDR, specifically the 2006 SDR, in which a set of questions was
asked about international collaboration in a new module, as discussed below. The
2006 SDR administration had 30,800 respondents representing 711,800 doctoral-
degreed scientists and engineers.
Analytical Approach
As will be discussed below, all dependent and independent variables are
categorical, often with simple yes/no response categories. The principal analytical
strategy was cross-tabulation, often with multiple variables or with selection of a
limited group for analysis. SPSS for Windows was used to analyze the 2006 SDR
restricted-use dataset. Further, due to the complex stratified sampling plan, the
National Science Foundation, Science Resource Statistics division recommends
the use of sampling weights in analysis of the SDR data. Weighted analysis
permits the generalization of results to the population from which the sample was
drawn. The large populations, then, produce many statistically significant results,
even when group differences are quite small. We, therefore, call attention to those
differences that are meaningful, setting as our standard sex gaps where
the difference of proportions is greater than 5 percentage points. Because the
weighted analysis yields population estimates that are quite large, routine
statistical procedures indicate significance even for very small subgroup
differences. For hypothesis testing — i.e., testing whether the responses of women
and men differed significantly — we established alpha = 0.01 for test statistics. We
used chi-square tests and difference of proportions tests, as mathematically
appropriate.
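As an illustration of this kind of weighted cross-tabulation and test, the following Python sketch uses made-up records and hypothetical column names; the restricted-use SDR file, its actual variable names, and its weighting scheme are not reproduced here.

    # Weighted cross-tabulation of international collaboration by sex, plus a
    # difference-of-proportions z-test at alpha = 0.01. All data and column
    # names below are hypothetical stand-ins for the restricted-use SDR file.
    import pandas as pd
    from statsmodels.stats.proportion import proportions_ztest

    df = pd.DataFrame({
        "sex": ["F", "M", "F", "M", "M", "F"],
        "intl_collab": [1, 0, 1, 1, 0, 0],   # 1 = collaborated internationally
        "weight": [23.5, 18.2, 30.1, 22.7, 19.9, 27.4],
    })

    # Weighted percentage collaborating, by sex
    df["wt_collab"] = df["intl_collab"] * df["weight"]
    tab = df.groupby("sex")[["wt_collab", "weight"]].sum()
    tab["pct"] = 100 * tab["wt_collab"] / tab["weight"]
    print(tab)

    # Difference-of-proportions test (unweighted counts here, for simplicity)
    counts = df.groupby("sex")["intl_collab"].sum().values
    nobs = df.groupby("sex")["intl_collab"].count().values
    z, p = proportions_ztest(counts, nobs)
    print(z, p, "significant at 0.01" if p < 0.01 else "not significant at 0.01")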
Dependent Variables
The specific questions of interest here (the dependent variables of
interest), both with simple yes/no answers, were:
(1) International Collaboration: In performing the principal job you held
during the week of April 1, 2006, did you work with individuals located in
other countries? (defined as international collaboration).
(2) International Travel: In your work with individuals located in other
countries, did you travel to a foreign country for collaborative activities?
(defined as travel abroad).
Other items included in the module were not used for this paper. We recognize
that this is a very broad definition of international collaboration.
Independent Variables
Sex was the critical independent variable: most analyses compare women
and men. Another key demographic variable of interest was race/ethnicity, which
was coded as a three-category variable:
• Asian Americans (included Pacific Islanders)
• Underrepresented minority included African Americans, American
Indians/Alaska Natives, and Hispanics (and noted as URM)
• White (non-Hispanic whites).
Anyone for whom race/ethnicity was either unknown or marked as “other” was
omitted from most analyses, except as noted. To date, the literature has little
information about the impact of race/ethnicity on the likelihood of collaborating,
in general, or of collaborating internationally, in particular.^^ Data on participation
in study abroad programs indicates that African Americans and American Indians
are less likely than White students to engage in study abroad activities and are
often “first generation international travelers.”
The respondent’s doctoral degree was used to code fields, which is
provided at various levels of detail within the SDR dataset. Using the broadest
field coding level, we analyzed international travel and collaboration for scientists
and engineers who reported one of the following five categories as their doctoral
degree field of study:
• Computer and mathematical sciences;
• Life and related sciences (note: health and medical fields are not
included in this category);
• Physical and related sciences;
• Social and related sciences (includes psychology);
• Engineering.
There are two other broad fields at this level, which were omitted from the
analysis: "Science and engineering related" and "Non-S&E" fields, which
together had just 1,465 cases.
This paper examines issues for U.S. academics in particular, so analyses
were also restricted to only those who reported being employed in an educational
institution. While this includes K-12 (n=412) and two-year colleges (n=510), the
overwhelming majority of cases (n=12,128) were people employed in four-year
colleges and universities, medical schools and university-affiliated research
centers. With both the field of study and the employment sector restrictions, the
overall number of cases in the analyses was 12,351, which represents 276,541
U.S. doctoral-degreed scientists and engineers employed in academic institutions.
Several family status variables were used to capture gendered impacts
described in previous work by George, Malcom and Frehill (2009). First, the
SDR asked about marital status, which was consolidated into three categories,
consistent with those used in most social science reporting:
• Married or in a marriage-like relationship (which we refer to as
married/partnered);
• Widowed, separated, or divorced;
• Never married.
Also examined was whether a spouse's work status played a role in individuals'
likelihood of engaging in international collaboration and of traveling in connection
with that collaboration. Respondents who reported that they were married or in a
marriage-like relationship were asked if their spouse worked and, if so, was this
work full or part time. Women in professional fields, academia included, who are
married were more likely than men to indicate that their partner works at a full-
time position, while men were more likely to report a spouse who has no job in
the paid labor force. This suggests that academic men are more likely than their
female peers to have a spouse who is in a more support-function role, thereby
enabling them to travel and engage in international collaborative relationships
(endnotes 17-19).
Several items asked about the presence of children of various ages in the
respondent's home. The following categories, which are not mutually exclusive,
were therefore used to capture the potential impacts of children upon
international collaboration and travel abroad (a brief coding sketch follows
the list):
• No children living at home;
• Children under 2 living at home;
• Children aged 2-5 living at home;
• Children aged 6-11 living at home;
• Children aged 12-18 living at home;
• Children 19 and older living at home;
• Any children of any age living at home.
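To illustrate how these non-mutually-exclusive indicators behave, the brief sketch below (in Python, with invented column names and toy data) flags a respondent who has, say, a 4-year-old and a 9-year-old in both the 2-5 and 6-11 categories, which is why the child categories in the tables that follow need not sum to 100 percent.

    # Hypothetical illustration of non-mutually-exclusive child indicators.
    import pandas as pd

    # One row per respondent; child_ages lists the ages of children at home.
    df = pd.DataFrame({"id": [1, 2, 3],
                       "child_ages": [[], [4, 9], [17]]})

    bands = {"child_under2": (0, 1), "child_2_5": (2, 5), "child_6_11": (6, 11),
             "child_12_18": (12, 18), "child_19_plus": (19, 120)}

    for name, (lo, hi) in bands.items():
        df[name] = df["child_ages"].apply(
            lambda ages, lo=lo, hi=hi: int(any(lo <= a <= hi for a in ages)))

    df["any_child"] = (df[list(bands)].sum(axis=1) > 0).astype(int)
    df["no_child"] = 1 - df["any_child"]
    # Respondent 2 is flagged in both child_2_5 and child_6_11.
    print(df)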
The literature to date suggests that the presence of children, especially young
children, prevents women from accepting positions for which travel is
necessary. Therefore, the total number of children was not computed, nor
was a new variable formed by crossing marital status with the children variables,
which is sometimes done in the literature on the impact of family status on
occupational outcomes. Hence, the analyses presented here are somewhat
simplified in terms of the potential impacts that having children of various ages
may have on international collaboration and travel abroad.
Individuals’ geographic origin can also play an important role in the
likelihood of engaging in international collaboration and/or traveling abroad
associated with collaborations. Two measures were used to capture possible
impacts associated with prior international experience: citizenship and holding a
bachelor’s degree from a non-U.S. institution. Citizenship was coded using the
four-category variable available in the SDR:
• U.S. citizen (native born);
• U.S. citizen (naturalized);
• Non-U.S. citizen, permanent resident;
• Non-U.S. citizen, temporary resident.
The U.S. citizen, naturalized category is the group that poses the most theoretical
challenges when studying how citizenship impacts educational and occupational
outcomes because of the heterogeneity of this group. That is, it includes
individuals who arrived in the United States as children and, fundamentally, were
raised within the U.S. system, as well as individuals who arrived quite a bit later
after having gone through other educational systems. The social forces associated
with each of these groups, obviously, are quite different but age of arrival in the
United States is not a variable available in the SDR. Therefore, we also used
another commonly used variable as a proxy for age of arrival: the world region in
which the individual received her or his bachelor's degree. The regions were:
• United States;
• Americas (rest of North America, Central and South America and the
Caribbean);
• Europe;
• Asia;
• Africa; and
• Oceania (Australia, New Zealand and other Pacific Islands).
Origin measures are intended to account for the possible impacts of having prior
international networks, mentors, etc., that could be associated with completing a
bachelor's degree in a country other than the United States, which has been shown
to positively affect the likelihood of international collaboration, especially among
women. Citizenship may also have a specific effect beyond that associated with
the area of an individual's bachelor's degree: individuals who have already
experienced living in an international setting, that is, those who are non-native-
born U.S. citizens, may be more comfortable and predisposed to engage in
research opportunities that necessitate travel outside the United States than those
who do not have this experience (i.e., U.S. citizens by birth).
Rank and tenure status are also likely to impact international collaboration
and travel abroad. The relevant SDR variables (rank, with eight categories, and
tenure status, with five) produce a 40-cell matrix, but these were reduced to the
following categories (a recoding sketch is given after the list):
• Tenured full professor;
• Tenured associate professor;
• Tenure track senior faculty (associate or full);
• Assistant professor (includes tenured and untenured tenure-track junior
faculty in assistant professor, lecturer, and instructor positions);
• Non-tenure track (all ranks of faculty not tenured and not on a tenure
track).
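As an illustration of this reduction, the sketch below shows one way the collapse could be coded in Python; the rank and tenure labels and the toy data are hypothetical stand-ins rather than the actual SDR category values, and the published recode was performed in SPSS.

    # Hypothetical sketch of the rank x tenure collapse into five categories.
    import pandas as pd

    def rank_tenure_group(rank: str, tenure: str) -> str:
        """Map a (rank, tenure) pair to the five categories used in the paper."""
        junior = {"Assistant professor", "Lecturer", "Instructor"}
        senior = {"Full professor", "Associate professor"}
        if tenure == "Tenured" and rank == "Full professor":
            return "Tenured full professor"
        if tenure == "Tenured" and rank == "Associate professor":
            return "Tenured associate professor"
        if tenure == "On tenure track" and rank in senior:
            return "Tenure-track senior faculty"
        if rank in junior and tenure in {"Tenured", "On tenure track"}:
            return "Assistant professor (junior faculty)"
        return "Non-tenure track"

    df = pd.DataFrame({
        "rank":   ["Full professor", "Associate professor", "Lecturer"],
        "tenure": ["Tenured", "On tenure track", "Not on tenure track"],
    })
    df["rank_tenure"] = [rank_tenure_group(r, t)
                         for r, t in zip(df["rank"], df["tenure"])]
    print(df)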
The literature indicates that an international reputation is becoming more
important for faculty in terms of advancement to the full professor rank but may
not be as critical for those who are working towards tenure and promotion to the
associate professor rank.
Results
Table 1 shows the overall sex distribution within each of the dependent and
independent variables used in our analyses. On most of the key demographics,
there were few differences between women and men. The largest differences are
in the way women and men are distributed across the categories of rank/tenure
status and field. Nearly half of the men but just over one-in-five women are in
tenured full professor positions, with women much more likely than men to be in
non-tenure track positions. A far higher proportion of women are in the social and
behavioral sciences than men, while men are much more likely than women to be
in the computer and mathematical sciences, engineering, and physical sciences.
Indeed, more than 80 percent of women are in the life and social and behavioral
sciences, while men are less concentrated.
On the family status measures, women and men were somewhat different.
Women were less likely than men to be married and those who were married or in
a marriage-like relationship were far more likely than men to report that their
spouse works full time, consistent with the research literature in this area.
Whereas 81 percent of married/partnered women reported that their spouse works
a full-time job, just under half of married/partnered men reported this. Nearly one-
in-three married/partnered men reported that their spouse was not employed.
Race/ethnicity, Sex and Discipline
To what extent does the likelihood of collaborating internationally vary by
race/ethnicity and sex? How does field affect international collaborations? Table 2
shows that when we control for both field and race/ethnicity, there were fewer
substantial sex differences in collaboration among U.S. academics than when we
did not control for these variables. That is, here we limit ourselves to the
group in the first two bars shown in Figure 1 and then we drill down into
disciplines while simultaneously controlling for race/ethnicity, suggesting that the
explanations for the gender gap noted in Figure 1 are complex.
Table 1. Dependent and Independent Variables by Sex: U.S. Doctoral-Degreed Recipients in
Engineering and Science Employed at U.S. Educational Institutions in STEM Fields, 2006
Source: Author's analysis of National Science Foundation, Survey of Doctorate
Recipients restricted-use data file (2006). The use of NSF data does not imply
NSF approval of the research, research methods or conclusions.
Table 2 shifts attention to the way that race/ethnicity and sex together
impact international collaboration. Within various ethnic groups, there are some
important sex differences. Among underrepresented minorities (URMs), men
were more likely than women to collaborate in the engineering and computer and
mathematical sciences, while women were more likely than men to indicate that
they were involved in international collaborations in the physical and related
sciences. For Asian Americans, men were more likely than women to report
collaborating in the life and related sciences and engineering. For whites, though,
there is a substantial sex difference in international collaboration among those in
the physical and related sciences with 31 percent of white males but just 21
percent of white females reportedly involved in international collaboration. The
sex difference in engineering, on the order of a 10 percentage point gap in international
collaboration between men and women, is not evident for whites.
Table 2. Percent Reporting International Collaboration and Travel Associated with Collaboration by Broad Field, Sex,
and Ethnic Category, U.S. Doctoral-Degreed Academics, 2006
Notes: (1) URM = underrepresented minority; includes American Indians/Alaska Natives, African Americans, Hispanics, Native
Hawaiians and Other Pacific Islanders, and Multiple Race. (2) The Travel question was contingent on the Collaboration question; that is,
of those who indicated that they worked with people in other countries, shown here is the percentage who said that they traveled to
another country. (3) Grey cells indicate those on which the difference of proportions was greater than .05 (i.e., a 5 percentage point gap
between men and women), which is interpreted as a meaningful association. Chi-square tests were significant for all except a handful of
sex differences at the alpha = 0.01 level. The non-significant results were as follows: Collaboration: URMs in Life and related sciences
and Whites in Engineering. Travel: Asians in Social and related sciences and URMs in Life and related sciences, Physical and related
sciences, and Social and related sciences.
Source: Author's analysis of National Science Foundation, Survey of Doctorate Recipients restricted-use data file (2006). The use of
NSF data does not imply NSF approval of the research, research methods or conclusions.
Among those who reported that they engaged in an international
collaboration, one of the key follow-up questions related to whether or not the
respondent traveled abroad. Whereas many of the sex differences on the general
variable measuring international collaboration were rather small, the sex
differences in the percentage of respondents who reported that they traveled
abroad are generally larger, as shown in Table 2. For example, Asian males in
three of the five disciplines — computer and mathematical sciences, physical and
related sciences, and engineering — were much more likely than Asian females to
report that they traveled abroad for their international collaborative work. URM
males in computer and mathematical sciences were also substantially more likely
than females in this same ethnic category to indicate that they traveled abroad.
Engineering is interesting because even though URM females were less likely to
indicate that they were involved in international collaboration than URM males,
those who were involved in these collaborations were quite a bit more likely than
men to indicate that they traveled abroad for the collaboration. Likewise, white
females in engineering, who reported a similar chance of being involved in an
international collaboration as white male engineers, were also substantially more
likely than men to report that they traveled abroad. Indeed, three-fourths of
women engineers who were either URM or white and were involved in
international collaboration traveled abroad - some of the highest rates shown in
Table 2.
Rank and Tenure Status
Tenure status and rank had a substantial impact upon rates and the sex gap
in international collaboration (Figure 3). Those who were at the top ranks of
academia reported the highest rates of international collaboration within their
respective sex group: 36 percent of male and 29 percent of female full professors
were involved in international collaborations. This sex gap was replicated for
those who were untenured but in senior-level ranks. It is noteworthy, though, that
among associate (tenured) and assistant professors (tenured and untenured) there
was no sex gap in international collaboration. About one-in-four assistant
professors and just a little more than one-in-four associate professors of both
sexes indicated that they were involved in international collaboration. These
findings suggest that there may be important cohort effects that impact the overall
sex gap in international collaboration.
Figure 3. International Collaboration of U.S. Doctoral-Degreed Academics by Rank and Tenure
Status and Sex, 2006. [Bar chart of percentages for men and women in each of five categories:
tenured full, tenured associate, untenured full and associate, assistant, and non-tenure track.]
Note: Actual percent for women, tenured, associate professors was 27.3%; men, 26.6%. Not statistically
different. * indicates statistical significance at the alpha = 0.01 level.
Source: Author's weighted analysis of National Science Foundation Survey of Doctorate Recipients,
restricted-use file. The use of NSF data does not imply NSF approval of the research, research methods or
conclusions.
Family Status
Marital status had a stronger impact on international collaboration for men
than for women, although the data reported in Table 3 most likely conflates age
effects with marital status effects and the impact of children. That is, those who
had never been married are likely, also, to have a younger average age than those
who report any of the other marital statuses and, indeed, there is a rather narrow
gender gap in international collaboration among those faculty members. Married
men were the most likely group to report international collaboration (29 percent)
and the sex gap in international collaboration was widest among those who
reported that they were married or in a marriage-like relationship. In the literature
on the sex gap in pay in corporate settings, this has come to be referred to as the
“marriage bonus” for men. While marital relationships have undergone
important changes in the past 30 years, it is still the case that men are more likely
to reap a range of health and personal service benefits from marriage in contrast to
single men and married women.
Table 3 also shows the likelihood that respondents traveled abroad: among
those who reported an international collaboration, single men and women, whether
previously married or not, were more likely than married men and women to indicate
that they traveled
abroad for the collaboration. The least likely group to report travel abroad was
married women (at 44 percent), while the most likely group was widowed,
separated or divorced women (57 percent). It seems that marital status has a
greater impact on women’s likelihood of traveling than on men’s.
Spouse's employment status had little impact on the likelihood of
international collaboration for either men or women, or on women's likelihood of
traveling abroad for collaborations. Men who had an employed spouse (either full or
part time) were less likely than those whose spouse did not work in the paid
labor force to travel for international collaborations.
Table 3. International Collaboration and Travel Associated with Collaboration of U.S. Doctoral-
Degreed Academic Scientists and Engineers, by Family Status, 2006
Notes: The overall N (weighted) is similar for the children variables as for marital status, but
since these represent seven different variables rather than categories associated with
one variable (as do marital status and spouse's employment status), answers are
not mutually exclusive. * indicates significance at alpha = 0.01.
Source: Author's analysis of National Science Foundation, Survey of Doctorate
Recipients restricted-use data file (2006). The use of NSF data does not imply NSF
approval of the research, research methods or conclusions.
The presence of children had implications for international collaboration
and, especially, travel associated with these collaborations. Childless women were
only ever-so-slightly more likely than women with children to report that they
collaborated internationally (22 percent for childless women and 20 percent for
those with a child of any age), suggesting that the presence of children, per se,
was not a specific deterrent to engaging in an international collaboration.
Likewise, men’s reported participation in international collaborations varied little
for men with versus those without children. The sex gap, though, is quite clear
regardless of children’s ages. That is, the sex gap is smallest among those without
children and is widest for those who report that they have at least one elementary-
school aged child living in their home.
Women with children were less likely than men with children to travel as
shown in Table 3: 42 percent of women and 49 percent of men with children
currently living in their home indicated that they traveled abroad. Ironically,
though, when specifically examining the likelihood of traveling abroad, the
findings are rather different. At the most macro level, indeed, men were more
likely than women to report that they traveled abroad regardless of whether they
had children and regardless of children’s ages. The sex gap in traveling abroad
was smaller for men and women with 6-11 year olds at home than for any other
group. Women who had children aged 2-5 years (too big to carry and of pre-
school age) were least likely to travel. Childless women were about as likely as
those with children under 2 years or between the ages of 12 and 18 to travel
abroad. For both men and women, though, those with children 19 years or older
living at home were the most likely to travel abroad; again, this finding is
likely associated with an age effect.
International Origins: Citizenship and Bachelor's Degree Region
As shown in Table 4, women’s citizenship status had minimal impact on
the percentage of women who indicated that they were involved in international
collaboration, ranging from 20 percent among native-born U.S. citizens to 24
percent among naturalized U.S. citizens. For men, though, citizenship had a larger
impact on international collaboration. Just one-in-five men who were temporary
residents reported an international collaboration but 32 percent of men who were
naturalized U.S. citizens reported they were collaborating internationally. With the
exception of non-U.S. temporary residents, men were much more likely than
women within the same citizenship status to report that they were involved in an
international collaboration.
Table 4. International Collaboration and Travel Associated with Collaboration of U.S. Doctoral-
Degreed Academic Scientists and Engineers, by Origin, 2006
Source: Author's analysis of National Science Foundation, Survey of Doctorate
Recipients restricted-use data file (2006). The use of NSF data does not imply NSF
approval of the research, research methods or conclusions. Note: * indicates significant
sex difference at alpha = 0.01.
Further, while there were few differences in women’s likelihood of
engaging in international collaboration based on citizenship status, there were
broad differences in women’s reporting that they traveled internationally. The sex
gaps in the likelihood of traveling abroad were generally smaller than those
shown for international collaboration. The most likely group to report traveling
abroad for an international collaboration were U.S. naturalized men (66 percent),
which was true for women too - i.e. naturalized U.S. women were most likely
among women to indicate that they had traveled abroad for international
collaboration (56 percent). The least likely groups were non-U.S. temporary
resident men (45 percent) and U.S. native-born women (44 percent).
European-origin men and those from Oceania were most likely to report
that they engaged in international collaboration. Within each of the regional
groups based on bachelor’s degree origin, men were more likely than women to
report that they collaborated internationally. Women who had earned their
bachelor’s degrees in the Americas (35 percent) followed by Oceania (33 percent)
and Europe (30 percent) were the most likely to report that they engaged in
international collaboration. Women who earned their bachelor’s degree in the
United States were among the least likely to engage in international collaboration.
Men and women who had earned bachelor’s degrees from institutions in Africa or
Asia were among the least likely to report international collaborations. To some
extent this finding may be related to the earlier stage of the tertiary education
systems in nations on those continents, particularly the largest (in terms of
population) nations of China and India. For example, interviewees and findings
from workshops with international collaboration participants noted that an
individual’s connections to professors at their foreign undergraduate institution
sometimes formed the basis for an international collaboration among those who
held degrees from European universities. Therefore, in the future it is possible that
those who hold bachelor’s degrees from institutions in Africa and Asia may also
reap international collaboration connections from these past associations.
The sex gap among those who reported that they traveled abroad for an
international collaboration varied greatly based on the bachelor’s degree region.
Among those who received a bachelor’s degree from a European, U.S., or African
institution, there was a negligible difference in the percentage of men and women
who reported that they traveled abroad for international collaboration. The gap
was much wider for scholars who received their bachelor's degree from an
institution in Asia (a 12 percentage point gap) or the Americas (a 16 percentage
point gap). Men who had earned bachelor's degrees from institutions in the Americas
were the most likely to report that they traveled abroad to participate in an
international collaboration (68 percent), while women who had been trained in
Africa were the most likely among women to report that they traveled abroad for an
international collaboration (63 percent).
Conclusions
The likelihood of engaging in an international collaboration and of
traveling abroad for collaboration differs along many dimensions: discipline,
race/ethnicity, sex, family status, and citizenship. In general, the key differences
are noted on the first of these two variables, international collaboration, with
smaller gaps on the second variable, i.e., travel abroad. It should be remembered
that the second variable was contingent on the first, so that when controlling for
whether or not a person collaborates internationally, the likelihood of traveling
abroad is less dependent upon a host of independent variables.
It is important to note that holders of doctoral degrees from U.S. colleges
and universities who were employed in academia were less likely than those
employed in business/industry or government to engage in international
collaborative research. In today's shrinking world, in which students are
increasingly becoming involved in global issues, faculty involvement in
international work, in general, should be a matter of public concern.
Our findings show that the relationship between sex and international
collaboration is quite complicated and, to some extent, affected by the interaction
of sex with variables such as field, rank, and tenure status. Women's
concentration in the life, social and behavioral sciences suggests a need to further
examine the subfields within these disciplines as possibly shaping women’s
likelihood of engaging in international collaboration. Finkelstein et al. (2009), for
example, showed that “internationalization” for faculty members in humanities
and social sciences tended to involve incorporation of global content in the
classroom, while that of STEM field faculty involved research with international
colleagues. The SDR data, however, were inadequate to tease out these
differences, since the survey question implemented a broad and general definition
of international collaboration.
It is notable that 43 percent of the men in the five science and engineering
fields examined who were employed in academic settings were full professors,
compared with just 22 percent of the women, and that women showed a greater
likelihood of being in non-tenure track positions. Among tenured and
tenure track faculty at the assistant and associate levels, there was no discernible
sex gap in international collaboration but the gap at the full, tenured level was
quite large. On the one hand, this suggests a cohort effect, possibly due to a
mechanism such as cumulative disadvantage. On the other hand, it may indicate
that caution needs to be exercised in monitoring faculty members as they advance
to ensure that women and men have similar opportunities to engage in
international collaborations.
The analyses conducted here are best described as exploratory, suggesting
that subsequent multivariate analysis using logistic regression would likely be a
fruitful analytical strategy to tease out the factors that are most salient. Being able
to examine how marital status, presence of children of various ages, field, and
rank/tenure status have differential effects for women’s and men’s international
collaboration will be important in developing strategies for encouraging all
faculty to participate.
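As a pointer for that follow-up work, the sketch below shows the kind of weighted logistic regression suggested here, in Python with hypothetical column names; it is not an analysis that was actually run, and the restricted-use SDR file is not publicly available.

    # A sketch of the suggested multivariate follow-up; every column name is a
    # hypothetical stand-in for a variable on the restricted-use SDR file.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("sdr_2006_extract.csv")

    # Dummy-code the categorical predictors and model the log-odds of reporting
    # an international collaboration, treating the SDR sampling weights as
    # frequency weights. Frequency weights of this size understate the standard
    # errors; proper variance estimation needs the survey's design information.
    X = pd.get_dummies(df[["sex", "field", "rank_tenure", "marital_status"]],
                       drop_first=True)
    X["child_under6"] = df["child_under6"]
    X = sm.add_constant(X.astype(float))

    model = sm.GLM(df["intl_collab"], X,
                   family=sm.families.Binomial(),
                   freq_weights=df["weight"].to_numpy()).fit()
    print(model.summary())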
Endnotes/References
1. Peters, Michael A. 2006. "The rise of global science and the emerging political economy of
international research collaborations." European Journal of Education 41(2): 225-244.
2. National Science Board. 2010. Science and Engineering Indicators 2010. Arlington, VA:
National Science Foundation (NSB 10-01).
3. Benokraitis, Nijole V. and Joe R. Feagin. 1995. Modern Sexism: Blatant, Subtle and Covert
Discrimination, 2nd Edition. (Upper Saddle River, NJ: Prentice Hall). The first edition
appeared in 1986.
4. Adler, Nancy J. 1987. "Pacific Basin Managers: A Gaijin, Not a Woman." Human Resource
Management 26(2): 169-191.
5. Ibid.
6. Altman, Yochanan and Susan Shortland. 2008. "Women and International Assignments: Taking
Stock — A 25-Year Review." Human Resource Management 47(2): 199-215.
7. Napier, Nancy K. and Sully Taylor. 2002. "Experiences of Women Professionals Abroad:
Comparisons across Japan, China and Turkey." International Journal of Human Resource
Management 13(5): 837-851.
8. While there are important disciplinary differences in these findings, more details about these can
be viewed elsewhere. Frehill, Lisa M. and Kathrin Zippel. "Appendix A. International
Collaboration by Employment Sector and Sex, 2006 for Selected Science and Engineering
Disciplines," available online at
http://nuweb.neu.edu/zippel/nsf-workshop/docs/SDR_Oct22_2010.pdf
9. Melkers, Julia and Agrita Kiopa. 2010. "The Social Capital of Global Ties in Science: The
Added Value of International Collaboration." Review of Policy Research: 389-414.
10. See http://www.nsf.gov/statistics/srvydoctoratework/ for details.
11. There are no standards for specifying what constitutes a "meaningful" gap. The difference of
proportions is common to use with 2x2 contingency tables of the type constructed for many
of the analyses in this article. As this measure approaches 1 (or 100 percent when represented
in that way) there is evidence for the assertion that the association is strong; when the
difference of proportions is closer to 0, the association between the two variables is said to be
weak. Nielsen, Joyce McCarl. 1990. Sex and Gender in Society: Perspectives on
Stratification, 2nd Edition. (Long Grove, IL: Waveland Press). Agresti, Alan and Barbara
Finlay. 1997. Statistical Methods for the Social Sciences, 3rd Edition. (Upper Saddle River,
NJ: Prentice Hall).
12. There were an additional six items beyond those described here. According to the National
Science Foundation, the data from these items were not released due to data quality problems.
13. Smykla, Emily and Kathrin Zippel. 2010. "Literature Review: Gender and International
Research Collaboration." Available online at http://nuweb.neu.edu/zippel/nsf-workshop/.
14. Frehill, Lisa M. 2008. "Executive Summary" in Professional Women and Minorities: A Total
Human Resources Data Compendium. (Washington, DC: Commission on Professionals in
Science and Technology).
15. George, Yolanda, Shirley Malcom, and Lisa M. Frehill. 2009. "Evaluation of the AAAS
Women's International Scientific Cooperation (WISC) Program." (Washington, DC:
American Association for the Advancement of Science).
16. Hertz, Rosanna. 1986. More Equal than Others: Women and Men in Dual-Career Marriages.
(Berkeley, CA: University of California).
17. Ledin, Anna, Lutz Bornmann, Frank Gannon and Gerlind Wallon. 2007. "A Persistent Problem:
Traditional Gender Roles Hold Back Female Scientists." EMBO Reports 8(11): 982-987.
18. Shauman, Kimberlee A. and Yu Xie. 1996. "Geographic Mobility of Scientists: Sex Differences
and Family Constraints." Demography 33(4): 455-468.
19. Suitor, J. Jill, Dorothy Mecom, and Ilana S. Feld. 2001. "Gender, Household Labor, and
Scholarly Productivity among University Professors." Gender Issues 19(4): 50-67.
20. Gustafsson, Per. 2006. "Work-Related Travel, Gender, and Family Obligations." Work,
Employment and Society 20(3): 513-530.
21. Op. cit., Shauman and Xie, 1996.
22. Tharenou, Phyllis. 2008. "Disruptive Decisions to Leave Home: Gender and Family
Differences in Expatriation Choices." Organizational Behavior and Human Decision
Processes 105: 183-200.
23. In some studies, location of high school completion is taken as a good proxy for whether the
individual would be considered more a product of the U.S. versus some other educational and
cultural system. Here, though, we lacked that measure. Since it is still relatively uncommon
for non-U.S. students to travel to the United States to earn a bachelor's degree, this may be a
valid demarcation point for defining those who are more culturally U.S. versus non-U.S.
24. Op. cit., George et al. 2009.
25. Finkelstein, Martin J., Elaine Walker, and Rong Chen. 2009. "The Internationalization of the
American Faculty: Where Are We? What Drives or Deters Us?" Conference presentation: The
Changing Academic Profession over 1992-2007: International, Comparative, and Quantitative
Perspectives, organized by the Research Institute for Higher Education, Hiroshima University
and the Research Institute for Higher Education, Hijiyama University.
http://en.rihe.hiroshima-u.ac.jp/pl_default_2.php?bid=100132.
26. Op. cit., George et al. 2009.
27. There were four of the 30 cells in the ethnicity by sex by discipline matrix for the percentage of
individuals who had reported an international collaboration in which there were fewer than
1,000 cases. Three of these were for URM women: computer and mathematical sciences
(326), physical sciences (585) and engineering (331). The fourth cell was Asian American
women in computer and mathematical sciences; just 995 individuals in this cell had
collaborated internationally.
28. Budig, Michelle J. and Paula England. 2001. "The Wage Penalty for Motherhood." American
Sociological Review 66: 204-225.
29. Chiodo, Abigail J. and Michael T. Owyang. 2002. "For Love or Money?: Why Married Men
Make More." The Regional Economist. St. Louis, MO: Federal Reserve Bank. April 2002.
30. Op. cit., George et al. 2009.
31. Hogan, A., K. Zippel, and L. Frehill. 2010. "Report: International Workshop on International
Research Collaboration." (Boston, MA: Northeastern University), accessed online at
http://nuweb.neu.edu/zippel/nsf-workshop/.
32. Valian, Virginia. 1998. Why So Slow? The Advancement of Women. (Cambridge, MA: MIT
Press).
SCIENCE IS MURDER
Washington Academy of Sciences
December 21, 2010
Minutes
Ron Hietala
Philosophical Society
It was a cold night in a city that knows it can’t keep a secret. A crowd of
scientists schmoozed around a marbled lobby in a downtown office
building, talking quietly and eating hors d'oeuvres.
At about 7 pm, they wandered into a conference room for the occasion of
the second “Science is Murder” program of the Washington Academy of
Sciences, December 21, 2010.
Academy President Mark Holland welcomed everyone. He made a pitch
for membership in the Academy and extolled the virtues of the Journal of
the Washington Academy of Sciences. He expressed appreciation for
donations of refreshments from Barrel Oak Winery and the Martarella
Winery. (Do we see a pattern here?)
President Holland turned the microphone over to Kathy Harig, owner of
the Oxford, Maryland bookstore “Mystery Loves Company,” to moderate
the immoderately mysterious panel. Ms. Harig introduced the panelists,
four accomplished authors of mysteries involving science: Lawrence
Goldstone, Ellen Crosby, Louis Bayard, and Dana Cameron. While most
mysteries involve science, these authors’ works do so in more meaningful
ways.
Lawrence Goldstone's books are usually historical (no surprise, since he
holds a PhD in American constitutional history). They include Out of the
Flames, The Friar and the Cipher, and Anatomy of Deception. His latest
book, The Astronomer, is set in Paris among the heretic-burning of the
1500s. In it, a young man is pressed into the service of the Inquisition,
where he acquires doubts about the wisdom and justice of what is going on
as he pursues his investigations and learns of scientific discoveries.
Ellen Crosby is a freelance journalist. Her mysteries are all set in the
Virginia wine country, where she lives. Some of the attendees had been
drinking the products of those very vineyards earlier. She has five books in
her wine country series. Their alliterative titles include The Viognier
Vendetta, The Riesling Retribution, and The Bordeaux Betrayal.
Louis Bayard has written three historical thrillers: The Black Tower, The
Pale Blue Eye, and Mr. Timothy. He has earned high praise from The New
York Times and The Washington Post. The Post placed him “in the upper
reaches of the historical thriller league” and the Times placed him on its
Notable Authors list for 2009. The central character of his fiction, Eugene
Francois Vidocq, was an actual detective in the 1800s, at the beginning of
scientific forensic analysis.
Dana Cameron was the only scientist on the panel. (It’s unusual for an
archeologist to be referred to as the scientist in the room, Ms. Cameron
volunteered.) She writes the Emma Fielding archeological mysteries. She
won the 2007 Anthony award for best paperback original and the 2008
Agatha award for best short story. The Emma Fielding character of her
books asks questions similar to the ones Ms. Cameron does in her
archeology.
Ms. Harig asked the panelists: “None of your novels are weighty, scientific
tomes. How do you balance the science and the mystery aspects when
your readers might be new to the science you discuss?”
Cameron: There are common threads. Many of the procedures and
concerns are the same. Detectives, like scientists, try to maximize the
usefulness of the data. To speak to the uninitiated, I often have a new
graduate student, a kid who wanders up to the project, or a reporter on the
site. By explaining matters to such characters, I can get the readers
educated without lecturing to them.
Bayard: For me, it is good that I write historical mysteries, because it is a
subtractive process. You have to go back and figure out all the things that
people didn’t know. And they are many. Black Tower is set in 1818 Paris.
They knew nothing of DNA or even bacteriology. But Vidocq, the hero,
was in fact the father of modern criminology. He was the first to use
ballistics and plaster of Paris imprints. He was the first to recognize the
implications of fingerprints. It is exciting to me that, with all the
differences and advantages current detectives have, the spirit of Vidocq
lives on.
Crosby: I’m a journalist. When I started to write mysteries set in the wine
country, the only thing I knew was that I liked to drink wine. I did what
any journalist does; I asked a lot of questions. I like explaining something
nobody understands and making it interesting and fun. My neighbor,
Donna Andrews (one of last year’s panelists) has a concept she calls the
“info dump.” You can have enormous gobs of information about
something like, say, how to spray for powdery mildew. Too many details
like that can quickly suffocate the plot. You've got to weave the
winemaking in. I talk to people. I talk to the winemaker. I write what I
learn from them and use their words as my characters’ words. In the right
amounts and the right context, it can make the story more interesting.
Goldstone: I’m pleased to be here, because I get to say something every
writer dreams of saying (pause) - “I’d like to thank the Academy.”
There is a balance. For our kind of audience, readers will stay with you as
long as you don’t “dull them out.” It might be more of a challenge for a
different kind of audience, but all of us write for people whose interests go
beyond their own disciplines. As long as the technical aspects can be
woven into the plot and spoken from interesting characters, the interest of
the audience will hold. Even with historical characters, you can put words
in their mouths. In Astronomer, Rabelais showed up. He was, historically,
such an outrageous character that it’s hard to imagine something he would
not have said. I don’t find it such a difficult balance, and I get as many
comments favoring the informative parts as the entertaining plots.
Harig: Dana, you give us insight into the working life of an archeologist
in your Emma Fielding books. Are you aiming for accuracy, suspense, or
both? How would you describe her character and her involvement with
crime?
Cameron: Both. One of my goals was to depict archeology in realistic
fashion. I wanted to let people know archeology happens everywhere, not
just in Greece and Egypt. I’d read a lot of books where archeologists were
funky, unrealistic, sometimes unsavory, adventurers - Indiana Jones types.
I wanted to let them know we are just mild mannered, professorial types.
Archeology is suspenseful. Often the surprise is on the last day of the dig. It is
a proven fact. It will be when it’s raining, when half the crew has already
gone home, when you have no more money and no more time.
Emma and her relationship with crime, that’s a good question. She gets
involved with a person who’s there. She realizes she has the ability to take
clues from the past and reconstruct what went on. She feels she should,
because she can, and it involves someone close to her. She gets engaged
and embroiled, to the point where, by book six, she is thinking about
whether she should continue teaching archeology or take up forensics for
real.
Harig: What are your favorite digs that you’ve been on, and how do they
find their ways into your books?
Cameron: All of them! I’ve worked on fabulous sites. The one that started
me writing was an English fort site on the coast of Maine, roughly
contemporary with Jamestown. It lasted only a year, 1607 - 1608. It was a
perfect time capsule. My boss and I were surveying the site, and a guy
came out with a gun, to steal artifacts. There were no valuable artifacts; it
was a collection of broken household trash. Most of my books are set in
New England, because of that experience. They reflect the historic houses
I’ve worked on there.
I’ve also worked at the British Museum and I was a Fellow at the
Winterthur Library and at the Peabody Essex Library. Behind the scenes
there is a treasure of good artifacts. I was writing my fourth book when I
should have been writing a monograph. My books do reflect the research
sites, but I put better artifacts in the books. It’s more fun that way.
Harig: Ellen, your character comes to Virginia from France. Her father
died and she comes to take over the winery. What suggested that to you?
Crosby: Well, first, we should be glad I wrote about a vineyard. If it had
been a dairy, we would be drinking milk tonight.
How did I get into writing about vineyards? I was posted to London in the
1990s. There I wrote a book about Moscow. On a holiday back here, we
had a friend who, when he came to dinner, always brought a Virginia wine
over. My husband, who is French, would always look at the bottle and
wonder, should we use this for the vinaigrette or what? Then one day, the
friend rented a van and took us, the whole family, on a tour of Virginia
wineries.
Back in London, my publisher asked, “What did you do on holiday?” I
told her about the terrific weekend we’d had, and she said, “That would be
a great setting for a book.” She pushed pretty hard, and I thought, “I will
write one.”
I got out a map of Virginia and found the nearest vineyard to my house,
and that set my next book at the Swedenburg Vineyard in Middleburg.
And that’s the very scientific explanation of how my books got located in
Virginia wine country.
Harig: But you met a very interesting person there.
Crosby: I did. Juanita Swedenburg welcomed me and taught me
everything she knew about growing grapes and winemaking. She had been
horrified to discover it was against the law to ship wine to New York. She
had a friend, a local lawyer who sued New York. New York sued back. It
became a constitutional issue; it went back to the states being able to
regulate alcohol. My husband came home from work one day and said,
“Juanita is in the Financial Times.” After the groundwork was laid, big-
gun lawyers came down from New York to take her out to lunch. They
wanted to take on the case. She said, “My lawyer was good enough for me
when you people didn’t know who I was, and he’s going with me to the
Supreme Court." He did, and they won. She died about a year after the case
was won.
Harig: Larry, what suggested your book to you? What research do you do
to bring your characters to the page?
Goldstone: Research? It’s fiction.
I’d written six books with my wife. Many of them had an element of the
tension of empiricism in conflict with theology or religion. There was a
common pattern, where empiricism confronted religious dogma, and
empiricism gradually won out, sometimes after considerable hardship.
The early 1500’s was a time of great purity in religious interpretation of
empirical matters. Pope Leo was confronted with the problem that the
calendar was off. Copernicus was called in about 1516; he told Pope Leo
that it might have something to do with the Sun. Pope Leo was interested,
but he was too busy building St. Peter's. He started indulgences to help
pay for it; that caused Martin Luther to post his 95 theses, and Leo quickly
lost his astronomy bug.
Copernicus went off to Poland and continued to work on his theory. He
knew he was in a delicate area. Thomas Aquinas had incorporated
Aristotle’s thinking into Catholicism. He made Earth the center of the
Universe because it seemed reasonable that, since man was the center of
God’s attention, man’s home should be the center of the Universe.
So I thought, okay, what happens if people, kind of, get wind of it? That’s
how I got the germ. Then I got to bring in all the interesting characters,
Servetus, Rabelais, and others. It's delicate, because you want the facts to
be consistent with history, but it’s fun because you get to invent facts and
dialogue also. It was great fodder for a story, if you can hold it together.
Harig: Lou, you’ve written a number of stories about science and
detective work in history. Now you have this character, Vidocq, who
invented forensics and applied scientific methods to his work. Did he, like
Copernicus, encounter skepticism about his methods, and was he always a
detective?
Bayard: Yes, he did encounter skepticism.
Your second question was very delicately phrased. Vidocq earlier was a
convict. He escaped from many French prisons. He worked his way back
to Paris; he was tired of running; he was being blackmailed by many of his
former compatriots including his ex-wife, and he volunteered to work for
the police as an informant. He went back to prison as a spy, and he was so
successful, he worked his way up the chain of command. Within a year or
two, crime in Paris was down. He founded the first private detective
agency, which was a model for Pinkertons. Even more controversial: he
staffed it with ex-cons, like himself. He was featured in works by Victor
Hugo and Honore de Balzac. Dickens and Melville alluded to him. Hugo
divided him in two; he was a rich enough character for two characters.
Without him, I'm not sure modern detective work or detective fiction
would be the same. I’m not sure we would have Sherlock Holmes.
Harig: When you are stuck, whom do you turn to?
Goldstone: On a technical matter, I turn to experts. In Anatomy of
Deception, I got a referral from my gastroenterologist, who referred me to
a man who had studied with one of my characters. With the 16th century,
you can’t do that. Mostly, the research is where I turn. Ptolemy and
Copernicus have been extensively translated into English.
Crosby: The short answer is, I turn to experts. I contact the Fairfax
County Police. Juanita was hard to get to; she did not use email; she did
not have an answering machine. I’m always hoping for one expert who
has a sense of whimsy who will tell you how to kill people. Once when I
tried that, I was lucky I did not get turned in to Homeland Security, before
the police found I was a writer.
Bayard: At the risk of coming off as lower class, I do a lot of research on
Google. I found an extensive history of Vidocq there, which I later learned
was about half incorrect. But, as a novelist, I feel free to make stuff up.
I got a question once from a copy editor who wanted the "scholarly
citations" regarding a French psychologist from the early 18th century. I
said, "There are none, because I made him up."
Readers go out of their way to tell us all the things we did wrong. I'm sure
we all have experiences with people who come to us with great details
about what we did wrong. I think of them as people with lots of cats. One
told me that I should know that poinsettias were not in English drawing
rooms in 1842. I did know, but I'm a whore for a good detail. I feel that, as
a historical novelist, I have a duty to err on the side of the story.
Cameron: I ran into a similar thing. I was asked to write a werewolf story
for Christmas. I thought, “Okay, to the reference books.” It took me a good
ten minutes to think, “Wait a minute, this is fiction!” There is an extensive
canon on werewolves and vampires, but you don’t have to read it.
I was asked by the Boston Noir editor Dennis Lehane if I’d like to
contribute a story. I said, “Oh my God, yes, please.” But I didn’t want to
sound like I was following a formula for writing noir. I set it in 1740 in
Boston, on the wharves. It had many of the conventions, such as an
embattled young woman with no one to protect her. I deliberately did not
read much about how to write noir. When I finished, I felt, “Okay, it’s
noir, but it’s my noir.” My academic training taught me to value accuracy.
It was a hurdle for me, then, to learn that, when I’m stuck, I can just make
something up.
Goldstone: The Amazon review is the bane of the modern writer. One
knocked me down two stars because I had the potato in Europe 20 years
before it happened. And it wasn’t like I made it a French fry.
Harig: Okay, final question: What are you working on now?
Cameron: I’m taking the idea that archeologists have traits and skills in
common, and updating it. I’m working on an espionage novel. Ellen
[Crosby] has helped me. It’s been a lot of fun, learning to be a spy. I’m
learning gunplay. Hopefully, it will go to a smart, savvy editor. Also, three
short stories, one about my Fangborn vampires and werewolves, and two
noir.
Bayard: I have a book coming out called The School of Night. It's set
partly in Elizabethan England and partly in modern Washington. The
School of Night was a group of Elizabethan scholars who were rumored to
dabble in dark arts. It included Christopher Marlowe and Walter Raleigh.
Actually, the hero of the book is a man named Thomas Harriot, who,
unfortunately, did not leave many papers. We are still trying to figure out
what he knew and when he knew it. He drew a picture of the moon before
Galileo. He knew of the law of refraction. He was encouraged to keep
quiet, like Galileo.
Then, the next book is about sainthood in the Catholic Church. It’s about
the whole business of confirming sainthood, which is an interesting,
complicated process. [Here, Mr. Goldstone quietly advised Mr. Bayard to
get an unlisted phone number.]
Crosby: I just turned in my sixth book in my wine country series. I just
got it back. I'll be doing revisions over Christmas. I'm not supposed to
talk about it yet. There are two more in the making.
I have two publishers, Scribners, for hardcover, and Pocket, for paperback.
They are both part of Simon and Schuster, but they are completely
different companies. The illustration Scribners put in the hardcover
catalog is an elegant thing, with a classy etching of F. Scott Fitzgerald,
done for his 100th birthday. The Pocket illustration is of a swinging chick
in a bikini on a surfboard in the Keys. I am very impressed that there can
be such radically different icons associated with these two presentations of
the same book.
Goldstone: I have a book coming out in February called Inherently
Unequal. It’s about the shameful record of the Supreme Court in civil
rights cases from 1865 to 1903. That’s my day job, the Constitution. I’m
finishing another thriller about when heroin was first marketed as a cough
medicine for children.
But what I am really working on is a book about my kid’s piano teacher.
One day, this woman, Vernona Gomez by name, showed up for a recital in
a Yankees hat. “What’s she doing in a Yankees hat?” I asked. Another
parent said, “Do you know who that is? That’s Lefty Gomez’s daughter.”
Years went by. One day, she called. “Can I come over?” “Yes.”
[She brought over her material. She was working on a book and she had
interviews with people who didn’t give interviews. Mr. Goldstone referred
her to his agent. Against his wife’s advice, he declined to get involved
himself, at that time. Six months later, the agent was burned out on the
project. Gomez’s son, a lawyer, had “completely obnoxed” the agent.
Goldstone called and begged the agent to do it. The agent said, “I’ll do it
if you’ll do it.” So Goldstone is doing it, with Ms. Gomez.]
It’s an unbelievable trove of material of an American Odyssey. Her father
was best friends with Babe Ruth. He was Joe DiMaggio’s roommate for
seven years. This guy grew up dirt poor. The day Castro marched into
Havana, Lefty was at Ernest Hemingway’s house with an invalid passport.
He’d been asked to go down there by John Foster Dulles after Nixon had
been spat on in South America. Hemingway sent him to the Hotel with
instructions to stay inside. The Castro people weren’t going to let
anybody out without a valid passport. They found out who he was and
Fidel Castro gave permission for him to leave.
Lefty, An American Odyssey, will be out in 2012.
Harig: That’s it for the formal program. Any questions from the audience?
Question 1: You all have such breadth. How do you find the time? Do
you all have day jobs?
Cameron: I just stay in my pajamas for an extra hour. I can’t do anything
else in pajamas, and I’m comfortable working that way, and I built on my
day job that way. I’m a full-time writer now. I’m a recovering
archeologist. You never get over that completely. Our vacations are all
about broken things (artifacts). It’s more fun when you can devote your
whole time to a project.
Bayard: Stealing time is almost the writer’s vocation. I used to write for
nonprofit groups and others. I knew I was becoming successful when I
found I could carve out two or three hours. Now, it is close to my day job.
Writers, all of us here, have so many pins in the air, we are quite busy.
And there is nothing more depressed than a writer between books.
Crosby: I worked as an economist on Capitol Hill. Then my husband got
posted to Switzerland. I thought I’d work, but couldn’t get a work permit.
We went to Moscow, and there I gravitated toward journalism, which is
hard to sell as a free-lance. Then I turned to nonfiction, which is better. It
is full time; I have a book a year. My husband says he had no idea our
lives were going to be like this. Fortunately, we don’t live on what I make.
He brings home the good paycheck.
Goldstone: I teach at a local community college. I write for my work.
Writing doesn’t pay terribly well. You get regular advances, and
sometimes you score, but usually not. It’s like the old garment center
joke - lose a little on every garment, but make it up in volume.
You have to need to write. If you don’t have the need, the business will
just chew you up.
[The writers agreed that they need to have more than one project going.
If one stalls, they can keep going on others.]
Question 2: How difficult is it to develop the conversation that goes on
between characters within the plot line?
Goldstone: My dialogue works best when I am a reporter and when I am
“watching.” With experience, you develop an ear and an eye. You “hear”
the characters; you “see” the scenes. When the characters sound tinny,
you know you are off. You feel it. You change it.
Bayard: There are perils in writing historical fiction. I started School of
Night in Elizabethan, and I found I hated it. It sounded too stilted. I had to
create a new style of dialogue that was more modern but with Elizabethan
style touches. That seems to work better.
The second peril is making your characters mouthpieces for your research.
You are tempted to teach the reader everything you learned about Grecian
sewers. If you sprinkle too many historical facts into the dialogue, the
characters don’t seem real.
Henry James said there was no such thing as a good historical novel. You
are forcing them to say things they would never say. But I think there is a
way of working it, and it does require an ear.
Crosby: I was in radio, and I read all my books out loud. I usually read
with my cat, who is pretty discerning and critical. I think that’s why all my
books are unabridged audio books. You catch all the little words that don’t
work.
Cameron: I do that, too. I read them to my husband. Historical facts?
You don’t want to put in all the tidbits. You have to develop a feel for
“just enough.” Paring it down between a dissertation and dialogue
between criminals, that takes some work.
Question 3: How do you deal with the perceptions and misperceptions of
your audience?
Crosby: The greatest one I deal with is the assumption that wine is not
made in Virginia. That actually has been kind of fun. I travel a lot, in
California, especially, they are surprised that people make wine in
Virginia. It is a pleasure to educate them.
Goldstone: In books on the 15th century, you don’t find much of that.
Few people know much about the 15th century. In the end, you trust your
research. You’ve done the reading; the critics are usually less well
informed. The overly particular criticisms are often from people who want
to justify not liking the book.
You need to accept that, no matter how good a book is, not everybody will
like it.
Bayard: Writing about Tiny Tim, I was writing about a character I
loathed. My purpose was to turn him into a character I liked. When I wrote
about Poe, it was about a period of his life few people know about, when
he was a cadet at West Point. There are many myths about Poe, for
example, that he was a drug addict. There is no evidence to support that.
I am reminded of the John Ford line, in “Liberty Valance”: when the myths
and the truth conflict, publish the myth. I guess we just create our own
new myths.
Cameron: It usually isn’t a problem with the archeologists. But I had a
character once who was an Army brat. A real Army brat thought I hit a
chord exactly wrong. I haven’t had people call me out on having vampires
cure people. I’ve cast vampires as sort of misunderstood superheroes. So
far, I'm getting away with it.
I got a nice note from the Massachusetts Office of Historic Preservation,
from the state archeologist. She was enthused about how I had presented
women in historical settings, running pubs, getting beaten by their
husbands. She had also deduced that must be true.
Archeologists often tell me they read my books because they are just like
their own lives. I say, “I’m so sorry!”
People often want my characters to be just like them. I find that very
funny.
Harig: Thanks to everybody for coming.
Peg Kay recalled one of the high points from the program of the previous
year: Donna Andrews doing a remarkably animated imitation of a Penguin
in Heat. Cameron remarked that she had been present at another penguin
exhibit when Andrews recalled that eponymous penguin, [vide Donna
Andrews’ The Penguin Who Knew Too Much]
MEMBERSHIP APPLICATION
Washington Academy of Sciences
1200 New York Avenue
6th floor
Washington, DC 20005
Please fill in the blanks and send your application to the Washington Academy of
Sciences at the address above. We will contact you as soon as your application has
been reviewed by the Membership Committee. Thank you for your interest in the
Washington Academy of Sciences.
(Dr. Mrs. Mr. Ms.)
Business Address
Home Address
Email
Please indicate your preferred mailing address
Business Home
Present Occupation or Professional Position
Please list memberships in scientific societies - include office held:
DELEGATES TO THE WASHINGTON ACADEMY OF SCIENCES
REPRESENTING AFFILIATED SCIENTIFIC SOCIETIES
Acoustical Society of America - Paul Arveson
American/International Association of Dental Research - J. Terrell Hoffeld
American Association of Physics Teachers, Chesapeake Section - Frank R. Haig, S.J.
American Fisheries Society - Ramona Schreiber
American Institute of Aeronautics and Astronautics - David W. Brandt
American Institute of Mining, Metallurgy & Exploration - Michael Greeley
American Meteorological Society - Kenneth Carey
American Nuclear Society - Steven Arndt
American Phytopathological Society - Kenneth L. Deahl
American Society for Cybernetics - Stuart Umpleby
American Society for Microbiology - VACANT
American Society of Civil Engineers - Kimberly Hughes
American Society of Mechanical Engineers - Daniel J. Vavrick
American Society of Plant Physiology - Mark Holland
Anthropological Society of Washington - Marilyn London
ASM International - Toni Marechaux
Association for Women in Science (AWIS) - Jodi Wesemann
Association for Computing Machinery - Kent Miller
Association for Science, Technology, and Innovation - F. Douglas Witherspoon
Association of Information Technology Professionals - Barbara Safranek
Biological Society of Washington - F. Christian Thompson
Botanical Society of Washington - Emanuela Appetiti
Chemical Society of Washington - Jim Zwolenik
District of Columbia Institute of Chemists - Jim Zwolenik
District of Columbia Psychology Association - David Williams
Eastern Sociological Society - Ronald W. Manderscheid
Electrochemical Society - Robert L. Ruedisueli
Entomological Society of Washington - F. Christian Thompson
Geological Society of Washington - Bob Schneider
Historical Society of Washington, DC - VACANT
Human Factors and Ergonomics Society - Michael Eidelkind
Institute of Electrical and Electronics Engineers, Wash. DC Section - Richard Hill
Institute of Electrical and Electronics Engineers, Northern Va. Section - Murty Polavarapu
Institute of Food Technologies - Isabel Walls
Institute of Industrial Engineers - Neal F. Schmeidler
Instrument Society of America - Hank Hegner
Marine Technology Society - Judith T. Krauthamer
Mathematical Association of America - Sharon K. Hauge
Medical Society of the District of Columbia - Duane Taylor
Volume 97
Number 2
Summer 2011
Journal of the
WASHINGTON
ACADEMY OF SCIENCES
Washington Academy of Sciences
Founded in 1898
Board of Managers
Elected Officers
President
Gerard Christman
President Elect
James Cole
Treasurer
Larry Millstein
Secretary
Terrell Erickson
Vice President, Administration
Jim Disbrow
Vice President, Membership
Sethanne Howard
Vice President, Junior Academy
Dick Davies
Vice President, Affiliated Societies
Victor Miriel
Members at Large
Denise Ingram
Michael Cohen
Paul Arveson
Frank Haig, S.J.
Neal Schmeidler
Catherine With
Past President Mark Holland
Affiliated Society Delegates
Shown on back cover
Editor of the Journal
Jacqueline Maffucci
Associate Editor
Sethanne Howard
Academy Office
Washington Academy of Sciences
6th Floor
1200 New York Ave. NW
Washington, DC 20005
Phone: 202/326-8975
Editor’s Comments
July 21, 2011 marked a bittersweet moment in NASA’s space
program; namely, the end of the shuttle program. When Atlantis touched
down at NASA’s Kennedy Space Center, many paused to wonder what the
future holds for space exploration and uncovering the mysteries of our
solar system. This 30-year program has changed the way the public views
space, and served as a catalyst for many young, aspiring scientists.
Although to many this was a sad moment in history, the end of the
shuttle program is not the end of space exploration as a whole. Just 18
days after Atlantis touched down, the spacecraft Juno was launched on its
mission to explore Jupiter, and other missions are approaching. In addition
to these, we have countless scientists using advanced technology and
methodology here on Earth to study our solar system.
In recognition of this, this issue of the Journal celebrates the
wonders of astronomy, and discusses some of what we know, and in doing
so demonstrates how much there is left to find out. Sethanne Howard
provides a series of papers discussing black holes, dark matter, and the
cosmic distance ladder. I hope that you enjoy immersing yourself in topics
that we often hear about, but may not really know about.
Following these, we have included a summary of our Annual
Awards Banquet, held on May 12, 2011. It was a great night. Mr. Sam
Kean, author of The Disappearing Spoon, provided our keynote address,
which was a great success. We introduced the incoming board and said
farewell and thank you to our outgoing members. And of course, we
recognized the contributions of scientists in a variety of disciplines
through our awards presentation. Thank you to the Awards Committee for
your effort. We look forward to next year!
Jacqueline Maffucci
Editor, The Journal of the Washington Academy of Sciences
INSTRUCTIONS TO AUTHORS
1. Manuscripts should be in Word (Office 03/07/10) and not PDF.
2. They should be 6,000 words or fewer (exceptions may be made by
the Editor). If there are 7 or more graphics, reduce the number of
words.
3. Graphics (photographs, drawings, figures, tables) must be in
graytone only (no color accepted), and be easily resizable by the
editors to fit the Journal’s page size. Do not wrap text around the
graphics.
4. References (and bibliography, if included) may be in the format
generally acceptable for the disciplinary or professional field
represented by the manuscript. They must be accurate, complete,
and consistent in format throughout the paper.
5. Include both an e-mail address and a postal address for the author
(or primary author) including title and institutional affiliation if
any.
6. Papers are peer reviewed.
7. Send Manuscripts by e-mail as an attachment, or on a CD, to
Journal@washacadsci.org or directly to the editor, Dr. Jacqueline
Maffucci - jamaffucci@gmail.com. Hard copy cannot be accepted.
Manuscripts can be accepted by any of the Board of Discipline
Editors.
Emanuela Appetiti - anthropology at eappetiti@hotmail.com
Elizabeth Corona - systems science at elizabethcorona@gmail.com
Jim Eigenreider - science education at jim@deepwater.org
Terrell Erickson - environmental natural sciences at
terrell.erickson1@wdc.usda.gov
Mark Holland - botany at maholland@salisbury.edu
Kiki Ikossi - engineering at ikossi@ieee.org
Carol Lacampagne - mathematics at clacampagne@earthlink.net
Raj Madhavan - engineering at raj.madhavan@nist.gov
Kent Miller - computer sciences at kent.l.miller@alumni.cmu.edu
Jean Mielczarek - physics and biology at mielczar@physics.gmu.edu
Robin Stombler - health at rstombler@auburnstrat.com
Alain Touwaide - history of medicine at atouwaide@hotmail.com
Steve Tracton - atmospheric studies at straction@hotmail.com
Washington Academy of Sciences
AFFILIATED INSTITUTIONS
The National Institute of Standards and Technology
Meadowlark Botanical Gardens
The John W. Kluge Center of the Library of Congress
Potomac Overlook Regional Park
Koshland Science Museum
American Registry of Pathology
Living Oceans Foundation
Black Holes Can Dance
Sethanne Howard
USNO, retired
Abstract
This is the story of black holes seen from one astronomer’s perspective.
Although some very technical information is included, the intent is to
review some information about this odd piece of nature.
Introduction
Simply put, a black hole is a region of space from which nothing, not
even light, can escape. It is the result of the deformation of space-time
caused by a very compact mass - a lot of mass in a teeny (actually zero)
volume. Around the black hole there is an undetectable surface (called the
event horizon) which marks the point of no return. Once inside nothing
can escape. A black hole is called “black” because it absorbs all the light
that hits it, reflecting nothing, just like a perfect blackbody in
thermodynamics. We cannot see, hear, smell, touch, or taste it.
Now that we know that much, let us look at some history.
Newton’s universe did not include black holes. I shall start there,
and assume we know the basics of Newton’s laws.
Even though Newton did not discuss black holes, the idea of them
has been around for some time.
The idea of a body so massive that even light could not escape was
first put forward by geologist John Michell in a letter written to Henry
Cavendish in 1783 to the Royal Society:
If the semi-diameter of a sphere of the same density as the Sun
were to exceed that of the Sun in the proportion of 500 to 1, a
body falling from an infinite height towards it would have
acquired at its surface greater velocity than that of light, and
consequently supposing light to be attracted by the same force
in proportion to its vis inertiae, with other bodies, all light
emitted from such a body would be made to return towards it
by its own proper gravity.
In 1796, mathematician Pierre-Simon Laplace promoted the same idea in
the first and second editions of his book Exposition du systeme du Monde
(it was removed from later editions). He pointed out that there could be
massive stars whose gravity is so great that not even light could escape
from their surface.
Such dark objects were ignored until the 20th century, since it was
not understood how gravity could influence a massless wave such as light.
It took Albert Einstein (and others) to show that gravity can
influence light. First with his Special Theory of Relativity and second with
his General Theory of Relativity he proved that gravity does influence the
motion of light. According to Einstein, space warps when close to matter.
The more matter there is, the more space warps. The description of the
curvature (warping) of space is the mathematically complicated part of
general relativity. It involves tensor calculus and metrics. In mathematics,
the word metric refers to a fairly general function which defines the
‘distance’ between elements in a set. I tend to think of a metric as a
bendable and twistable ruler that allows one to measure intervals
(distances) between two events. Keep the concept of a metric in mind.
Figure 1. The curvature of space caused by a massive object.
Figure 1 represents a two-dimensional slice through three-
dimensional space showing the curvature of space produced by a spherical
object, e.g., the Sun. Einstein’s view is that the planets follow the
curvature of space around the Sun (and produce a tiny amount of curvature
themselves).
Metrics and the Special Theory of Relativity
The Special Theory of Relativity (STR) has as its basic premise
that light moves at a uniform speed, c = 300,000 km/s, in all frames of
reference. This results in setting the speed of light as the absolute speed
limit in the universe and also produces the famous relationship between
mass and energy, E = mc^2.
In Newtonian flat space (the kind we are familiar with) the metric
that defines distance is:

$$ ds^2 = dx^2 + dy^2 + dz^2 $$

where ds is the distance and (x, y, z) are the spatial coordinates (remember
your high school geometry). Strictly speaking this is the line element, not
the metric; for the purpose here, I use the words interchangeably.
In STR the metric becomes a combination of time and space:

$$ ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2 . $$

In spherical coordinates it is:

$$ ds^2 = -c^2\,dt^2 + dr^2 + r^2\,d\theta^2 + r^2 \sin^2\theta\,d\phi^2 , $$

or more concisely

$$ ds^2 = -c^2\,dt^2 + dr^2 + r^2\,d\Omega^2 . $$

It is from STR that we get the term space-time - space and time forming a
single continuum, ds. Note the difference between this metric and the
metric of Newton's world. In Einstein's world the distance (interval)
between two events depends on time and space intertwined.
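The defining property of this interval is that every inertial observer computes the same value of ds^2 for the same pair of events. A minimal Python sketch (my own illustration; the 0.6c boost speed and the sample event separation are arbitrary choices) makes the invariance concrete:

```python
# Check numerically that ds^2 = -(c dt)^2 + dx^2 is unchanged by a Lorentz boost.
c = 299_792_458.0                       # speed of light, m/s

def boost(dt, dx, v):
    """Transform a (dt, dx) separation into a frame moving at velocity v."""
    gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
    return gamma * (dt - v * dx / c**2), gamma * (dx - v * dt)

def interval(dt, dx):
    return -(c * dt) ** 2 + dx ** 2

dt, dx = 1.0e-6, 250.0                  # two events 1 microsecond and 250 m apart
dt_b, dx_b = boost(dt, dx, 0.6 * c)     # the same two events viewed at 0.6c
print(interval(dt, dx), interval(dt_b, dx_b))   # equal up to rounding
```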
The General Theory of Relativity
As useful as Newtonian mechanics may be, it is merely a limiting
case of relativistic mechanics. The General Theory of Relativity (GTR) is
the geometric theory of gravitation published by Albert Einstein in 1916.
It is the current description of gravitation in modern physics.
We need GTR because black holes require GTR for
explanation, yet GTR is a difficult subject no matter how one looks at it.
This is what the basic equation of GTR looks like:

$$ G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu} $$

where $G_{\mu\nu}$ is the Einstein tensor, $\Lambda$ is the cosmological constant, $g_{\mu\nu}$ is
the metric tensor, and $T_{\mu\nu}$ is the stress-energy tensor. This equation
describes the interaction of gravitation as a result of space-time being
curved by matter and energy. The left side of the equation contains the
information about how space is curved (the geometry), and the right side
contains the information about the location and motion of the matter (the
dynamics). When fully written out, the equations are a system of coupled,
nonlinear, hyperbolic-elliptic partial differential equations.
You may now forget these equations because they are not
necessary for the rest of the paper except to say that solutions to these
equations under certain conditions give us black holes.
Two Metrics That Define Black Holes
Solutions to Einstein's GTR equations are metrics of space-
time - ways to describe gravity and mass interacting with each other. The
metric is the fundamental object of study for black holes.
The first solution came in 1916 when astronomer Karl
Schwarzschild (1873-1916) solved the equations for the particular case of
a non-rotating spherically symmetric point mass. This point mass
solution (where all the mass is concentrated into a single point) describes a
black hole.
The metric solution for the point mass was named after
Schwarzschild - the Schwarzschild metric defines the space-time
environment near a black hole of mass m. The metric is spherically
symmetric and non-rotating (no angular momentum). This is the simplest
type of black hole. The metric only looks complicated:
$$ c^2\,d\tau^2 = \left(1 - \frac{2Gm}{c^2 r}\right) c^2\,dt^2 - \left(1 - \frac{2Gm}{c^2 r}\right)^{-1} dr^2 - r^2\,d\Omega^2 $$

The quantity $\left(1 - \frac{2Gm}{c^2 r}\right)$ appears twice. It is there so that in the limit of
large r and small m the metric reduces to the Newtonian gravitational field
around a point mass. At r = 0 there is a true singularity. Note, however,
the possibility of infinity when r = 2Gm/c^2. This particular value for r is
called the Schwarzschild radius (r_s), a special radius that is quite useful, as
we shall see.
It took some time for the next black hole solution to appear;
however, in 1963, mathematician Roy Kerr found the exact solution for a
rotating black hole. The more complicated Kerr metric for a black hole
with angular momentum J is:

$$ c^2\,d\tau^2 = \left(1 - \frac{r_s r}{\rho^2}\right) c^2\,dt^2 - \frac{\rho^2}{\Delta}\,dr^2 - \rho^2\,d\theta^2 - \left(r^2 + a^2 + \frac{r_s r a^2}{\rho^2}\sin^2\theta\right)\sin^2\theta\,d\phi^2 + \frac{2 r_s r a \sin^2\theta}{\rho^2}\, c\,dt\,d\phi $$

where $r_s$ is the Schwarzschild radius, and the scale lengths $a$, $\rho$, and $\Delta$ are:

$$ a = \frac{J}{mc}, \qquad \rho^2 = r^2 + a^2\cos^2\theta, \qquad \Delta = r^2 - r_s r + a^2 . $$
At r = 0 there is the true singularity; however, the Kerr metric has
two values of r at which it appears to be singular: r_inner and r_outer. The inner
surface occurs where the purely radial component of the metric goes to
infinity (where $\Delta = 0$):

$$ r_{\mathrm{inner}} = \frac{r_s + \sqrt{r_s^2 - 4a^2}}{2} . $$

The other singularity occurs where the purely temporal component of the
metric changes sign from positive to negative:

$$ r_{\mathrm{outer}} = \frac{r_s + \sqrt{r_s^2 - 4a^2\cos^2\theta}}{2} . $$
The Kerr black hole, therefore, has two special radii with the ergosphere
(sphere of influence of the black hole - more on it later) between them.
The outer surface is also called the static limit. The inner surface is also
called the event horizon.
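To see how the two surfaces sit relative to one another, here is a small Python sketch evaluating the formulas reconstructed above; the 10 solar-mass example and the choice of half-maximal spin are illustrative assumptions of mine, not values from the text:

```python
import math

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30    # SI units, rounded

M = 10 * M_sun
r_s = 2 * G * M / c**2                        # Schwarzschild radius, m
a = 0.5 * (G * M / c**2)                      # spin length a = J/(mc); half the maximal value

def r_inner(a, r_s):
    # event horizon: where the purely radial metric component blows up
    return (r_s + math.sqrt(r_s**2 - 4 * a**2)) / 2

def r_outer(a, r_s, theta):
    # static limit: where the purely temporal component changes sign
    return (r_s + math.sqrt(r_s**2 - 4 * a**2 * math.cos(theta)**2)) / 2

print(r_s, r_inner(a, r_s), r_outer(a, r_s, math.pi / 2))
# at the equator the static limit sits at r_s itself, outside the horizon
```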
Note something important. The parameter t (time) does not occur
in the right side of either metric. Time stops at a black hole.
So by the 1960s scientists could describe the environment around
stationary, non-rotating, and rotating black holes. Given these metrics,
people got to work on the dynamics of black holes.
The Four Laws
By the 1970s research by many people led to the formation of the
four laws of black hole dynamics. These laws describe the behavior of a
black hole in close analogy to the laws of thermodynamics by relating
mass to energy, surface area to entropy, and surface gravity to
temperature. The analogy was completed when Stephen Hawking, in
1973, showed that quantum field theory predicts that black holes radiate
(Hawking radiation; see the section on this) like a blackbody with a
temperature proportional to the surface gravity of the black hole. Further
description of the four laws is highly mathematical and beyond the scope
of this paper.
Despite these laws we still cannot describe a black hole all the way
to r = 0. That will require combining quantum and gravitational effects
into a single theory, although the single theory does have a name: quantum
gravity. This is an area of active research.
However, the four laws led to a definition of what one can measure
in a black hole.
Black Holes Have No Hair
The four laws led to the ‘no-hair theorem’ - black holes have no
hair. This means that a stationary black hole is completely described by
only three things: its mass, angular momentum, and electric charge. There
is no other way to ‘grab’ onto (measure) a black hole. These properties are
special because they and only they are detectable from outside the black
hole. For example, a charged black hole repels other like charges just like
any other charged object. Why these three? The reason is mathematical;
these are unique, conserved imprints in the external fields of the black
hole (conserved Gaussian flux intervals).
Theoretically a black hole may possess electric charge but it would
quickly attract charge of the opposite sign and become neutral. The net
result is that any realistic black hole would tend to exhibit zero charge.
I shall discuss two types of black holes: one defined by the
Schwarzschild metric and one defined by the Kerr metric. There are
others, but they are quite specialized. In fact, there are four basic types,
classified by whether the hole rotates and whether it carries charge:
Schwarzschild (no rotation, no charge), Reissner-Nordström (charge but no
rotation), Kerr (rotation but no charge), and Kerr-Newman (both).
When most people think of a black hole, it is usually the
Schwarzschild black hole. So I shall discuss this one first.
The Particulars of the Schwarzschild Black Hole
The Schwarzschild black hole is stationary (not moving through
space) with zero charge and is non-rotating. It is ‘dead’ in the sense that
one can never extract from it any of its mass-energy. No information can
ever come from a Schwarzschild black hole. This means it is stable against
a perturbation (e.g., a kick), were you so inclined.
At the Schwarzschild radius, r_s, some of the terms in the metric
apparently become infinite. This is not a true singularity. It is due to the
choice of spherical coordinates; however, it does have a physical effect.
In 1958 physicist David Finkelstein identified the Schwarzschild
surface r_s = 2Gm/c^2 as an event horizon, "a perfect unidirectional
membrane: causal influences can cross it in only one direction.” That
means it is a one way gate. Things go in and do not come out. At the rim
of the event horizon one must travel at the speed of light just to stay in
place. Once inside the event horizon the radial coordinate ‘evaporates’
because there can be no spatial direction that will lead back to the outside.
Once inside the event horizon escape is not possible.
The 'size’ of a black hole, as determined by the radius of its event
horizon (Schwarzschild radius), is roughly proportional to its mass M:
$$ r_s = \frac{2GM}{c^2} \approx 2.95\,\frac{M}{M_{\odot}}\ \mathrm{km} $$

where $M_{\odot}$ is the mass of the Sun. This relation is exact only for
Schwarzschild black holes; for more general black holes it can differ up to
a factor of two. Table I shows this relation for some common objects.
Table I. The Schwarzschild radius for different sized objects.
So, for the Earth to become a black hole, all its mass must be consolidated
within a sphere of a one centimeter radius. This is highly unlikely.
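The relation is easy to evaluate for familiar objects. A short Python sketch (constants rounded; the masses below are standard values I have supplied, not necessarily the entries of Table I):

```python
G, c = 6.674e-11, 2.998e8
M_sun = 1.989e30                       # kg

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2      # metres

for name, mass in [("Sun", M_sun),
                   ("Earth", 5.972e24),
                   ("4 million solar mass hole", 4.0e6 * M_sun)]:
    print(f"{name}: r_s = {schwarzschild_radius(mass):.3g} m")
# Sun -> about 3 km; Earth -> about 9 mm; a Sgr A*-sized hole -> about 0.08 AU
```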
Figure 2 illustrates how space-time curves about a Schwarzschild
black hole. At the center of a black hole lies a true gravitational
singularity. At r = 0 the space-time curvature becomes infinite. For a
non-rotating black hole this region takes the shape of a single point. For a
rotating (Kerr) black hole it is smeared out to form a ring singularity lying
in the plane of rotation. In both cases the singular region has zero volume.
The singular region can thus be thought of as having infinite density. All
this really means is that we do not understand what happens at the
singularity.
Figure 2. Curved space-time around a Schwarzschild black hole.
Does this mean that gravity is somehow different around a black
hole? It is misleading to say that black holes have ‘stronger’ gravity than
other masses. Black hole or not, the curvature one feels depends strictly on
the mass of the object and the distance one is from that mass, not whether
the object is a black hole. When that mass is concentrated in a small
volume, however, one can get closer to the mass than otherwise is the
case. This may be why one thinks gravity is stronger around a black hole.
It is actually the field density that is greater. The gravitational field density
close to an Earth mass compressed within a 1 cm radius is much higher
than the density around an Earth mass with its current radius. A lot of
mass in a very tiny space can strongly warp the space nearby - tiny and
nearby being the key words.
So, if the Sun became a black hole, would we on Earth notice? We
would miss the sunlight and die (so we would notice), but the gravitational
effect on Earth would be what it always was. The mass of the Sun has not
changed (even though it occupies a smaller volume); the Earth’s distance
from the Sun has not changed; therefore, the Earth feels the same effect
and continues to orbit the black hole Sun.
This leads me to the next point.
Black holes are not cosmic vacuum cleaners. They do not zoom
around space sucking up matter. The black hole Sun will not scoop up the
Earth. Far from the Sun there is no unusual gravitational influence. Only
within a few Schwarzschild radii is there a significant effect. Black holes
can accrete matter but only when the matter is quite close. In fact,
scientists believe black holes are surrounded by accretion disks - a disk of
accreting matter (usually gaseous) orbiting the central object.
Now that we know a bit more about them, let us see how they are
made.
Making a Black Hole
There are two classes of black holes: (1) a stellar mass black hole,
and (2) a super massive black hole.
Class (1) happens when a heavyweight star reaches the end of its
life. One way to classify stars is by their birth weight. Heavyweight stars
are bom with more than 30 solar masses to their credit. Most of a star’s
life is spent maintaining a balance between two forces: radiation pressure
from the nuclear fusion in the core that pushes outward; and the
gravitational force trying to compress the gas inward. Ultimately a
heavyweight star will, as all stars do, consume all its nuclear fuel. It can
then no longer support itself against a subsequent gravitational collapse. If
it fails to eject its excess mass in the collapse process then nothing can
stop the stellar remnant from collapsing toward a point - forming a black
hole. This collapse happens in milliseconds - the star winks out. Our Sun
(a lightweight star) is not massive enough to end its life this way. It will
end as a white dwarf. A middleweight star (between 6 and 30 solar
masses) will explode as a supernova and end as a neutron star.
Most stars are not perfectly spherical and have a lot of angular
momentum, so the gravitational collapse produces a black hole more in
line with a Kerr black hole than a Schwarzschild black hole.
Class (2) is thought to lurk in the centers of most galaxies. A super
massive black hole contains thousands to millions of solar masses. Once a
super massive black hole has formed, it can continue to grow by absorbing
additional matter. One model for the formation of super massive black
holes is by slow accretion of matter onto a stellar mass black hole.
Another model involves a large gas cloud collapsing into a relativistic star
of perhaps a hundred thousand solar masses or larger. The star would then
become unstable and may collapse directly into a black hole without a
supernova explosion. Super massive black holes have properties which
distinguish them from stellar mass black holes:
• The average density of a super massive black hole (defined as the mass
divided by the volume within its Schwarzschild radius) can be as low
as the density of water for very high mass black holes (see the sketch after this list).
• The tidal forces in the vicinity of the event horizon are significantly
weaker than the tidal forces around a stellar mass black hole.
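A quick Python check of the first bullet (the sample mass values are my own choices): the mean density inside the Schwarzschild radius scales as 1/M^2, so it drops to roughly that of water for a sufficiently massive hole.

```python
import math
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def mean_density(M):
    """Mass divided by the volume enclosed by the Schwarzschild radius (kg/m^3)."""
    r_s = 2 * G * M / c**2
    return M / ((4.0 / 3.0) * math.pi * r_s**3)

for suns in (10, 1e6, 1.4e8):
    print(f"{suns:>9} M_sun : {mean_density(suns * M_sun):.3g} kg/m^3")
# around 1.4e8 solar masses the mean density is roughly that of water (~1000 kg/m^3)
```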
Astronomers think the universe is littered with black holes, that they are
not rare at all. In addition to the stellar type they think that nearly every
galaxy has a central super massive black hole. What if we could visit one?
A Trip to a Black Hole
An observer falling into a Schwarzschild black hole cannot avoid
the singularity at r = 0. Any attempt to do so will only shorten the time
taken to get there. As the traveler spirals in, there is a last stable orbit at
a distance of 3rs. Continuing inward, the traveler crosses the event
horizon. At the singularity the traveler is crushed to infinite density and its
mass added to the black hole. Before that happens, though, it will have
been torn apart by the tidal forces in a process sometimes referred to as
spaghettification (I did not make that up) or the tube-of-toothpaste-effect.
To describe this in more detail, assume there are two astronauts, a
smart one and a dumb one. Their spaceship arrives near a 3 solar mass
black hole {r^ = 9 km). The smart astronaut stays in the spaceship. The
dumb astronaut jumps toward the black hole. Tet us pick up the action 900
km away (lOOr^).
At this distance of 100 r_s from the black hole the dumb astronaut is
torn apart due to the tidal effect of gravity, and the story ends. For
comparison, the gravity tide on a human (head to toe) on the Earth’s
surface is about one millionth of a g.
For a 3 solar mass black hole the tidal force of the black hole is
shown in Table II. Remember that tidal forces go as 1/r^3, so the tides
quickly become fatal as one approaches the black hole.
Table II. Tidal force in g’s versus distance in km from a 3 solar mass black
hole.
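Since the entries of Table II do not reproduce well here, a rough Python sketch conveys the same point; the 2 m "head-to-toe" length and the sample distances are my assumptions, not the article's exact table entries:

```python
G, M_sun, g = 6.674e-11, 1.989e30, 9.81
M, height = 3 * M_sun, 2.0             # a 3 solar-mass hole and a 2 m astronaut

def tidal_in_g(r_m):
    # Newtonian estimate: difference in pull across the body ~ 2 G M * height / r^3
    return 2 * G * M * height / r_m**3 / g

for r_km in (9000, 900, 90, 9):        # roughly 1000, 100, 10 and 1 Schwarzschild radii
    print(f"{r_km:>5} km : {tidal_in_g(r_km * 1e3):.3g} g")
# at 100 r_s (900 km) the head-to-toe stretch is already a few hundred g - fatal
```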
For the purposes of argument, though, assume the dumb astronaut
is 'stretchable.’ Then as he falls, toe first, his toes are closer to the
upcoming event horizon than his head. The gravity tides between his toes
and head cause his toes to travel faster than his head. He stretches. The
closer he gets, the more he stretches. Simultaneously he is squeezed into
regions of ever decreasing circumference. He gets longer and thinner,
forming dumb astronaut spaghetti strings.
As he travels toward the event horizon he may notice nothing out
of the ordinary, except an inability to steer himself in any but one direction
- which is toward the "invisible" hole. He would never know when he had
crossed the event horizon were it not for the increased tidal tugging that
draws his body longer and longer, squeezing in from the sides (actually at
this point he is a set of disconnected atoms, zooming along all in a line).
Just before he reaches the event horizon, each piece of him emits high
energy radiation (x-rays) as that piece disappears forever. He winks out of
sight with a puff of radiation. It is a rather spectacular way to die.
And it is a wonderful way to garner energy. The efficiency of
energy generation near a stationary black hole is about 6%. Near a rotating
black hole this reaches about 30% efficiency. This is a staggering amount.
It is the best return of energy known. Compare the efficiency of chemical
combustion on Earth, which is only about 10^-9, and the efficiency of nuclear
burning (in a star), which is about 7 x 10^-3.
A visit to a super massive black hole is less dramatic although it
ends the same way. If the mass of the black hole is about 30,000 solar
masses then the dumb astronaut will not be torn apart by tidal forces at the
event horizon. This will wait until he is much deeper inside. Of course,
once he crosses the event horizon he cannot return or send messages.
Although he may survive those tidal forces, the high energy radiation (all
those x-rays and gamma rays lurking at the event horizon) will fry him.
One can calculate how long the dumb astronaut spaghetti string
will "live” once inside the event horizon. No matter how he approaches a
black hole of mass M, once inside the event horizon he will be killed at the
r = 0 singularity in a proper time of at most about 1.5 x 10^-5 M/M_sun seconds. So
for a 10 solar mass black hole, he will die within about 1.5 x 10^-4 sec (0.00015 sec).
One way or another, the dumb astronaut will not survive the trip.
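The figure quoted above follows from a standard GTR result: once inside the horizon, the longest proper time any observer can experience before reaching r = 0 is pi*G*M/c^3. A short Python sketch of the numbers (my own check, constants rounded):

```python
import math
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def max_proper_time(mass_in_suns):
    """Longest proper time (s) from horizon to singularity, pi*G*M/c^3."""
    return math.pi * G * (mass_in_suns * M_sun) / c**3

for m in (3, 10, 1e6):
    print(f"{m:>9} solar masses : {max_proper_time(m):.3g} s")
# about 4.6e-5 s for 3 M_sun, 1.5e-4 s for 10 M_sun, and ~15 s for a million-solar-mass hole
```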
Time Dilation
If the dumb astronaut carries a flashlight and points it back at the
smart astronaut, and flashes it in a regular pattern, what will the smart
astronaut see? She will see the flashes get further and further apart
eventually slowing down to a stop (after an infinite amount of time). The
GTR predicts that time will slow in the presence of matter - this is called
time dilation. It is not just clocks, by the way: all physical processes,
including clocks ticking (however they measure their ticks), hearts
beating, aging, etc., must slow down, but the only one who notices is the
distant timekeeper. This is not an imaginary effect. When transporting
atomic clocks on the Earth, one must correct for the GTR effects of the
Earth on the moving clock.
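For a clock hovering at radius r outside a Schwarzschild black hole, the metric gives the rate factor sqrt(1 - r_s/r) relative to a distant clock. A minimal Python sketch (the sample radii are my own choices):

```python
import math

def clock_rate(r_over_rs):
    """Ticking rate of a static clock at radius r, relative to a clock far away."""
    return math.sqrt(1.0 - 1.0 / r_over_rs)

for x in (1000, 10, 2, 1.1, 1.01):
    print(f"r = {x:>7} r_s : runs at {clock_rate(x):.4f} of the distant rate")
# the rate tends to zero as the clock approaches the event horizon (r -> r_s)
```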
Gravitational Redshift
In addition to the slow down of time, the light she sees is
redshifted more and more as the dumb astronaut gets closer to the event
horizon. This is not a Doppler shift. Light loses energy when escaping
from a gravitational field. Because the energy of light is proportional to its
frequency, a shift toward lower energy represents a shift toward the red for
visible light. This gravitational redshift was first observed in the spectra of
dense white dwarf stars, whose light is redshifted by about 1 Å.
Gravitational redshift was experimentally verified on Earth by the Pound-
Rebka experiment of 1959.
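The size of the Pound-Rebka effect is easy to estimate: in the Earth's weak field the fractional frequency shift over a height h is approximately g*h/c^2. A one-line Python check (the 22.5 m tower height is the commonly quoted value, supplied by me):

```python
g, c, h = 9.81, 2.998e8, 22.5          # m/s^2, m/s, m
print(f"fractional shift ~ {g * h / c**2:.2e}")   # about 2.5e-15 - tiny but measurable
```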
Now that we know the dumb astronaut will not survive, is there a
way we can tell that he fell in?
Cosmic Censorship
When an object falls into a black hole, any and all information
about the shape of the object or distribution of charge on it is evenly
distributed along the horizon of the black hole, and is lost to outside
observers. So not only does the dumb astronaut disintegrate, but also there
is no way to determine that it was a dumb astronaut that fell in.
Because the black hole eventually achieves a stable state with only
three measurable parameters (mass, charge, and angular momentum),
there is no way to avoid losing information about the initial conditions.
Nature puts a curtain around black holes so that we cannot see inside or
know what happens inside - this cosmic censorship is complete. There
are no ‘naked’ singularities. And the event horizon must be real, not
complex. The information that is lost includes every quantity, including
the total baryon number, lepton number, and all the other nearly conserved
pseudo-charges of particle physics.
At least, that was the state of thought until the 1970s.
The physicist Stephen Hawking (author of A Brief History of Time and
The Universe in a Nutshell) has long worked in theoretical cosmology. In
2009 he received the Presidential Medal of Freedom. He even played
himself on Star Trek. I discuss one aspect of his work next.
Hawking Radiation
In 1974 Stephen Hawking realized that black holes are not
absolutely black. There are quantum effects that allow black holes to emit
blackbody radiation. The temperature of this radiation is inversely
proportional to the black hole’s mass; the tinier the black hole the higher
the temperature of the radiation - called Hawking radiation.
Hawking radiation is due to particle/anti-particle pairs {e.g.,
electron/positron) which are continuously created and annihilated in free
space. When this pair creation happens near a black hole it is possible for
one of the two particles to cross the event horizon before it meets and
annihilates its partner. The other particle is then free to leave the scene,
making the black hole appear to the outside world as a source of radiation.
In other words, there is 'new’ energy. So to satisfy energy conservation,
the particle that fell in must have a negative energy (with respect to an
observer far away from the black hole). Thus, the black hole loses mass,
and, to an outside observer, it appears that the black hole has just emitted a
particle. It takes energy to create new particles. This energy must come
from the black hole. The black hole therefore decreases its mass as it
radiates. Thus black holes will slowly evaporate.
A one solar mass black hole has a temperature of only 60
nanoKelvin (v-e-r-y cold); in fact, such a black hole would absorb far
more cosmic microwave background radiation than it emits. Evaporation
will take about 10^67 years. This is far longer than the age of the universe.
A smaller black hole of 4.5 x 10^22 kg (about the mass of the
Moon) would be in equilibrium at 2.7 K, absorbing as much radiation as it
emits. Even smaller black holes would emit more than they absorb, and
thereby lose mass.
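The numbers quoted in this section can be reproduced from the standard formulas T = hbar*c^3/(8*pi*G*M*k_B) and an evaporation time of roughly 5120*pi*G^2*M^3/(hbar*c^4). A Python sketch (my own check; constants rounded):

```python
import math
G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.0546e-34, 1.381e-23
M_sun, year = 1.989e30, 3.156e7

def hawking_temperature(M):
    return hbar * c**3 / (8 * math.pi * G * M * k_B)             # kelvin

def evaporation_time_years(M):
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4) / year

print(hawking_temperature(M_sun), evaporation_time_years(M_sun))  # ~6e-8 K, ~2e67 yr
print(hawking_temperature(4.5e22))                                # ~2.7 K for a Moon-mass hole
```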
For a miniature black hole - about 10^12 kg, the mass of a mountain -
evaporation will take about as long as the universe is old.
It is conceivable that conditions in the very earliest epochs of the universe
might have been just right to compress pockets of matter into these
miniature black holes. The Schwarzschild radius of such a black hole is
about 10^-15 m, comparable to the size of a subatomic particle. This raises
the question of how these tiny black holes formed; however, at the end
of its life, the mass of the tiny black hole becomes smaller and smaller,
and hence its temperature tends towards infinity. The black hole ultimately
disappears in an explosion. Fortunately (or unfortunately) current physics
is unable to explain the last phases of the evaporation of the black hole.
Black Hole in a Bathtub
Recently scientists in Canada have measured the equivalent of
Hawking radiation from a “bathtub black hole.” In August 2010, the
Canadian scientists announced that they had made an event horizon in a
water channel. They sent a steady flow of water in one direction. As it
passed over the top of a piece of wood whittled in the shape of an airplane
wing, the water traveled faster (aka Bernoulli). In the opposite direction,
the group created water waves. When these waves approached the wing,
where water was flowing faster, they slowed to a stop. Technically this
bathtub version is a white hole, an inverted black hole that keeps waves
out rather than bringing them in. But the white hole serves as an analog
because it shares an important feature with astrophysical black holes — an
imaginary boundary that emits an unusual kind of radiation. These
laboratory emitters of Hawking type radiation share one required feature
with their astrophysical counterparts — a point of no return, analogous to
the black hole’s event horizon. Both types have event horizons, so both
ought to emit Hawking radiation. In fact, pairs of short-wavelength waves
were created at the bathtub horizon and swept away, and the energy of
these emitted waves matches what would be predicted from Hawking
radiation around a real black hole.
This seems a bit forced to me.
Hawking radiation introduced a debate in cosmological circles. Is
it consistent with the no hair theorem? This leads to a paradox.
The Paradox in Hawking Radiation
There is a paradox with Hawking radiation. From the no hair
theorem, one expects the Hawking radiation to be completely independent
of the material entering the black hole. All information is lost entering a
black hole. Nevertheless, if the material entering the black hole were a
pure quantum state, the transformation of that state into the mixed state of
Hawking radiation would destroy information about the original quantum
state. The rules of quantum mechanics say information is conserved in the
wave function. The no hair theorem says the information is lost - a
physical paradox.
Hawking remained convinced that the equations of black hole
thermodynamics together with the no-hair theorem led to the conclusion
that quantum information will be destroyed. This annoyed many
physicists, notably John Preskill, who in 1997 bet Hawking and Kip
Thome that information was not lost in black holes. This led to the
Susskind-Hawking battle, where Leonard Susskind and Gerard 't Hooft
publicly declared war on Hawking's solution, with Susskind publishing a
popular book about the debate in 2008 (The Black Hole War: My Battle
with Stephen Hawking to Make the World Safe for Quantum Mechanics).
The book carefully notes that the war was purely a scientific one, and that
at a personal level, the participants remained friends. The solution to the
problem is the holographic principle (a property of quantum gravity
combined with string theory). With this, as the title of one article puts it,
"Susskind quashes Hawking in quarrel over quantum quandary."
In July 2005, Stephen Hawking announced a theory that quantum
perturbations of the event horizon could allow information to escape from
a black hole, which would resolve the information paradox. When
announcing his result, Hawking also conceded the 1997 bet, paying
Preskill with a baseball encyclopedia "from which information can be
retrieved at will." However, Thorne remains unconvinced of Hawking's
proof and declined to contribute to the award.
It does not end there. Roger Penrose advocates the Conformal
Cyclic Cosmology (CCC) which critically depends on the condition that
information is in fact lost in black holes. In CCC, the universe iterates
through infinite cycles, with the future time-like infinity of each previous
iteration being identified with the Big Bang singularity of the next. This
CCC model might in future be tested experimentally by detailed analysis
of the cosmic microwave background radiation (CMB): if true the CMB
should exhibit circular patterns with slightly lower or slightly higher
temperatures. In November 2010, R. Penrose and V. G. Gurzadyan
announced they had found evidence of such circular patterns (Figure 3), in
data from the Wilkinson Microwave Anisotropy Probe corroborated by
data from the BOOMERanG experiment. However, the statistical
significance of the claimed detection has been questioned. Three groups
have independently attempted to reproduce these results, and found that
the detection of the concentric anomalies was not statistically significant.
Stay tuned.
Figure 3. Possible circles in BOOMERanG data - the concentric circles are highlighted.
The mottling represents the wrinkles in space-time of the cosmic microwave background.
The Particulars of the Kerr Black Hole
The other type of black hole I shall discuss is the Kerr black hole,
which rotates (has angular momentum). Seen in cross-section, the Kerr
black hole is oval-shaped, with the ergosphere extending farther into space
at the black hole’s equator than at its poles (Figure 4). The r = 0
singularity is a ring of zero volume.
The Kerr black hole is actually more significant than the
Schwarzschild black hole because most black holes spin. Part of the mass
is actually stored as rotational energy in the ergosphere (which means
'place where work can be done’) and is, in theory, available for extraction
since the mass has not yet crossed the event horizon. This type can inject
energy into its surroundings - hence this type is 'live.'
Kerr space-time is what happens when a black hole has reached its
final evolutionary state. Kerr space-time is time-independent, meaning that
nothing in Kerr space-time changes over time. In effect, time stands still.
Remember that the time parameter t does not appear in the right side of the
Kerr metric. A black hole in such a state is essentially stationary.
Figure 4. Side view of a conceptual Kerr black hole.
Frame Dragging
When the black hole is spinning it actually pulls the fabric of
space-time around with it - an effect called frame dragging, also known as
the Lense-Thirring effect. The rotation of the black hole or even the
rotation of a very massive object will alter local space-time by dragging a
nearby object out of position compared with the predictions of Newtonian
physics. Frame dragging is like what happens if a bowling ball spins in a
thick fluid such as molasses. As the ball spins, it pulls the molasses around
itself. Anything stuck in the molasses will also move around the ball. This
dragging happens in the ergosphere. The closer to the black hole the
greater the dragging.
Inside the ergosphere (inside the static limit) nothing can stand
still; therefore, particles falling within the ergosphere are forced to rotate
and thereby gain energy. They must orbit in the same direction as the
black hole rotates. So long as they are still outside the event horizon, they
may, however, escape the black hole. The net process is that the rotating
black hole emits energetic particles at the cost of its own total energy. The
possibility of extracting spin energy from a rotating black hole was first
proposed by the mathematician Roger Penrose in 1969.
The Earth is a very massive object; therefore, as the Earth rotates,
it pulls space-time in its vicinity around itself. This action introduces a
precession on all gyroscopes in a stationary system surrounding the Earth
(Figure 5). The predicted Lense-Thirring effect is small — about one part
in a few trillion - yet measurable. A Foucault pendulum would have to
oscillate for more than 16,000 years to precess 1 degree.
Figure 5. The Earth rotating with angular velocity Ω; the gyroscope a distance r away
precesses with angular velocity ω.
LAGEOS (Laser Geodynamics Satellites) are a series of scientific
research satellites designed to provide an orbiting laser ranging benchmark
for geodynamical studies of the Earth. The Lense-Thirring effect on
LAGEOS due to the rotating Earth has been measured. The effect shifts
the orbits of the satellites about 2 meters per year in the direction of
rotation. The results are compatible with the predictions of GTR.
Another test of frame dragging is the Gravity Probe B satellite -
launched in 2004 (now decommissioned) with a dual purpose: to measure
the frame dragging of Earth, and to measure the geodetic effect - the
amount by which the Earth warps the local space-time in which it resides.
For Gravity Probe B, in polar orbit 642 km above the Earth, frame
dragging causes its gyroscope spin axes to precess in the east-west
direction by a mere 39 milliarcsec/yr. — an angle so tiny that it is
equivalent to the average angular width of Pluto as seen from Earth.
Initial results from Gravity Probe B confirmed the expected
geodetic effect to an accuracy of about 1%. In December 2008 NASA
reported that the geodetic effect was confirmed to better than 0.5%.
Unfortunately the expected frame-dragging effect was similar in
magnitude to the noise level. Work continues on the data to model and
account for these sources of unintended signal, thus permitting extraction
of the frame-dragging signal if it exists at the expected level. By August
2008 the uncertainty in the frame-dragging signal had been reduced to
15%. Final results are expected in 2011.
Many astrophysical objects, e.g. pulsars and black holes, emit jets
of energy. These jets may also provide evidence for frame-dragging.
Such jets are extremely powerful bursts of energy. Some of them extend
huge distances into space. There are images of jets later in the paper
(Figures 12, 13). These jets are tightly collimated flows of energy,
collimated perhaps by the twisting of magnetic field lines by frame
dragging.
The energy released in an astrophysical jet is overwhelmingly
powerful - at the highest end of the electromagnetic spectrum - x-rays and
gamma rays. A trip through a jet would quickly fry the traveler. Or is there
a way to cut through space-time?
Shortcuts through Space - Wormholes?
If one can avoid the jet, perhaps one can escape to another
universe. A wormhole is a hypothetical topological feature of space-time
that would be, fundamentally, a “shortcut” through space-time. The
physicist John Wheeler coined the term wormhole in 1957; however, in
1921, the mathematician Hermann Weyl already had proposed the
wormhole theory.
There is no observational evidence for wormholes, but there are
valid theoretical solutions to the equations of GTR which contain
wormholes. These solutions say that it is theoretically possible to avoid the
singularity at r = 0 and exit the black hole into a different space-time with
the black hole acting as a wormhole.
For a simple explanation of a wormhole, consider space-time as a
two-dimensional (2D) surface. If this surface is folded along a third
dimension, it allows one to picture a wormhole “bridge.” (Please note that
this is merely a visualization to convey a structure existing in four or more
dimensions). The parts of the wormhole could be higher-dimensional
analogues for the parts of the curved 2D surface; for example, instead of
mouths which are circular holes in a 2D plane, a real wormhole’s mouths
could be spheres in 3D space. A wormhole is, in theory, much like a
tunnel with two ends each at separate points in space-time. Figure 6
illustrates a 2D wormhole.
Figure 6. A 2D representation of a wormhole.
The first type of wormhole solution discovered was the
Schwarzschild wormhole. Technically the Schwarzschild metric has a
negative square root as well as a positive square root solution for the
geometry. The complete Schwarzschild geometry consists of a black hole,
a white hole, and two universes connected at their event horizons by a
wormhole. The negative square root solution inside the horizon represents
a white hole. A white hole is a black hole running backwards in time. Just
as black holes swallow things irretrievably, so also do white holes spit
them out. The negative square root solution outside the event horizon
represents another universe. The wormhole joining the two separate
universes is known as the Einstein-Rosen bridge. Unfortunately it is
impossible for a traveller to pass through this wormhole from one universe
into the other. A traveller can pass through an event horizon only in one
direction. First, the traveller must wait until the two white holes have
merged, and their horizons meet. The traveller may then enter through one
horizon. But having entered, the traveller cannot exit, either through that
horizon or through the horizon on the other side. The fate of the traveller
who ventures in is to die at the singularity which forms from the collapse
of the wormhole.
Figure 7. A 2D representation of a traversable wormhole.
Wormholes which could actually be crossed (Figure 7), known as
traversable wormholes, would only be possible if exotic matter with
negative energy density could be used to stabilize them (keep them from
collapsing). Physicists have not found any natural process which would
form a stable wormhole, although the quantum foam hypothesis (QFH) is
sometimes used to suggest that tiny wormholes might appear and
disappear spontaneously at the tiniest scale. Qualitatively QFH is
described as subatomic space-time turbulence at extremely small distances,
of the order of 10^-35 meters. At such small scales of time and space the
Heisenberg uncertainty principle allows particles and energy to come
briefly into existence, and then annihilate, without violating conservation
laws. However, without a theory of quantum gravity it is impossible to be
certain what space-time would look like at these scales.
Finally, even if wormholes exist and are stable, they are quite
unpleasant to travel through. Radiation that pours into the wormhole (from
nearby stars, the cosmic microwave background, jets, etc.) gets blueshifted
to very high frequencies, well into the high energy end of the spectrum. As
you try to pass through the wormhole, you will get fried by these x-rays
and gamma rays. So, at the moment, space travel using wormholes is not
possible.
Figure 8. Two artists’ conception of a black hole surrounded by an accretion disk and
bipolar jets.
Observational Evidence for Black Holes
All this fancy theory means little unless there is observational
evidence. Fortunately, despite its invisible interior, a black hole can be
detected through its interaction with other matter.
There are several types of interactions. Five of them are: (1) A
black hole exerts gravitational pull on surrounding matter, although this is
indistinguishable, at r >> r_s, from the pull of an object with the same mass;
(2) Gas surrounding a black hole is pulled inward and heated so that it
emits x-rays and gamma rays that might be observed; (3) A lump of matter
falling into a black hole should emit a burst of gravitational waves; (4)
Tidal forces will tear matter apart and eject a blob of relativistic matter
(tube-of-toothpaste effect); (5) Frame dragging will twist the magnetic
field lines that may surround a black hole and thus ‘shake’ the external
plasma.
Figure 8 gives two artists’ conception of a black hole surrounded
by an accretion disk with two polar jets. Conservation of angular
momentum means gas falling into the gravitational well created by a
massive object will typically spiral in to form a Frisbee-like structure
(accretion disk) around the object. Accretion disks are where the action
is. In the case of black holes, the accretion disk is outside the event
horizon. The gas in the inner regions (closer to the event horizon) becomes
so hot that it will emit vast amounts of radiation (mainly x-rays), which
may be detected by telescopes. In many cases, accretion discs are
accompanied by relativistic jets emitted along the poles, which carry away
much of the energy. The mechanism for the creation of these jets currently
is not well understood, although frame dragging is part of the solution.
It is unlikely we can observe an accretion disk directly (they are
too small), but the jets are easily seen (there are examples later in the
paper).
The strongest evidence for black holes comes from binary star
systems in which a visible star orbits a massive but unseen companion.
Binary x-ray sources are excellent candidates for black holes because
matter from the accretion disk streaming into the black hole is ionized and
greatly accelerated, producing x-rays.
In 1972 an x-ray source (named Cygnus X-1) was discovered in
the constellation Cygnus. The Cyg X-1 system has a blue supergiant star
(HDE226868), about 25 times the mass of the sun, orbiting the x-ray
source. So something non-luminous is there (neutron star or black hole).
Figure 9 is an artist’s conception of the Cyg X-1 system. The indirect
evidence for the black hole Cyg X-1 is a good example of the search for
black holes.
Doppler studies of the blue supergiant indicate a revolution period
of 5.6 days about the dark object. Using that period plus spectral
measurements of the visible companion’s orbital speed leads to a
calculated total system mass of about 35 solar masses. The calculated
mass of the dark object then is 8 to 10 solar masses, much too massive to
be a neutron star, which has an upper limit of about 3 solar masses - hence
a black hole. Figure 10 is an image of the system. The jet is clearly seen.
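
To make the reasoning concrete, here is a minimal Python sketch of the standard binary mass function, which is one way such a limit is obtained; the radial-velocity amplitude used below is an assumed, illustrative number, not a value taken from this article.

    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30     # solar mass, kg

    P = 5.6 * 86400.0    # orbital period from the Doppler study, seconds
    K = 75.0e3           # radial-velocity semi-amplitude of the supergiant, m/s (assumed, illustrative)

    # Binary mass function f = P K^3 / (2 pi G) = (M_x sin i)^3 / (M_x + M_star)^2.
    # It is a strict lower limit on the mass of the unseen companion.
    f = P * K**3 / (2.0 * math.pi * G) / M_SUN
    print(f"mass function ~ {f:.2f} solar masses")

    # Feeding in the ~25 solar-mass supergiant and a plausible orbital inclination,
    # then solving for M_x, pushes the compact object well above the ~3 solar-mass
    # neutron-star limit, in line with the 8 to 10 solar masses quoted in the text.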
Figure 9. Artist’s conception of Cygnus X-1 - matter is drawn from the supergiant star
into an accretion disk around the black hole.
Further evidence for a black hole is the emission of x-rays from its
location, an indication of temperatures in the millions of degrees. This x-
ray source exhibits rapid variations, with time scales on the order of a
millisecond. The light travel time is then a light-millisecond. This suggests
a source not larger than a light-millisecond (300 km), so it is very
compact. The only possibility that would place that much matter in such a
small volume is a black hole.
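
A one-line check of the light-travel-time argument (a sketch using the millisecond timescale quoted above):

    c_km_s = 299_792.458     # speed of light, km/s
    dt = 1.0e-3              # variability timescale, seconds
    print(f"maximum source size ~ {c_km_s * dt:.0f} km")   # ~300 km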
Figure 10. Jet in Cyg X-1, the jet is coming out from the center toward 1 o’clock. The
accretion disk is much too small to see.
In November 2010 evidence of the youngest black hole (all of 30
years old) known to exist in our cosmic neighborhood was found. This
object provides a unique opportunity to watch a black hole develop from
near infancy. The object is a remnant of supernova 1979C, a supernova in
the galaxy M100 approximately 50 million light years from Earth. The
scientists think the progenitor star for the supernova was a star about 20
times more massive than the Sun.
Astronomers have identified numerous stellar black hole
candidates, and have also found evidence of super massive black holes at
the center of galaxies. In 1998, astronomers found compelling evidence
that a super massive black hole is located near the Sagittarius A* region (a
bright and very compact astronomical radio source discovered in 1974 at
the center of our own Milky Way). Astronomers monitored the orbits of
individual stars very near the black hole and used Kepler’s laws to infer
the enclosed mass. Recent results indicate that the super massive black
hole is 4.31 ± 0.38 million solar masses. Ultimately, what is seen is not the
black hole itself, but observations that are consistent only if there is a
black hole present near Sgr A*. There is a nice time lapse movie of the
stellar motions in the area: http://apod.nasa.gov/apod/ap001220.html.
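
A minimal sketch of that Kepler's-law estimate, using round, approximate orbital parameters for one of the monitored stars (commonly called S2); both numbers are assumptions for illustration, not values from this article.

    # Kepler's third law in convenient units: M (solar masses) = a^3 / P^2,
    # with the semi-major axis a in AU and the period P in years.
    a_au = 1000.0    # approximate semi-major axis of S2's orbit, AU (assumed)
    P_yr = 16.0      # approximate orbital period of S2, years (assumed)

    enclosed_mass = a_au**3 / P_yr**2
    print(f"enclosed mass ~ {enclosed_mass:.1e} solar masses")   # a few million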
Super massive black holes can produce amazing jets. Figure 11
shows three jets from the object 3C 75 (object number 75 in the Third
Cambridge Catalogue of radio sources).
Figure 11. The right image is of 3C 75 - there are three clear jets. They originate at the
bright spot in the center. The jets flare and bend as they encounter the intergalactic
medium. The left image is an optical image of the galaxy NGC 1128 - the central bright
dots in the right image.
The jets emanate from the vicinity of two super massive black
holes (coming from the bright spot in the right image). These black holes
are in the dumbbell galaxy NGC 1128, which has produced the giant radio
source, 3C 75. The jets can reach incredible lengths - megaparsecs -
streaming into intergalactic space.
The peculiar dumbbell structure of this galaxy is thought to be due
to two large galaxies that are in the process of merging. Such mergers are
common in the relatively congested environment of galaxy clusters. An
alternative hypothesis is that the apparent structure is the result of a
coincidence in time when the two galaxies are passing one another, like
ships in the cosmic sea.
There is more. Black holes can come in pairs! Galaxies commonly
collide and merge to form new, more massive galaxies. A merger between
two galaxies should bring two super massive black holes to the new, more
massive galaxy formed from the merger. The two black holes gradually
spiral towards the center of this new galaxy, engaging in a gravitational
tug-of-war with the surrounding stars. The result is a black hole dance.
Astronomers expect many such waltzing super massive black holes
in the universe, but until recently only a handful had been found. In
January of 2010, astronomers announced the discovery of 33 pairs of
waltzing black holes in galaxies. This result shows that super massive
black hole pairs are more common than previously known from
observations. Also, the black hole pairs can be used to estimate how often
galaxies merge with each other.
The largest known black hole inhabits the core of M87, a giant
elliptical galaxy in the constellation Virgo. The M87 black hole appears to
be about (6.4 ± 0.5) x 10^9 solar masses, with an event horizon diameter of
about 18 billion km - almost twice the diameter of the orbit of Pluto.
Figure 12 contains a series of photos of M87 with its jet. Surrounding the
black hole is a rotating disk of ionized gas that is oriented roughly
perpendicular to the jet. This gas is moving at velocities of up to roughly
1,000 km/s. Gas is accreting onto the black hole at an estimated rate equal
to the mass of the Sun every ten years.
Conclusion
Black holes retain their fascination despite the decades of solid
research. They are both simple and complex: simple because it takes only
three parameters to describe them; complex because it takes GTR to
handle the dynamics. They come singly, in binary systems, and in pairs,
but never ‘naked’. They come both small and large in mass. They are
impossible to see, but their effects on their environment can be distinctive,
although they are not cosmic vacuum cleaners. We think they are found in
the centers of most galaxies. There are dozens of possible detections of
stellar mass black holes.
Gravity trumps all the other forces of nature in these objects. It
compresses the mass of a dozen Suns, or a million, or a billion into a
pinpoint of infinite density. Space and time are squeezed out of existence,
and the structure of the universe turns into a “quantum foam” that’s ruled
by laws that scientists do not yet fully comprehend. We have a lot more to
learn.
Credit: NRAO/STScI. The National Radio Astronomy Observatory is a facility of the
National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.
Figure 12. A series of multi-wavelength photos of M87 and its jets. Lobes of matter from
the jet extend out to a distance of 250,000 light-years. Start in the center, then move to
the upper left and follow clockwise the expansions of each image.
' The term ‘black hole’ was first publicly used in 1967 by physicist John Wheeler during
a lecture. He always insisted that it was suggested to him by somebody else.
" John Michell (1724 - 1793) was an English natural philosopher and geologist whose
work spanned a wide range of subjects from astronomy to geology, optics, and
gravitation.
Summer 201 1
28
Henry Cavendish FRS (1731 - 1810) was a British scientist noted for his discovery of
hydrogen which he called inflammable air.
Michell, J. "On the Means of Discovering the Distance, Magnitude, &c. of the Fixed
Stars, in Consequence of the Diminution of the Velocity of Their Light, in Case Such
a Diminution Should be Found to Take Place in any of Them, and Such Other Data
Should be Procured from Observations, as Would be Farther Necessary for That
Purpose." Phil. Trans. R. Soc. (London) 74: 35-57 (1784).
' Pierre-Simon, marquis de Laplace (23 March 1749 - 5 March 1827) was an
astronomer/mathematician.
It represents the curvature in a Riemannian manifold. A tensor is a geometrical higher-
order vector. Think of a matrix, although not all matrices are tensors.
Einstein called Λ his greatest blunder. Today scientists use it to explain 'dark energy'.
It was originally introduced by Einstein to allow for a static universe (i.e., one that is
not expanding or contracting). This effort was unsuccessful for two reasons: the
static universe described by this theory was unstable, and observations of distant
galaxies by Hubble a decade later confirmed that our universe is, in fact, not static
but expanding.
This allows one to measure intervals and to define distance in the curved space.
On the outbreak of war in August 1914 Schwarzschild volunteered for military service.
While at the Russian front he wrote two papers on relativity theory providing the
first exact solution to the field equations.
A few months after Schwarzschild’s work, mathematician Johannes Droste
independently gave the same solution for the point mass.
Singularities are difficult to describe. They are absolute termination points - cessation
of existence.
Actually Schwarzschild solved the equations with no mass and, then, in the weak field
approximation, used the mass to bring it into coincidence with the Newtonian limit.
Stationary means the black hole might rotate but not translate. A non-stationary black
hole might be one that is orbiting another object.
There are other metrics that are beyond the scope of this paper.
Bardeen, J.M., Carter, B., Hawking, S., Commun. Math. Phys. 31, 161-170 (1973)
This means the infinity disappears in some coordinate systems.
D. Finkelstein, Phys. Rev. 110, 965-967 (1958).
Which means we do not really know what happens.
It can orbit at this distance (and not fall in) if it moves quickly enough.
“ The nCT is not allowed.
Science News, 178, p 28.
See JWAS, Winter 2010 issue for a description of this experiment.
White holes cannot exist, since they violate the second law of thermodynamics.
Binary x-ray sources have a visible star and an invisible source of x-rays.
Because angular momentum is conserved, observations of binary systems can give the
total mass of the system.
1 parsec is 3.08568025 x 10^13 km.
The Dark Side of Astronomy
Sethanne Howard
USNO, Retired
With all that starlight around how can there be dark stuff? Astronomers
now routinely use dark matter and dark energy to match observations. I
discuss both dark matter and dark energy, describing their properties and
implications. Despite all we do know, there is far more that we do not
know.
Introduction
For millennia astronomy has used information gleaned from the
electromagnetic spectrum - first with optical light and then with other
wavelengths. The universe is awash in light. One might even ask why the
night sky is dark with all that light around. This is known as Olbers’
paradox: a dark sky conflicts with an infinite and unchanging universe.
The light from an infinite number of stars will fill the sky, and yet the
night sky is dark. The Big Bang Model solves this paradox. According to
the model the universe has a finite age (so there are not an infinite number
of stars), and the universe is expanding. The original light from the Big
Bang event is now in the microwave regime - the cosmic microwave
background - where our eyes do not see it. It takes a very sensitive
microwave detector to ‘see’ the cosmic microwave background. So we
have our beautiful dark night sky.
Figure 1 . The Coma cluster of galaxies
In 1933 the Swiss astrophysicist Fritz Zwicky observed the Coma
cluster of galaxies. This cluster contains over 1000 galaxies and is found
in the constellation of Coma Berenices. Figure 1 is an optical image of the
Coma cluster. All of the asymmetrical objects in the figure are galaxies.
Zwicky asked the question: what is the total mass of the cluster? At
this time in astronomical history, there was no robust way to estimate the
mass of an individual galaxy. If there were, one could simply add up the
individual masses. He estimated the cluster’s total mass in two ways:
(1) Based on the motions of galaxies near its edge under the
assumption that the galaxies were not escaping and were bound to the
cluster. This is a good assumption. It says the cluster is a bound system
and is not flying apart; and
(2) Based on the number of galaxies and total brightness of the
cluster. In other words mass follows light (this means the mass to light
ratio is unity when both are measured in solar units, M/L = 1, where M is the
mass and L is the luminosity). At the time this was also a good
assumption. He could measure the brightness and therefore infer the mass.
He found that there was about 400 times more estimated mass
using method (1) than was visually observable by method (2). This is far
outside any error of measurement. This means that the gravity of the
visible galaxies in the cluster (via method 2) was far too small to constrain
(bind to the cluster) the moving galaxies, so something extra was required.
Based on these conclusions, Zwicky inferred that there must be some non-
visible form of matter that would provide enough of the mass and gravity
to hold the cluster together. This unexpected result became known as the
“missing mass problem.”
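
The spirit of Zwicky's comparison can be captured in a short sketch; the velocity dispersion, cluster radius, and total luminosity below are round, illustrative assumptions, so only the order-of-magnitude mismatch matters.

    G = 6.674e-11          # m^3 kg^-1 s^-2
    M_SUN = 1.989e30       # kg
    MPC = 3.086e22         # meters in a megaparsec

    # Method (1): dynamical (virial-style) mass from the galaxy motions, M ~ sigma^2 R / G.
    sigma = 1.0e6          # velocity dispersion ~1000 km/s (illustrative)
    R = 1.0 * MPC          # cluster radius ~1 Mpc (illustrative)
    M_dyn = sigma**2 * R / G / M_SUN

    # Method (2): mass follows light with M/L = 1 in solar units.
    L_total = 1000 * 1.0e9     # ~1000 galaxies of ~1e9 solar luminosities each (illustrative)
    M_lum = 1.0 * L_total

    print(f"dynamical mass ~ {M_dyn:.1e} Msun, luminous mass ~ {M_lum:.1e} Msun")
    print(f"ratio ~ {M_dyn / M_lum:.0f}")   # a factor of hundreds, echoing Zwicky's result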
Nothing changed for 40 years after Zwicky's initial observations.
The missing mass remained missing. No observations indicated that the
mass to light ratio was anything other than unity. With M/L = 1 the
implication is that the circular velocity of the stars revolving about a
galaxy center would drop with distance from the galaxy center - similar to
a Keplerian drop-off.‘ In other words, as one moves from galaxy center to
edge the light decreases and so, therefore, does the mass. Astronomers did
question the issue, but there was no resolution. I remember when I was a
mere research assistant at Lick Observatory in 1965. I knew nothing of
this issue, so when a staff astronomer asked me if the mass to light ratio
could be other than unity, I said, showing my ignorance, of course, why
not. He smiled a gentle, forgiving smile.
Initial Observations of Dark Matter
Then, in the late 1960s and early 1970s, Vera Rubin, a young
astronomer at the Carnegie Institution of Washington presented findings
based on measurements of the velocity curve" of spiral galaxies to a
greater degree of accuracy than had ever before been achieved.
Spiral galaxies look rather like cosmological fried eggs. They are
quite flat when seen edge-on, and display a wide variety of circular shapes
when seen face-on. See Figure 2 for two examples.
Together with fellow staff-member Kent Ford, Rubin announced at
a 1975 meeting of the American Astronomical Society the discovery that
most stars in spiral galaxies orbit at roughly the same speed
(approximately 220 km/s) regardless of distance from the center. This
implies that their mass densities are uniform well beyond the location with
most of the stars (the galactic bulge; i.e., the yolk in the fried egg). This
means that even though the light falls off as one moves away from the
center, the mass does not. These results suggest that either Newtonian
gravity does not apply universally (an unacceptable conclusion), or that,
conservatively, upwards of 50% of the mass of galaxies is contained in a
relatively dark galactic halo.
Figure 2. The left image is a spiral galaxy seen almost face-on. It has intricate structure.
The right image is another spiral galaxy seen edge-on. It looks flat.
A galactic halo is a spherical distribution of matter surrounding a
spiral galaxy. The spiral disk 'floats’ in the middle of the spherical ball of
dark matter. Dark matter responds to gravity but nothing else. It cannot be
'seen’ in any part of the electromagnetic spectrum.
Met with skepticism, Rubin insisted that the observations were
correct. Eventually other astronomers began to corroborate her work, and
it soon became well-established that most galaxies were in fact dominated
by dark matter. Zwicky was vindicated.
Figure 3 illustrates the issue. A rotation curve is the locus of points
for the circular speed as a function of distance from the galaxy center. ‘A’
(the dotted line) represents the expected Keplerian drop-off in orbital
speed with distance - the farther from the center the slower the star
travels. 'B’ (the solid line) represents Rubin’s observational results.
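
A short sketch makes the contrast between the two curves concrete; the bulge mass is an illustrative assumption, while the 220 km/s flat speed is the figure quoted earlier.

    import math

    G = 6.674e-11          # m^3 kg^-1 s^-2
    M_SUN = 1.989e30       # kg
    KPC = 3.086e19         # meters in a kiloparsec

    M_bulge = 1.0e11 * M_SUN   # illustrative central (luminous) mass
    v_flat = 220.0e3           # flat rotation speed, m/s

    for r_kpc in (5, 10, 20, 40):
        r = r_kpc * KPC
        v_kepler = math.sqrt(G * M_bulge / r) / 1.0e3       # curve A: falls as 1/sqrt(r)
        m_enclosed = v_flat**2 * r / G / M_SUN               # curve B implies M(r) grows with r
        print(f"r = {r_kpc:2d} kpc: Keplerian v ~ {v_kepler:4.0f} km/s, "
              f"flat-curve enclosed mass ~ {m_enclosed:.1e} Msun")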
Rubin’s pioneering work has stood the test of time. Measurements
of velocity curves in spiral galaxies were soon followed by velocity
dispersions of elliptical galaxies.
Elliptical galaxies are rather like cosmological hard boiled eggs.
They look the same regardless of the direction of view. See Figure 4.
Measurements of the velocity dispersions in ellipticals also indicate a high
dark matter content.
Figure 3. Rotation curve of a typical spiral galaxy: predicted (A) and observed (B). Dark
matter can explain the velocity curve having a ‘flat’ appearance out to a large radius. The
velocity is plotted on the y axis, and the distance from the galaxy center on the x axis.
More Observations of Dark Matter
Rubin estimated a dark matter component that was 50% of the total
amount of gravitating matter. Subsequent measurements of the diffuse
interstellar gas found at the outer edge of galaxies indicate that the
galaxies are gravitationally bound up to ten times their visible radii. This
has the effect of pushing up the dark matter component from the 50%
measured by Rubin to the now accepted value of nearly 95%.
Measurements of dark matter are now routine. For example
astronomers no longer assume that mass follows light - that the mass to
light ratio is unity. Galaxy mass profiles are thought to look very different
from their light profiles. The typical model for dark matter in galaxies is a
smooth, spherical distribution in halos that surround the disk.
The Milky Way (our own galaxy) is part of the Local Group of
galaxies. These are galaxies that form a close grouping with the Milky
Way (including the Andromeda galaxy). Recently (2010) astronomers
found that a member of the Local Group, the galaxy Segue 1, has a
combined visual mass of about 1000 stars (it is a small galaxy), yet the
whole mass is more than 500 times larger. Segue 1 may be made of mostly
dark matter.
In 2005, astronomers from Cardiff University claimed to discover
a galaxy made almost entirely of dark matter, 50 million light years away
in the Virgo Cluster. This galaxy, named VIRGOHI21, does not appear to
contain any visible stars: it was seen with radio frequency observations of
hydrogen. Based on rotation profiles, the scientists estimate that this object
contains approximately 1000 times more dark matter than hydrogen and
has a total mass of about 1/10th that of our own Milky Way. For
comparison, the Milky Way is believed to have roughly 10 times as much
dark matter (in its halo) as ordinary matter. Skeptics of this interpretation
argue that VIRGOHI21 is simply a tidal tail of the nearby galaxy NGC
4254. The nature of this galaxy remains a contentious issue.
Low surface brightness (LSB) dwarf galaxies are important
sources for studying dark matter, because they have an uncommonly low
ratio of visible matter to dark matter (hence the name), and have few
bright stars at the center which would otherwise impair observations of the
rotation curve of outlying stars. LSBs are probably dark matter-dominated,
with the observed stellar populations making only a small contribution to
rotation curves. This class of galaxy is extremely important because it
allows one to avoid the difficulties associated with the de-projection (the
tilt of the disk to the line-of-sight) and disentanglement of the dark and
visible contributions to the rotation curves.
Dark matter has the ability to deflect light. Gravitational lensing
observations of galaxy clusters allow direct estimates of the true
gravitational mass based on its effect on light from background galaxies.
In clusters such as Abell 1689 (Figure 5), lensing observations confirm the
presence of considerably more mass than is indicated by the clusters’ light
alone. The short, thin arcs of light in the image are due to lensing.
Figure 5. Cluster Abell 1689
According to observations of structures larger than galaxies, as
well as Big Bang cosmology, dark matter accounts for 23% of the mass-
energy density of the observable universe. In comparison, ordinary matter
accounts for only 4.6% of the mass-energy density of the observable
universe, with the remainder attributable to dark energy. From these
figures, dark matter constitutes 83% of the matter in the universe, while
ordinary matter makes up only 17%. Figure 6 illustrates the various
percentages for today and for 13.7 billion years ago.
Figure 6. The top pie chart gives the percentages for today (roughly 72% dark energy, 23%
dark matter, and 4.6% atoms). The bottom pie chart gives the percentages for 13.7 billion
years ago according to the Big Bang Theory (roughly 63% dark matter, 15% photons, 12%
atoms, and 10% neutrinos).
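
The 83% and 17% figures are simply the 23% and 4.6% shares renormalized to the matter total, as this small check shows:

    dark_matter, ordinary = 23.0, 4.6          # percent of total mass-energy (from the text)
    matter = dark_matter + ordinary
    print(f"dark matter share of all matter:  {100 * dark_matter / matter:.0f}%")   # ~83%
    print(f"ordinary matter share of matter:  {100 * ordinary / matter:.0f}%")      # ~17%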
Counter Examples for Dark Matter?
There are a few galaxies that have velocity profiles that indicate an
absence of dark matter, such as the elliptical galaxy NGC 3379. However
in 2006, using observations of globular clusters in NGC 3379, astronomers
found evidence for normal quantities of dark matter in the galaxy’s dark
halo. This is contrary to the previous observations that indicated a paucity
of dark matter in the galaxy.
Globular clusters (~10^5 stars) are sprinkled in the halos of spiral
galaxies.' They show little evidence that they contain dark matter.
It is important to note here that it is possible to form a spiral disk
of stars and gas that follows a flat rotation curve without bringing in dark
matter. The Mestel gravitational potential can accomplish this. The
astronomer Mestel was an expert in galaxy dynamics.'' A Mestel disk
(with a flat rotation curve) forms when a uniformly rotating spherical
cloud with slightly decreasing density (center to edge) collapses while
conserving angular momentum. Such a disk is an observationally close
approximation to a thin disk of stars and gas. However, these disks are
difficult to maintain. They tend to become gravitationally unstable unless
there is some dark matter in a spherical halo surrounding the disk. A 3D
spherical gravitational field will stabilize the disk against fragmentation.
These few counter examples are not very robust. Dark matter
seems firmly established through observations.
What Is Dark Matter?
All of these observations of dark gravitating matter raise the
question: just what is it?
First, the chief property of dark matter is that it is "dark," i.e., it
emits no light - not visible, not x-ray, not infrared. So it is not large clouds
of hydrogen gas, since we can usually detect such clouds in the infrared or
radio. It is not in the form of stars and planets that we can see in the
optical.
Second, dark matter must interact with visible matter
gravitationally. So the dark matter must be massive enough to cause the
gravitational effects that we see in galaxies and clusters of galaxies.
Third, dark matter is not antimatter because we do not see the
unique gamma rays that are produced when antimatter and matter
annihilate.
Fourth, dark matter particles must be electrically neutral; otherwise
they would scatter light and thus not be dark.
Finally, we can rule out large galaxy-sized black holes on the basis
of how many gravitational lenses we see. High concentrations of matter
bend light passing near them from objects further away, but we do not see
enough lensing events to suggest that such objects make up the required
25% of dark matter contribution.
That is it. In other words, we do not know much.
However, there are a few viable dark matter possibilities. The two
main categories of objects that scientists consider as possibilities for dark
matter include MACHOs (yes, really!) and WIMPs (again, really!). These
are acronyms which help astronomers to remember what they represent.
MACHOs (MAssive Compact Halo Objects): MACHOs are
objects ranging in size from small stars to super massive black holes.
MACHOs are made of ordinary matter (like protons, neutrons and
electrons) and are found in the halos of galaxies.
Astronomers detect MACHOs by using their gravitational effects
on the light from distant objects. The gravitational attraction of a massive
object can bend the path of a light ray, much like a lens does. So when a
massive dark MACHO passes in front of a distant object (e.g., a star or
another galaxy), the light from the distant object is “focused” and the
distant object appears brighter for a short time. Astronomers search for
MACHOs in the halo of our Galaxy by monitoring the brightness of stars
near the center of our Galaxy and in the Large Magellanic Cloud (LMC).''"
The MACHO Project, one of the groups using this gravitational
lens technique, observed about 15 lensing events toward the LMC over a
span of six years of observations. They set a limit of 20% as the
contribution to the dark matter in our Galaxy due to objects with mass less
than 0.5 that of the Sun. So while they have been observed, astronomers
have found no evidence of a large enough population of these objects that
would account for all the dark matter in our Galaxy.
What about neutron stars and black holes as MACHOs? Neutron
stars and black holes are the final results of a supernova of a massive star.
Because a supernova usually leaves behind a remnant cloud of visible gas,
neutron stars and black holes must travel far from the remnant to 'hide.'
On the positive side neutron stars are very massive, and if they are
isolated, they can be dark. On the negative side, because they result from
supernovae, they are not necessarily common objects. There is no
evidence that they occur in sufficient numbers in the halos of galaxies.
Black holes are not likely sources of dark matter because they have
such a dramatic effect on their surroundings - they typically produce high
energy jets that are easy to observe.
The most common view is that dark matter is not ordinary matter
(electrons, protons, neutrons) at all; instead it is made up of other, more
exotic particles like axions or WIMPs (Weakly Interacting Massive
Particles).
WIMPs are subatomic particles which are not made up of ordinary
matter. They are weakly interacting because they can pass through
ordinary matter without any effects. They are massive in the sense of
having mass (whether they are light or heavy depends on the particle). The
prime candidates include neutrinos, axions, and neutralinos.
Neutrinos were first "invented" by physicists in the early 20th
century to make particle physics interactions work properly. They were
later discovered, and physicists and astronomers had a good idea how
many neutrinos there are in the Universe. But they were thought to be
without mass. In 1998 one type of neutrino was discovered to have a mass,
albeit very small. This mass is too small for the neutrino to contribute
significantly to the dark matter despite the large number of them present in
the Universe.
Axions are particles which have been proposed to explain the
absence of an electrical dipole moment for the neutron. They thus serve a
purpose for both particle physics and for astronomy. Although axions may
not have much mass, they would have been produced abundantly in the
Big Bang. Current searches for axions include laboratory experiments and
searches in the halo of our Galaxy and in the Sun.
Neutralinos are members of another set of particles that have been
proposed as part of a physics theory known as supersymmetry. This theory
is one that attempts to unify all the known forces in physics. Neutralinos
are proposed as massive particles (they may be 30x to 5000x the mass of
the proton), but they are also the lightest of the electrically neutral
supersymmetric particles. Astronomers and physicists are developing
ways of detecting the neutralino either underground or searching the
universe for signs of their interactions. A lightest neutralino of roughly
10-10000 GeV is the leading WIMP dark matter candidate.
So far there have been no detections of axions or neutralinos.
There are other factors that help scientists determine the mix
between MACHOs and WIMPs as components of the dark matter. Recent
results by the WMAP satellite show that our universe is made up of only
4% ordinary matter. This seems to exclude a large component of
MACHOs. About 23% of our universe is dark matter. This favors the dark
matter being made up mostly of some type of WIMP. However, the
evolution of structure in the universe indicates that the dark matter must
not be fast moving, since fast moving particles prevent the clumping of
matter in the universe. We want clumping because galaxies and planetary
systems form from such clumps. So while neutrinos may make up part of
the dark matter, they are not a major component because they move too
fast. Particles such as the axion and neutralino appear to have the
appropriate properties to be dark matter. However, they have yet to be
detected.
The conclusion is that we simply do not know what makes dark
matter; however, the evidence is strong that there is something dark that
responds to gravity.
Ordinary matter and dark matter make up only about 28% of the
Universe. That leaves us with the majority player in the universe - dark
energy.
Dark Energy
In the early 1990’s, one thing was fairly certain about the
expansion of the Universe. It was expanding. It might have enough energy
density to stop expanding and re-collapse, it might have so little energy
density that it would never stop expanding, but gravity was certain to slow
the expansion as time went on. Granted, the slowing had not been
observed, but, theoretically, the Universe had to slow. The Universe is full
of gravitating matter and the attractive force of gravity pulls all matter
together.
Then in the late 1990's two teams published observations of Type
Ia supernovae (SN). Since then, these observations have been corroborated
by several independent sources. The latest studies (in 2011) have
reinforced the results, shrinking the error bars by about 30 percent.
A Type Ia SN is a sub-category of cataclysmic variable stars that
results from the violent explosion of a white dwarf star. A white dwarf is
the remnant of a star that has completed its normal life cycle and has
ceased nuclear fusion. Type Ia SN have a characteristic light curve, the
graph of luminosity as a function of time after the explosion. See Figure 7.
The similarity in the absolute luminosity profiles of nearly all
known Type Ia SN has led to their use as a secondary standard candle in
the distance ladder used in extragalactic astronomy. The cause of this
uniformity in the luminosity curve is still an open question.
That means if a Type Ia SN is observed in a distant galaxy then we
know the distance to that galaxy. We observe the apparent luminosity and,
since all Type Ia's have the same absolute luminosity, we can use the
distance-luminosity relation to get the distance.
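
A minimal sketch of that step, using the distance-modulus relation given in the footnote; the peak absolute magnitude and the apparent magnitude below are illustrative assumptions, not values from this article.

    # Distance modulus: m - M = 5 (log10 d - 1), with d in parsecs.
    M_peak = -19.3      # commonly quoted peak absolute magnitude of a Type Ia SN (assumed)
    m_obs = 22.0        # assumed apparent peak magnitude of a distant Type Ia SN

    d_pc = 10.0 ** ((m_obs - M_peak) / 5.0 + 1.0)
    d_gly = d_pc * 3.26 / 1.0e9     # convert parsecs to billions of light-years
    print(f"distance ~ {d_pc:.2e} pc (~{d_gly:.1f} billion light-years)")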
Figure 7. An illustration of the light curve of a supernova Type Ia. Luminosity is plotted
versus time since the explosion. These are good standard candles because the
maximum luminosity is always the same for this type of supernova.
These observations showed that, a long time ago, the Universe was
actually expanding more slowly than it is today. So the expansion of the
Universe has not been slowing due to gravity, as everyone thought; it has
been accelerating. The Big Bang impelled galaxies apart in an expanding
Universe. But in defiance of cosmic gravity it appears that galaxies are
picking up speed instead of slowing down. This is shocking. No one
expected this; no one knew how to explain it; it was an enormous surprise.
But there it was in the data.
The extra expansion of the Universe in recent times is revealed by
the excessive faintness of distant Type Ia SN, whose brightness calibrates
their distances. In other words the supernovae were too faint for their type
- they are farther away than their brightness says they ought to be. For
them to be as faint as they are, the Universe had to speed up to get them as
distant as they are observed to be. See Figure 8.
Figure 8. The small dots are measurements of supernovae plotted as distance versus
redshift. The solid line marks where the mean ought to fall if the universe were uniformly
expanding. Note that the distant supernovae fall beneath the line; therefore, the
observations show that the supernovae are fainter than expected.
Dark energy is hardly science fiction. What is true for a ball tossed
in the air only to return to Earth is not true for the Universe. Although
cosmologists adopted a cute name, dark energy, for whatever is driving
this apparently anti-gravitational behavior on the part of the Universe,
nobody claims to understand why it is happening, or its implications for
the future of the universe and of the life within it, despite thousands of
learned papers and scores of conferences.
Figure 9 shows a diagram of how the Universe might have evolved
within the concept of dark energy.
The consequences of dark energy for fundamental physics will not
be clear until its origin is discovered, but the effects on the universe are
dramatic. Dark energy effectively contributes 70-75% of the current
energy density of the Universe, governs the expansion of space (causing it
to accelerate over the last ~7 billion years), and will determine the fate of
the Universe. Such a phenomenon is not predicted within the experimental
experience of gravity as an attractive force.
Figure 9. This diagram shows changes in the rate of expansion since the Universe's birth
13.7 billion years ago. The more shallow the curve, the faster the rate of expansion. The
curve changes noticeably about 7.5 billion years ago, when objects in the Universe began
flying apart at a faster rate. Astronomers theorize that the faster expansion rate is due to a
mysterious, dark energy that is pulling galaxies apart. Credit; NASA/STSci
Gravitation as an attractive force acts to slow down the cosmic
expansion, so dark energy acts in this sense as antigravity or cosmic
repulsion. This can however occur within general relativity for substances
with strongly negative pressure. Recall the main set of equations for the
theory of general relativity:
G_μν + Λ g_μν = (8πG/c^4) T_μν
The left side essentially says that space is driven by geometry. The right
side says that energy and mass follow that geometry. The second term on
the left hand side was inserted by Einstein to obtain the stationary
Universe he believed it to be: Λ is the cosmological constant. He
abandoned the term after Edwin Hubble’s observations showed that the
Universe was expanding.
It is now thought that adding the cosmological constant term to
Einstein’s equations does not lead to a static Universe at equilibrium
because the equilibrium is unstable; if the Universe expands slightly, then
the expansion releases vacuum energy, which causes yet more expansion.
Likewise, a Universe which contracts slightly will continue contracting.
The nature of this dark energy is a matter of speculation. It is
known to be very homogeneous, not very dense, and is not known to
interact through any of the fundamental forces other than gravity. Since it
is not very dense - roughly 10^-29 grams per cubic centimeter - it is hard to
imagine experiments to detect it in the laboratory. Dark energy can only
have such a profound impact on the Universe, making up 74% of universal
density, because it uniformly fills otherwise empty space.
Various types of dark energy have been proposed, including a
cosmic field associated with inflation; a different, low-energy field dubbed
"quintessence" (named after the ancient Greeks' fifth element); and the
cosmological constant, or vacuum energy of empty space. Unlike
Einstein's Λ, the cosmological constant in its present incarnation does not
balance gravity in order to maintain a static Universe; instead, it has
negative pressure that causes expansion to accelerate.
Empirically, the onslaught of cosmological data in the past decades
(all those observations of Type Ia SN) strongly suggests that our Universe
has a positive cosmological constant. The explanation of this small but
positive value is an outstanding theoretical challenge. The challenge
comes from explaining dark energy as a property of space. Einstein was
the first person to realize that empty space is not empty. It has the property
that it is possible for more space to come into existence.
The version of Einstein’s gravity theory that contains a
cosmological constant makes a prediction: empty space can possess its
own energy. Because this energy is a property of space itself, it would not
be diluted as space expands. As more space comes into existence, more of
this energy-of-space would appear. As a result, this form of energy would
cause the universe to expand faster and faster. Unfortunately, no one
understands why the cosmological constant should even be there, much
less why it would have exactly the right value to cause the observed
acceleration of the Universe.
Another explanation for how space acquires energy comes from
the quantum theory of matter. In this theory, empty space is actually full
of temporary (virtual) particles that continually form and then disappear (a
kind of quantum foaming). But when physicists tried to calculate how
much energy this would give empty space, the answer came out wrong -
wrong by a lot. The number came out 10^120 times too big. It's hard to get
an answer that bad. So the mystery continues.
A last possibility is that Einstein’s theory of gravity is not correct.
That option would not only affect the expansion of the Universe, but it
would also affect the way that normal matter in galaxies and clusters of
galaxies behaved. This fact would provide a way to decide if the solution
to the dark energy problem is a new gravity theory or not: we could
observe how galaxies come together in clusters. But if it does turn out that
a new theory of gravity is needed, what kind of theory would it be? How
could it correctly describe the motion of the bodies in the Solar System, as
Einstein’s theory is known to do, and still give us the different prediction
for the universe that we need? There are candidate theories, but none are
compelling.
The thing that is needed to decide between dark energy
possibilities - a property of space, a new dynamic fluid, or a new theory of
gravity - is more and better data.
Theorists still don't know what the correct explanation is, but at
least they have given the solution a name - dark energy.
Dark Energy Opponents
Not all experts are comfortable with the idea that a strange force is
mysteriously tugging the universe apart.
One alternative is the idea that our cosmic neighborhood - the
solar system and the whole Milky Way - happens to sit at the center of a
relatively empty bubble of space eight billion light-years across (a void). If
this were the case, we would measure the same accelerated expansion rate
we do, except it would be an illusion created by our special position in the
void.
But the latest precision measurements of the universe’s
acceleration seem to rule out that idea, which predicts a somewhat
different value for the expansion rate. However, the latest results do not
disqualify all versions of the void model. In some more complicated
scenarios in which the Big Bang did not happen at the same time at all
points in space, this hypothesis could still be valid.
However, ultimately many scientists are dubious of all the void
models because they put us in a special place in the universe. Copernicus
shot that idea down.
Some Current Work
Arthur Chernin and his colleagues are working on applying dark
energy on small cosmic scales. In the standard Cold Dark Matter model
with dark energy (ACDM), all celestial bodies are embedded in a perfectly
uniform dark energy background and experience a repulsive antigravity
action. In the cold dark matter theory, structure grows hierarchically, with
small objects collapsing first and merging in a continuous hierarchy to
form more and more massive objects. In the hot dark matter paradigm,
popular in the early eighties, structure does not form hierarchically
{bottom-up), but rather forms by fragmentation {top-down), with the
largest superclusters forming first in flat pancake-like sheets and
subsequently fragmenting into smaller pieces like our Galaxy the Milky
Way. The predictions of hot dark matter strongly disagree with
observations of large-scale structure, whereas the cold dark matter
paradigm is in general agreement with the observations. So the cold dark
matter approach is the one favored currently.
Chemin’s team asks the question, can dark energy have strong
dynamical effects on small cosmic scales as well as large scales? The
distant supemovae give us the global effect of dark energy; is there an
effect on a scale as small as the Local Group? They find that the
antigravity produced by dark energy is stronger than the gravity of the
Local Group at distances larger than ~1.5 Mpc from the group center. In
other words, the answer to their question is yes. The effects of dark energy
can be seen on many different scales.
Conclusion
More is unknown than is known. We know how much dark energy
there is because we know how it affects the Universe’s expansion. Other
than that, it is a complete mystery. But it is an important mystery. It turns
out that roughly 70% of the Universe is dark energy. Dark matter makes
up about 25%. The rest - everything on Earth, everything ever observed
with all of our instruments, all normal matter - adds up to less than 5% of
the Universe. Maybe it should not be called normal matter at all, since it is
such a small fraction of the Universe.
What do we know? We know that the light we can detect makes
for a Universe filled with beautiful things, and for us here on Earth, that
light makes an awesome night sky.
‘ A Keplerian drop-off is what the planets do in our Solar System. The farther from the
Sun, the slower the planet travels.
" A velocity curve is a plot of circular velocity (y axis) versus distance from the center (x
axis).
Science News August 28, 2010.
Just as non-dark matter does.
A globular cluster is a small spheroid of densely packed stars (~10^5 stars). Globular clusters
orbit around the disk of spiral galaxies. There are, for example, about 100 globular
clusters surrounding the Milky Way.
L. Mestel, MNRAS, 126, 553, 1963.
The LMC is another member of the Local Group of galaxies.
A satellite observing the cosmic microwave background.
"" One at U. Berkeley and one at the Space Telescope Science Institute.
M - m = -5(log10 d - 1), where M is the absolute magnitude, m is the apparent
magnitude, and d is the distance in parsecs.
A. Chernin et al., Astronomy & Astrophysics, 520, 104, 2010.
Cosmic Distance Ladder
Sethanne Howard
USNO, retired
Abstract
Astronomers use a wide variety of methods to determine the distance to
celestial objects. The ultimate goal is to obtain the Hubble constant and
establish the Hubble law. With this law one can measure the distances to
the most distant objects and determine the parameters that control our universe.
Introduction
There is no meter stick long enough to reach the stars; therefore,
astronomers use a succession of methods to determine the distances to
celestial objects. This succession is known as the cosmic distance ladder.
It has taken millennia to figure how to measure such extreme distances,
extreme, that is, compared to distances here on Earth.
Measurements of the size of the Earth go back in time to at least
the ancient Greeks. Eratosthenes (3rd century BCE) came surprisingly
close to determining the radius of the Earth (he was perhaps one sixth too
high). Eratosthenes also invented the concepts of latitude and longitude.
The great Indian mathematician Aryabhata (CE 476 - 550) was a pioneer
of mathematical astronomy. He came within one percent of the current
value for the circumference of the Earth. John O’Connor and Edmund
Robertson' wrote of the medieval Persian Abu Rayhan al Biruni (CE 973 -
1048) that:
Important contributions to geodesy and geography were also made
by Biruni. He introduced techniques to measure the Earth and
distances on it using triangulation. He found the radius of the Earth
to be 6339.6 km, a value not obtained in the West until the 16th
century.
Triangulation is important in determining distances. Triangulation
is the process of determining the location of a point by measuring angles
to it from known points at either end of a fixed baseline, rather than
measuring distances to the point directly. The point can then be fixed as
the third point of a triangle with one known side and two known angles
(the old angle-side-angle trick from geometry). This is a useful tool on
Earth, especially for surveying.
If the triangle is very large (as it is in astronomy) then one can use
the small angle approximation. Figure 1 shows the relationships for a large
triangle. In this case, the chord s is very nearly equal to the arc length it
subtends; the smaller the angle θ, the closer the two are to each other. The
small angle approximation gives:

sin θ ≈ θ,  cos θ ≈ 1 - θ^2/2,  tan θ ≈ θ  (θ in radians).
Figure 1. A large triangle where Δ is the distance from the origin to the point d.
In astronomy, the angle subtended by the image of a distant object
is often only a few arcseconds, so it is well suited to the small angle
approximation. The linear size (s) is related to the angular size (θ) and the
distance from the observer (Δ) by the simple formula:

s ≈ θΔ / 206265

when θ is measured in arcseconds. The number 206,265 is approximately
equal to the number of arcseconds in a circle (1,296,000) divided by 2π.
The exact formula is:

s = 2Δ tan(θπ / 1296000)

The above approximation follows when tan(θ) is replaced by θ.
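
A short sketch comparing the approximate and exact forms; the one-arcsecond, one-parsec example is chosen deliberately, since it returns roughly one astronomical unit.

    import math

    def linear_size_approx(theta_arcsec, distance):
        """Small-angle form: s ~ theta * Delta / 206265, theta in arcseconds."""
        return theta_arcsec * distance / 206265.0

    def linear_size_exact(theta_arcsec, distance):
        """Exact form: s = 2 * Delta * tan(theta * pi / 1,296,000)."""
        return 2.0 * distance * math.tan(theta_arcsec * math.pi / 1_296_000.0)

    one_parsec_km = 3.08568025e13
    print(linear_size_approx(1.0, one_parsec_km))   # ~1.496e8 km, about 1 AU
    print(linear_size_exact(1.0, one_parsec_km))    # agrees to many decimal places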
The Astronomical Unit
Measurement starts locally with the Earth. Once people had a
handle on Earth sized distances, and they had a tool kit of standard
measuring devices (e.g., the kilometer, the second, the gram), then they
could consider measuring the sky. To begin with, astronomers needed a
precise determination of the distance between the Earth and the Sun,
which is called the Astronomical Unit (AU).
Historically, the transits of Venus across the Sun were used for
this. By measuring how long it took Venus to transit across the Sun, one
could derive the value for the AU. Astronomers would observe the Venus
transit in two different locations on Earth. Using the times of the transits
they could calculate the solar parallax (which cannot be measured
directly). By combining that with the relative distances of the Earth and
Venus from the Sun they could calculate the AU. Parallax occurs when the
eye sees an object appear to shift compared to distant objects.
Another method involved work by the astronomer Simon
Newcomb in 1895. He also used data from the transits of Venus. He then
collaborated with A. A. Michelson to measure the speed of light with
Earth-based equipment; when combined with the constant of aberration
(which is related to the light-time per unit distance) this gave the first
direct measurement of the Earth-Sun distance in kilometers.
Astronomers now use radar and telemetry instead of transits.
Precise measurements of the relative positions of the inner planets can be
made by radar and by telemetry from space probes. As with all radar
measurements, these rely on measuring the time it takes for light to reflect
from an object. The measured positions are then compared with those
calculated by the laws of celestial mechanics: the comparison gives the
speed of light in AUs, which is 173.144 632 6847(69) AU/day. Because
the speed of light in meters per second (c_SI) is fixed in the International
System of Units, this measurement of the speed of light in AU/day (c_AU)
also determines the value of the astronomical unit in meters:

AU = 86400 c_SI / c_AU.
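
A quick sketch of that conversion, using the values quoted above:

    c_si = 299_792_458.0              # speed of light, m/s (exact in SI)
    c_au_per_day = 173.1446326847     # measured speed of light, AU/day (from the text)

    au_m = 86400.0 * c_si / c_au_per_day
    print(f"1 AU ~ {au_m:,.0f} m")    # close to 149,597,870,700 m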
In 1976 the International Astronomical Union revised the
definition of the AU for greater precision, defining it as that length for
which the Gaussian gravitational constant takes the value
0.017 202 098 95 when the units of measurement are the astronomical
units of length (AU), mass (solar mass) and time (day). The Gaussian
gravitational constant is equal to the square root of the Newtonian
gravitational constant G; it is also roughly equal to the mean angular
velocity of the Earth in orbit around the Sun. The best current (2009)
estimate of the International Astronomical Union (IAU) for the value of
the AU in meters is AU = 149 597 870 700(3) m.
The uncertainties of the various methods are given in Table I. The
AU is given in gigameters.
Table I - Uncertainty in measurement of the AU.
Once the AU was known precisely, astronomers could move
outward to other objects. At the base of the ladder are fundamental
distance measurements (like the AU), in which distances are determined
directly, with no physical assumptions about the nature of the object in
question. This fundamental work is done by astronomers specializing in
the discipline of astrometry - precise astronomical measurements. The
AU, then, is the first rung on the distance ladder and becomes a baseline
for distances to the nearby stars. Subsequent rungs will eventually depend
upon assumptions about the physical state of the object being used.
Parallax
The next rung on the distance ladder comes from trigonometric
parallax. Parallax is an apparent displacement or difference in the
apparent position of an object viewed along two different lines of sight.
Triangulation is the technique that uses parallax. This technique can be
used only for objects ‘close enough’ (within about 1000 parsecs) to Earth.
The distance unit parsec stands for parallax second - the distance at which
the angle subtended by the celestial object is one arcsecond. Figure 2 is a
photo of a statue in honor of the parallax method. Astronomers usually
express distances in units of parsecs (pc); light-years are used in popular
media, but almost invariably values in light-years have been converted
from numbers tabulated in parsecs in the original source.
As the Earth orbits around the Sun, the position of nearby stars will
appear to shift slightly against the more distant background of stars (this
shift is called parallax). These shifts form angles in a right triangle, with 2
AU making the short leg of the triangle and the distance to the star making
the long leg. See Figure 3.
The amount of shift is quite small, measuring one arcsecond for an
object at a distance of 1 parsec (3.26 light-years), thereafter decreasing in
angular amount as the reciprocal of the distance.
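
In these units the conversion is a one-liner, distance in parsecs equals the reciprocal of the parallax in arcseconds; the 61 Cygni parallax used below is the modern value quoted later in this section.

    def distance_pc(parallax_arcsec):
        """Trigonometric parallax: distance in parsecs is the reciprocal of the parallax in arcseconds."""
        return 1.0 / parallax_arcsec

    print(distance_pc(1.0))        # 1 pc, by definition
    print(distance_pc(0.28718))    # ~3.5 pc for 61 Cygni (287.18 mas)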
Figure 2. Statue of an astronomer and the concept of the cosmic distance ladder by the
parallax method, made from the azimuth ring and other parts of the Yale-Columbia
Refractor (telescope) (c 1925) wrecked by the 2003 Canberra bushfires which burned out
the Mount Stromlo Observatory; at Questacon, Canberra, Australian Capital Territory.
Figure 3. A diagram illustrating astronomical parallax.
Because parallax decreases as distance increases, useful distances
can be measured only for stars whose parallax is larger than the precision
of the measurement. Stars are quite distant, so observations of the tiny
parallax awaited the development of precision instrumentation. In fact, the
lack of an observed parallax for stars had long been used to 'prove' the
Earth's central position in the Universe - not orbiting the Sun. The first
stellar parallax (of the star 61 Cygni) was measured by Friedrich Wilhelm
Bessel (1784 - 1846) in 1838. He arrived at a parallax of 313.6
milliarcseconds (mas), close to the currently accepted value of 287.18
mas. This was a valuable proof that the Earth orbited the Sun. Bessel is
also known for the Bessel functions in mathematical physics.
First done on the ground, and now done in space, parallax
measurements typically have an accuracy measured in mas. In the 1990s,
HIPPARCOS, an astrometric satellite in Earth orbit (1989 - 1993), had the
mission of measuring precise parallaxes. It obtained parallaxes for over a
hundred thousand stars with a precision of about a mas, providing useful
distances for stars out to a few hundred parsecs.
So the next rung on the ladder has a precision of about a
milliarcsecond. The European Space Agency’s Gaia mission, due to
launch in 2012 and come online in 2013, will be able to measure parallax
to an accuracy of 10 microarcseconds. As one moves farther out in space
the precision drops. It can never improve. This is an important point. The
farther out one goes the greater the uncertainty. Astrometry, as always,
forms the basis of the distance ladder. Trigonometric parallax is good for
stars to a distance of about 100 parsecs. Overall, this is not very far.
Using the AU as the baseline forms the first and most precise type
of parallax: trigonometric parallax. There are others: statistical parallax,
secular parallax, moving cluster parallax, and spectroscopic parallax.
Cross referencing one method to another allows one to move outward in
distance. Overlap between methods is necessary.
The statistical parallax comes from a statistical analysis of the
motion of stars that are located at about the same distance, are of the
same spectral class, and show a similar brightness range. It assumes that
the average radial velocity of the set of stars is the same as the average
transverse velocity. One can then obtain a mean parallax for that set of
stars. This method is useful for measuring the distances of bright stars to
about 500 pc.
The secular parallax uses the motion of the Sun through space as
the baseline. For stars in the Milky Way disk, this corresponds to a mean
baseline of about 4 AU/year (this is how far the Sun moves at 16.9 km/s
with respect to the local standard of rest). After several decades, the
baseline can be orders of magnitude greater than the Earth-Sun baseline
used for traditional parallax. However, secular parallax introduces a higher
level of uncertainty because the relative velocity of other stars is an
additional unknown. When applied to samples of multiple stars, the
uncertainty can be reduced; the precision is inversely proportional to the
square root of the sample size. As with statistical parallax, this method is
useful to a distance of about 500 pc.
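
The ~4 AU/year baseline follows directly from the quoted 16.9 km/s solar motion, as this small check (a sketch) shows:

    v_sun = 16.9                    # Sun's speed relative to the local standard of rest, km/s
    seconds_per_year = 3.156e7
    au_km = 1.496e8                 # kilometers in one astronomical unit

    baseline_au_per_year = v_sun * seconds_per_year / au_km
    print(f"secular-parallax baseline ~ {baseline_au_per_year:.1f} AU/year")   # ~3.6, i.e. about 4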
Neither of these two methods is useful for individual stars. There
are uncertainties in the brightness measures and in the velocity measures.
Secular parallax is the better choice when the velocity of the Sun is greater
than the average radial velocity of the sample. Statistical parallax is better
when the velocity of the Sun is less than the average radial velocity of the
sample.
The moving cluster parallax uses the motions of individual stars in
nearby star clusters (such as the Hyades cluster) to find the distance to the
cluster. One assumes that the stars in the cluster are about the same age
and about the same distance from Earth. This method gives the Hyades
cluster a distance of 45.53 ± 2.64 pc. The average of HIPPARCOS
trigonometric parallaxes for Hyades members gives a cluster distance of
46.34 ± 0.27 pc. Thus, when comparing the two methods, one can see that
they do not give identical results, even for a nearby object; hence the need
for more data. This clearly indicates the loss in precision as one moves
outward from Earth.
Spectroscopic parallax is a method useful to a distance of about
10,000 parsecs. When the spectrum of a star is observed carefully, it is
possible to determine the surface temperature and the surface gravity of
the star. Knowing these two allows us to determine the intrinsic luminosity
(the brightness emitted by the star). Knowing the luminosity and the flux
(the value received at Earth), one can determine the distance from the
inverse square law. However, this only works for 'normal' stars (stars on
the main sequence), and any given single object might not be normal.
Additionally, this method depends upon theoretical models of stars and is
only as good as the models (which are actually rather good).
In astronomy, the brightness of an object is usually given in terms
of its absolute magnitude, M. This quantity is derived from the logarithm
of its luminosity as seen from a distance of 10 pc. The apparent
magnitude, m (the magnitude as seen by the observer), can be used to
determine the distance D to the object in kiloparsecs (where 1 kpc equals
10^3 pc) as follows:

D = 10^((m - M - 10)/5) kpc
where m represents the apparent magnitude and M represents the absolute
magnitude. For this to be correct both magnitudes must be in the same
frequency band (i.e., one cannot compare blue magnitudes to red
magnitudes) and there can be no relative motion in the radial direction.
There is an additional issue with interstellar extinction. The space between
stars is not empty. It contains gas and dust which makes distant objects
appear fainter and redder than they actually are. One must correct for this.
Measurements of the interstellar extinction are part of fundamental
astronomy.
The difference between apparent and absolute magnitudes (m - M)
is called the distance modulus, and astronomical distances, especially
intergalactic ones, are sometimes tabulated in this way.
As an example of spectroscopic parallax consider the star Spica. Its
apparent magnitude is 0.98. Its spectral type[ii] is B1, which means its
absolute magnitude can range from -3.2 to -5.0. The distance modulus
therefore gives a range in distance of 68.54 to 157.05 pc. HIPPARCOS
measurements give a distance of 80.38 pc. Hence the method can work but
is not very precise.
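The Spica figures are easy to reproduce. The Python sketch below (mine, not the article's) applies the distance modulus to the quoted apparent magnitude and the two ends of the absolute-magnitude range; extinction is ignored.

# Distance from the distance modulus: d(pc) = 10**((m - M + 5) / 5).
def distance_pc(m, M):
    """Distance in parsecs from apparent magnitude m and absolute magnitude M."""
    return 10 ** ((m - M + 5) / 5)

m_spica = 0.98
for M in (-3.2, -5.0):    # range of absolute magnitude for a B1 star
    print(f"M = {M}: d = {distance_pc(m_spica, M):.1f} pc")
# prints about 68.5 pc and 157.0 pc, bracketing the HIPPARCOS value of 80.38 pc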
Wow, all this work and we have yet to leave the Milky Way,
which has a diameter of about 30,000 pc.
Other individual celestial objects can have fundamental distance
estimates made for them under special circumstances. If the expansion of a
gas cloud, like a supernova remnant or planetary nebula, can be observed
over time, then an expansion parallax distance to that cloud can be
estimated. The distance estimate comes from computing how far away the
object must be to make its observed absolute velocity appear with the
observed angular motion.
Expansion parallaxes in particular can give fundamental distance
estimates for objects that are very far away, because supernova ejecta have
large expansion velocities and large sizes (compared to stars). Further,
they can be observed with radio interferometers which can measure very
small angular motions. These combine to mean that some supernovae in
other galaxies have fundamental distance estimates. Though valuable,
such cases are quite rare, so they serve as important consistency checks on
the distance ladder rather than workhorse steps by themselves.
The Standard Candle
To move outward in distance one starts with trigonometric
parallaxes, then observes the same object with the other types of less
precise parallaxes to calibrate and scale them. Once this is done one has
the distance ladder reaching about 10,000 pc - halfway across the Milky
Way.
At this point one must put aside the parallax method and use other
methods. With few exceptions, distances based on direct measurements
are available only out to about a thousand pc, which is a modest portion of
our own Galaxy. For distances beyond that, measurements are going to
depend upon physical assumptions, that is, knowledge of the object in
question. One must recognize the object and assume the class of objects is
homogeneous enough that its members can be used for a meaningful
estimation of distance - a standard candle as it were.
Almost all of the remaining rungs on the ladder are standard
candles of one kind or another. A standard candle is an object that belongs
to some class that has a known brightness (i.e., all members of the class
have the same brightness). By comparing the known luminosity of a class
member to its observed brightness, the distance to the object can be computed
using the inverse square law.
Two problems exist for any class of standard candle. The principal
one is calibration, determining exactly what the absolute magnitude of the
candle is. This includes defining the class well enough that members can
be recognized, and finding enough members with well-known distances
that their true absolute magnitude can be determined with enough
accuracy. The second lies in recognizing members of the class, and not
mistakenly using the standard candle calibration upon an object which
does not belong to the class. At extreme distances, which are where one
most wishes to use a distance indicator, this recognition problem can be
quite serious.
Another significant issue with standard candles is the question of
how standard they are. For example, all observations seem to indicate that
Type Ia supernovae that are of known distance have the same brightness
(corrected by the shape of the light curve); however, the possibility that
the distant Type Ia supernovae have different properties than nearby Type
Ia supernovae exists. The use of Type Ia supernovae is crucial in
determining the correct cosmological model. If indeed the properties of
the Type Ia's are different at large distances, i.e., if the extrapolation of
their calibration to arbitrary distances is not valid, ignoring this variation
can dangerously bias the reconstruction of the cosmological parameters.
That this is not merely a philosophical issue can be seen from the
history of distance measurements using Cepheid variables (this technique
is described below). In the 1950s, astronomer Walter Baade discovered
that the nearby Cepheid variables used to calibrate the standard candle
were of a different type than the more distant ones used to measure
distances to nearby galaxies. The nearby Cepheid variables were young,
massive stars with much higher metal content than the distant old, faint
ones. As a result, the old stars were actually much brighter than believed,
and this had the ultimate effect of doubling the distances to the globular
clusters, the nearby galaxies, and the diameter of the Milky Way.
Cepheids
Now that Cepheids have been mentioned, let me discuss this
important class of stars. They are a crucial rung in the distance ladder.
Cepheids are luminous variable stars that radially pulsate. The strong
direct relationship between a Cepheid's luminosity and its pulsation period
makes them an important standard candle for Galactic and extragalactic
sources. Type I Cepheids undergo pulsations with very regular periods on
the order of days to months.
A relationship between the period and luminosity for Type I
Cepheids was discovered in 1908 by Henrietta Swan Leavitt in her
investigation of thousands of variable stars in the Magellanic Clouds.
Figure 4 is the figure from her discovery paper of 1912.
To use them as standard candles, one observes the pulsation period
to get the luminosity (absolute magnitude). By then measuring the
apparent brightness (value observed at Earth) one has everything needed
to use the distance modulus m - M. The work was so important that
Leavitt was considered for the Nobel Prize, but she died before her name
could be submitted.
One needs a distance measurement from some other method for at
least one Cepheid to correlate the class to the distance ladder. The original
Cepheid variable, delta Cephei, is close enough that we have a
trigonometric parallax measurement for it. Figure 5 shows the data from
HIPPARCOS for delta Cephei.
Figure 4. (From Leavitt's publication; the horizontal axis is the logarithm of the period in
days): "In Figure 2 ... a straight line can readily be drawn ... showing that there is a simple
relation between the brightness of the variables and their periods.... Since the variables are
probably at nearly the same distance from the Earth, their periods are apparently associated
with their actual emission of light, as determined by their mass, density, and surface
brightness."
- Leavitt (1912)
Figure 5. The period-luminosity plot for delta Cephei (HIP 110991) from Hipparcos data. The
dots are the observations; the line is the predicted curve.
In addition, using data from the HIPPARCOS astrometry satellite,
astronomers calculated the distances to many Galactic Cepheids using the
trigonometric parallax technique. The resultant period-luminosity
relationship for Type I Cepheids was:

M_V = -2.81 log(P) - (1.43 ± 0.1)
where M_V is the absolute magnitude and P is the period. The uncertainty is
now much greater than the uncertainty for the AU. Once the class is
calibrated using Milky Way Cepheids, one can then move out to the
nearby Small Magellanic Cloud using Cepheids found there and leap-frog
to the Large Magellanic Cloud and outward to the Andromeda galaxy.
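To make the chain of steps concrete, the Python sketch below (mine, not the article's) combines the period-luminosity relation quoted above with the distance modulus for delta Cephei. The period and mean apparent magnitude used are approximate, and interstellar extinction is ignored.

import math

def cepheid_absolute_magnitude(period_days):
    """M_V from the relation M_V = -2.81 log10(P) - 1.43."""
    return -2.81 * math.log10(period_days) - 1.43

def distance_pc(m, M):
    """Distance in parsecs from the distance modulus m - M."""
    return 10 ** ((m - M + 5) / 5)

P = 5.37      # pulsation period of delta Cephei in days (approximate)
m_V = 3.9     # mean apparent visual magnitude (approximate)

M_V = cepheid_absolute_magnitude(P)
print(f"M_V = {M_V:.2f}, distance = {distance_pc(m_V, M_V):.0f} pc")   # roughly 300 pc

The result lands near 300 pc, the right neighborhood for delta Cephei; a careful determination would also correct for extinction.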
To date, NGC 3370, a spiral galaxy in the constellation Leo,
contains the farthest Cepheids yet found at a distance of 29 Mpc. Cepheid
variable stars are in no way perfect distance markers: for nearby galaxies
they have an error of about 7% and up to a 15% error for the most distant.
There are a number of technical issues with Cepheids. Tying down
the errors is a difficult process and is ongoing. In fact, in 2009 astronomer
Allan Sandage said that the existence of a universal period-luminosity
relation for classical Cepheids is an illusion that is justified only historically.
Nevertheless they remain an important rung on the distance ladder.
The same principle used for Cepheids applies to RR Lyrae variable
stars. RR Lyrae variables are periodic variable stars, commonly found in
globular clusters, and often used as standard candles to measure galactic
distances. This type of variable is named after the prototype, the variable
star RR Lyrae in the constellation Lyra. Once one knows that a star is an
RR Lyrae variable (e.g., from the shape of its light curve), then one knows
its luminosity. From that one can determine the distance. RR Lyrae stars
are useful standard candles for objects within the Milky Way.
Binary Stars
Binary star systems[iii] are very important in astronomy because
calculations of their orbits allow the masses of their component stars to be
directly determined, which in turn allows indirect estimates of other stellar
parameters, such as radius and density. This also determines an empirical
mass-luminosity relationship from which the masses of single stars can be
estimated. Binaries can sometimes be used as distance indicators.
Binary stars are often detected optically, in which case they are
called visual binaries. These binaries are seen as two separate stars. Many
visual binaries have long orbital periods of several centuries or millennia
and therefore have orbits which are uncertain or poorly known. Binary
stars may also be detected by indirect techniques, such as spectroscopy
(spectroscopic binaries). If a binary star happens to orbit in a plane along
our line of sight, its components will eclipse and transit each other; these
pairs are called eclipsing binaries, or, if they are detected by their changes
in brightness during eclipses and transits, photometric binaries.
The distance to a visual binary star may be estimated from the
masses of its two components, the size of their orbit, and the period of
their revolution around one another. A dynamical parallax is an (annual)
parallax which is computed from such an estimated distance.
In the last decade, the advent of 8 meter class telescopes has
enabled the measurement of eclipsing binaries’ fundamental parameters
(e.g., mass, radius). This makes it feasible to use them as indicators of
distance. Recently, they have been used to give direct distance estimates to
the Large Magellanic Cloud, the Small Magellanic Cloud, the Andromeda
galaxy, and the Triangulum galaxy. These galaxies are in the Local Group
- the group of galaxies that contains our Milky Way. Eclipsing binaries
offer a direct method to gauge the distance to galaxies to a new improved
5% level of accuracy out to a distance of around 3 Mpc.
Andromeda is about 778 kpc from the Milky Way. Astronomers
first used an eclipsing binary to determine the precise distance to the
Andromeda galaxy in 2005. This distance is in excellent agreement with
other, less direct determinations. The binary, known as
M31VJ00443799+4129236 (I had to share the name!), has two hot blue
stars of spectral types O and B. As the stars orbit each other every 3.54969
days, they pass in front of and behind each other.
By comparing the absolute and apparent magnitudes of the two
stars, the astronomers concluded the Andromeda Galaxy is 2.52±0.14
million light-years from Earth. This agrees perfectly with the Cepheid-
based distance to Andromeda: 2.5 million light-years. The distance,
however, does not depend on first assuming a distance to the Large
Magellanic Cloud and leap-frogging from there. The agreement means
astronomers can probably trust Cepheid distances to more distant galaxies,
such as those in the Virgo and Fornax clusters.
Nevertheless the uncertainty is now at the 5% level.
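A quick unit conversion (mine, not the article's) confirms that the eclipsing-binary distance quoted in light-years matches the 778 kpc figure given at the start of this passage.

# Convert 2.52 +/- 0.14 million light-years to kiloparsecs (1 pc = 3.2616 ly).
LY_PER_PC = 3.2616

d_kpc = 2.52e6 / LY_PER_PC / 1.0e3
err_kpc = 0.14e6 / LY_PER_PC / 1.0e3
print(f"{d_kpc:.0f} +/- {err_kpc:.0f} kpc")   # about 773 +/- 43 kpc, consistent with 778 kpc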
Beyond Cepheids
A succession of distance indicators, which is the distance ladder, is
needed for determining distances to other galaxies. Objects bright enough
to be recognized and measured at large distances are so rare that few or
none are present nearby, so there are too few examples close enough with
reliable trigonometric parallax to calibrate the indicator. For example,
Cepheid variables, one of the best indicators for nearby spiral galaxies,
cannot be satisfactorily calibrated by trigonometric parallax alone. There
are not enough overlapping stars. The situation is further complicated by
the fact that different stellar populations generally do not have all types of
stars in them. Cepheids in particular are massive stars, with short lifetimes,
so they will only be found in places where stars have very recently been
formed. Consequently, because elliptical galaxies usually have long
ceased to have large-scale star formation, they will have few or no
Cepheids. Instead, distance indicators whose origins are in an older stellar
population (like RR Lyrae variables) must be used. However, RR Lyrae
variables are less luminous than Cepheids (so they cannot be seen as far
away as Cepheids can).
Because the more distant steps of the cosmic distance ladder
depend upon the nearer ones, the more distant steps include the effects of
errors in the nearer steps, both systematic and statistical ones. The result of
these propagating errors means that distances in astronomy are rarely
known to the same level of precision as measurements in the other
sciences, and that the precision necessarily is poorer for more distant types
of object.
There are still only about 30,000 stars with relative parallax
accuracy better than 10%. And these stars are all in the solar neighborhood
with few Cepheids and RR Lyraes.
Another concern, especially for the very brightest standard candles,
is their "standardness": how homogeneous the objects are in their true
absolute magnitude. For some of these different standard candles, the
homogeneity is based on theories about the formation and evolution of
stars and galaxies, and is thus also subject to uncertainties in those aspects.
For the most luminous of distance indicators, the Type Ia supernovae, this
homogeneity is known to be poor; however, no other class of object is
bright enough to be detected at such large distances, so the class is useful
simply because there is no real alternative.
So where do we go from here? We look at Type Ia supernovae.
Type Ia Supernovae
Figure 6 shows a supernova in the galaxy NGC 4526. The
supernova is the bright spot in the lower left. It is, temporarily, as bright as
the galaxy. Type Ia supernovae (SN) have a very well-determined
maximum absolute magnitude as a function of the shape of their light
curve and are useful in determining extragalactic distances up to a few
hundred Mpc.
Figure 6. Supernova SN 1994D in the NGC 4526 galaxy (bright spot on the lower left).
Image by NASA, ESA
Type Ia SN are some of the best ways to determine extragalactic
distances. Ia's occur when a white dwarf in a binary system begins to accrete
matter from its companion star. As the white dwarf gains matter,
eventually it reaches its Chandrasekhar limit of 1.4 solar masses. This is as
massive as it can get. Once that limit is reached, the white dwarf star
becomes unstable and undergoes a runaway nuclear fusion reaction.
Because all Type Ia SN explode at about the same mass, their absolute
magnitudes are all the same. This makes them very useful as standard
candles. All Type Ia SN have a standard blue and visual magnitude of

M_B ≈ M_V ≈ -19.3 ± 0.3.
Therefore, when observing a Type Ia SN, if it is possible to determine
what its peak magnitude was, then its distance can be calculated. It is not
necessary to capture the SN directly at its peak magnitude. Compare the
shape of the light curve (taken at any reasonable time after the initial
explosion) to a family of parameterized curves to determine the absolute
magnitude at the maximum brightness. This method also takes into account
interstellar extinction/dimming from dust and gas. So the method is to
observe the SN to get its apparent brightness at maximum, and since the
absolute luminosity is known, use the distance modulus to get the
distance.
Using Type Ia SN is one of the better methods, particularly since
SN explosions can be visible at great distances (their luminosities rival
that of the galaxy in which they are situated), much farther than Cepheid
variables (500 times farther). Much time has been devoted to the refining
of this method. The current uncertainty approaches a mere 5%,
corresponding to an uncertainty of just 0.1 magnitudes.
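As a concrete illustration of the standard-candle arithmetic, the Python sketch below (mine, not the article's) applies the peak absolute magnitude of -19.3 quoted above to a made-up apparent peak magnitude; the light-curve-shape and extinction corrections described in the text are omitted.

def sn_distance_mpc(m_peak, M_peak=-19.3):
    """Distance in megaparsecs from the distance modulus of a Type Ia supernova."""
    d_pc = 10 ** ((m_peak - M_peak + 5) / 5)
    return d_pc / 1.0e6

print(f"{sn_distance_mpc(19.0):.0f} Mpc")   # a supernova peaking at m = 19.0 lies near 460 Mpc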
There are other specialized distance indicators, but they are
typically empirical and lack a robust underpinning.
Future Possibility
The D-σ relation, used in elliptical galaxies, relates the angular
diameter (D) of the galaxy to its velocity dispersion (σ). The observed
velocity dispersion is the result of the superposition of many individual
stellar spectra, each of which has been Doppler shifted because of the
star's motion within the galaxy. Therefore, σ can be determined by
analyzing the integrated spectrum of the whole galaxy; the galaxy
integrated spectrum will be similar to the spectrum of the stars which
dominate the light of the galaxy, but with broader absorption lines due to
the motions of the stars. The velocity dispersion is a fundamental
parameter because it is an observable that better quantifies the potential
well of a galaxy.
The parameter D is, more precisely, the galaxy's angular diameter
out to the surface brightness level of 20.75 B-mag arcsec^-2. This surface
brightness is independent of the galaxy's actual distance from us. Instead,
D is inversely proportional to the galaxy's distance. This relation between
D and σ is:

log10(D) = 1.333 log(σ) + C

where C is a constant which depends on the distance to the galaxy clusters.
This method has the possibility of becoming one of the strongest
methods of extragalactic distance calculation. As of today, however,
elliptical galaxies are not bright enough to provide a calibration for this
method through the use of techniques such as Cepheids. So instead
calibration is done using cruder methods like supernovae.
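In practice the relation is used differentially: the constant C cancels when two galaxies are compared. The Python sketch below (mine, with made-up numbers) illustrates that ratio logic, using the fact that the quantity sigma**1.333 / D scales with distance.

def relative_distance(sigma_km_s, angular_diameter):
    """Quantity proportional to distance; the calibration constant cancels in ratios."""
    return sigma_km_s ** 1.333 / angular_diameter

near = relative_distance(200.0, 40.0)   # hypothetical nearby elliptical
far = relative_distance(200.0, 10.0)    # hypothetical distant elliptical with the same sigma
print(f"distance ratio (far / near) = {far / near:.1f}")   # 4.0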
Conclusion
The purpose of the ladder is to obtain the Hubble constant, H₀.
Hubble observed that the fainter the galaxy, the more its spectrum was
redshifted. Figure 7 shows Hubble's original plot. Finding the value of the
Hubble constant was the result of decades of work by many astronomers,
both in amassing the measurements of galaxy redshifts and in calibrating
the steps of the distance ladder. Hubble’s Law relates redshift to distance
and is the primary means we have for estimating the distances of quasars
and distant galaxies in which individual distance indicators cannot be
seen. The redshift, z, is given by the shift in the spectral line:

z = λ/λ₀ - 1, and

cz = H₀ d

where λ is the observed wavelength, λ₀ is the emitted wavelength, and d is
the distance. So if one can measure the redshift, z, then one can determine
the distance.
A 2011 determination of H₀ is H₀ = 73.8 ± 2.4 (km/s)/Mpc. By
linking Cepheid observations in the galaxy NGC 4258 with those in host
galaxies of supernovae, this recent determination claims to reduce the
error to less than 5%. The age of the Universe is inversely proportional to
H₀. This puts an uncertainty of about ±0.1 billion years on the age of the
Universe.
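The arithmetic behind these statements is short. The Python sketch below (mine, not the article's) applies cz = H0*d at a small redshift and computes the Hubble time 1/H0, which sets the scale of the age of the Universe; the precise age depends on the adopted cosmological model.

C_KM_S = 299_792.458     # speed of light, km/s
H0 = 73.8                # Hubble constant, km/s per Mpc

def distance_mpc(z):
    """Distance in Mpc from cz = H0 * d (valid only for small redshifts)."""
    return C_KM_S * z / H0

KM_PER_MPC = 3.0857e19
SECONDS_PER_YEAR = 3.156e7
hubble_time_gyr = KM_PER_MPC / H0 / SECONDS_PER_YEAR / 1.0e9

print(f"z = 0.01  ->  d = {distance_mpc(0.01):.0f} Mpc")        # about 41 Mpc
print(f"Hubble time 1/H0 = {hubble_time_gyr:.1f} Gyr")          # about 13.2 Gyr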
After all this we realize that the age of the Universe and the
Hubble constant ultimately depend upon that first rung of the ladder
determined by astronomers who work in the field of astrometry. Figure 8
is an old cartoon showing this dependence. It does not really overstate the
case. Errors in the cosmic distance ladder do indeed increase with
distance. As one ascends the ladder, the more precise the preceding rung
is, the more precise the next rung can be.
Figure 7. Hubble's original plot relating redshift (y axis) to distance (x axis).
Figure 8. "The Astronomical Pyramid," a cartoon illustrating the interdependence of the
various areas of study ("Get back to basics: support astrometry"). The only problem I have
with this cartoon is that they are all men!
[i] John J. O'Connor, Edmund F. Robertson (1999). Abu Arrayhan Muhammad ibn Ahmad
al-Biruni, MacTutor History of Mathematics archive, U. St. Andrews, Scotland.
[ii] This is a scale based on the spectrum that runs from hot (O stars) to cool (M stars). The
scale has seven levels: OBAFGKM. Each level defines a range of absolute magnitudes.
[iii] These are stars that orbit each other.
Outgoing President’s Remarks
WAS Awards Banquet - May 2011
Mark Holland
Greetings to all: Ladies and Gentlemen. WAS members,
distinguished awardees, honored guests. The awards banquet is one of the
highlights of the Washington Academy calendar. Allow me to begin by
thanking all of the many people responsible for putting together this
evening’s program and handling the logistics: the WAS banquet and
Awards Committees and our Executive Director, Peg Kay. We thank you
all for your hard work.
The founding fathers of the WAS established the Academy for the
purpose of “encouraging scientific endeavor in all its forms.” As this
year’s Academy President, I suppose that makes me the 113th head
cheerleader for the Washington area science team. We have a lot to cheer
about. It still amazes me that within our relatively small geographic area,
more than 60 scientific societies and institutions are affiliated with the
WAS. What a vital atmosphere and what fertile ground for the scientific
endeavor!
During the past year, the Academy continued its long-standing
support of junior scientists through the activities of the Junior Academy
and STARS program. It is wonderful to see that Dick Davies has so ably
taken over the reins of this program from Paul Hazen and that it is
thriving. Congratulations to Dick and to Paul for a job well done. Last fall
we sponsored a lecture by Capt. Phillip Renaud of the Living Oceans
Foundation on their work charting reef communities. We also presented a
second installment of our “Science is Murder” series, a panel discussion
by popular mystery writers who incorporate science themes in their work.
Both of these programs were well attended and well received. The
affiliates’ reception, held in November at AAAS, featured a talk by Dr.
Gene Williams, our Vice President for Affiliated Societies, on a scientific
expedition to Iceland.
Of all our activities during 2011, the one that makes me the most
happy is the establishment of College and University Chapters of the
Washington Academy. With the establishment of these chapters, the
Academy has a mechanism for reaching out to the next generation of
scientists and welcoming them into the profession. I encourage any of you
here in the audience this evening with ties to a local college or university
to consider starting an Academy Chapter. Please contact me for additional
information. The upcoming CapSci meeting in 2012 will be a great
opportunity for our college and university students to showcase their work
and make contact with working professionals whose scientific interests
mesh with their own.
Finally, I invite each of you and the societies you represent to take
an active role in the work of the WAS. We cheerleaders of the Academy
are here to encourage the scientific endeavor. Help us as active
participants to identify ways in which we can be more supportive of your
work and the collective work of our scientific community.
Thank you to the Academy and to all of you for your
encouragement and support during the past year. I look forward to
continuing to work with the Academy and our next President (head
cheerleader), Gerry Christman.
AWARDEES
Physical Sciences
Gerald Fraser
Social and Behavioral Sciences
Gary E. Machlis
Health Sciences
Naomi L. Corman Luban
Biological Sciences
Mina Izadjoo
Leo Schubert Award for the Teaching of Science in College
David L. Trauger
Engineering Sciences
Neal F. Schmeidler
Banquet Speaker
Sam Kean
Can the Periodic Table Tell a Story?!
May 12, 2011
Minutes by Ron Hietala of Sam Kean’s Talk
On the occasion of the Annual Meeting and Award Ceremony of the
Washington Academy of Sciences, Dr. Jacqueline Maffucci introduced
Sam Kean, the featured speaker. Mr. Kean is a correspondent for Science
magazine and recently published his first book, The Disappearing Spoon
and other True Tales of Madness, Love, and the History of the World
from the Periodic Table of the Elements (Little, Brown, 2010). The book
has received very favorable reviews and comments. Briefly, reviewers
find it, as do I, an unusually good read and a fun, interesting book about
science, especially science history. Mr. Kean’s address to the Academy
was also a recount of tales from the Table.
When he was a kid in the third grade or so, Mr. Kean had a streak of strep throat
infections. He stayed home from school for several periods. His mom
took his temperature with an old fashioned thermometer, and more than
once, Mr. Kean, who says he was a clumsy kid, dropped it and broke it.
He was secretly excited by this. Brilliant spheres of mercury scattered
about the floor. His mother got down with a toothpick to scoot the blobs
together. It was cause for wonder how two blobs would come close
together, and, with a jiggle, jump into one slightly larger blob. Mercury
was so neat that the Keans kept it in a little pill bottle. Sam's mother
would get it down sometimes and show it to the kids. That was how he
got started on the Periodic Table.
When they gave him the Table in school, he looked for mercury and did
not find it. When he learned the symbol for mercury was Hg, he found
that really strange; neither of those letters actually occurs in the word
mercury. He asked around about it and found the letters come from Latin
and Greek words.
This led to further inquiry. He found the element had been known since
ancient times. There was a god with the same name and a planet named for
it. Alchemists used mercury in their experiments and demonstrations.
Nearer home, Mr. Kean found some more history involving mercury. In
South Dakota, they always had a long section of history courses devoted to
Lewis and Clark. Benjamin Rush, a physician who was also one of the
signers of the Declaration of Independence, supported the Expedition.
Rush stayed behind to fight a yellow fever epidemic but did leave a mark
by assisting them nevertheless.
One of Rush’s favorite treatments was a mercury chloride sludge. He
prescribed this often, sometimes until his patients’ hair and teeth fell out
and they drooled. He also had a patented mercuric pill called "Dr. Rush’s
Bilious Pills.” About four times the size of an aspirin, 600 of them went
with Lewis and Clark. They were powerful laxatives, popularly referred to
as thunderclappers, and Rush encouraged their liberal use. They flushed
people’s systems very effectively. Historians and archeologists today can
pinpoint the locations of some Lewis and Clark camps by concentrations
of mercury in the soil.
From this one element, Mr. Kean said, he learned much beyond chemistry.
He learned about history, alchemy, etymology, poisons, and psychology.
He gravitated toward the teachers who told stories that included such
broad context with their material, and that was the pattern he followed in
The Disappearing Spoon.
Take aluminum, for instance. Today it is one of the most common metals.
For a long time, it was more precious than silver and gold. This was
because it was very hard to get it purified, to separate it from the oxygen.
When scientists did start to get it, it was considered miraculous; it was
light, strong, and beautiful. Kings and emperors wanted it. Mr. Kean
showed a picture of an aluminum sculpture used by Napoleon III as a
centerpiece. Gold items held places off the center. Napoleon III also had
an aluminum cutlery set used by his most important guests, while less
favored guests used the gold pieces. The top of the Washington Monument
was finished with a small square of aluminum, to show the wealth of the
developing nation in 1884.
Not long after that, chemists figured out how to separate aluminum
efficiently. One of the chemists, Charles Martin Hall, formed a company
called Alcoa, which started shipping aluminum at the breathtaking rate of
50 pounds a day. The price of aluminum dropped quickly from dozens of
dollars an ounce to 25 cents a pound.
So aluminum has all the elements of a great story. It had a romantic
history, a breakthrough development, and a great change in practice.
Finally, it had a new and changed state of being. Your interpretation may
depend on your temperament, whether aluminum was better off as a
precious metal or a useful metal.
Cadmium has a similar story arc. Early on, it found use as a pigment to
make red and yellow paints. Painters favored the vibrant cadmium
colors. In Japan about the time of WWI, they were refining zinc.
Cadmium has similar properties to zinc, and some of the processes
yielded cadmium that contaminated the zinc. When they got the
cadmium separated, they dumped it in the streams. It followed the
streams down to the rice paddies. Rice, it seems, is a cadmium sponge,
and farm families soon experienced problems including kidney disease,
pain and brittle bones. One woman reportedly had her wrist broken by a
doctor taking her pulse. They called it itai-itai (ouch-ouch) disease, for
the pained shouts of the afflicted.
A local doctor figured out that what was happening was chronic
cadmium poisoning. A long and tortuous civil action resulted in a large
settlement for the victims, and cadmium became a symbol for evil in
Japan.
Even in the 1980s, in making another in the Godzilla series of movies, the
evils of cadmium were summoned. To kill off Godzilla, the heroes used
missiles tipped with cadmium. Cadmium, even in 1980, was the nastiest
thing they could imagine. Considering that Godzilla was himself the
biological accident of an H-bomb explosion, that is quite a distinction for
cadmium.
Mr. Kean speculated that, had he given this talk a title, it might have been
"Can the Periodic Table Tell a Story...." Not "Can the Periodic Table Tell
a Story?" but "Boy, Can the Periodic Table Tell a Story!" You can really
learn a lot of science from the Periodic Table. People remember and
absorb better what they learn through stories. The personalities of the
people also tell us things.
People eat and breathe the Periodic Table. They bet huge sums of money
on it. Philosophers use it to probe the meaning of science. It even spawns
wars sometimes.
During the cold war, the Periodic Table was a contested field. Science was
then acknowledged to be led by European scientists, who viewed
Americans as upstarts. Elements then discovered might have been named
for Alabama, Illinois, or Virginia, where they were discovered. The
European scientists who were in charge then did not trust the American
claims. Later European scientists found these elements and named them
things like Francium. After WWII, however, Americans, especially the
Berkeley group, started filling in box after box after box.
When the Soviets got going in science during the cold war, they generally
had the support of Stalin. That was with the exception of the new variety
of physics. Stalin, who considered himself an intellectual authority on just
about everything, was suspicious of the developing sciences of relativity
and quantum mechanics. He wanted them gone. He thought they were
spooky and counterintuitive. He was planning to order the purge when he
was told by a brave adviser that this might hurt the nuclear weapon
program. Stalin thought about that a few seconds and then said, "Leave the
physicists in peace; we can always shoot them later.” On that basis, Soviet
atomic physics moved ahead apace.
Soviet scientists were more comfortable studying elements. These new
elements had obvious value and obvious validity. Their accomplishments
were ones that the whole Soviet Union could be proud of. They did make
progress, and in 1963, they finally beat the Americans at what had become
the Americans’ game; they discovered an element before the Americans
did.
Then, the Americans treated the Soviets the same way the Europeans had
treated the Americans. The Americans refused to admit the validity of the
proof of the new element. Subsequent discoveries were contested at length
and with strength, and the disputes over who had discovered them
outlasted the cold war.
The Americans managed to get one named Seaborgium, after Glenn
Seaborg. This seemed quite boorish elsewhere in the world, as the
tradition had been that you had to be dead before you could enjoy such an
honor. Mr. Kean showed a picture of Mr. Seaborg as an elderly, smiling
cherub looking proudly at his box, box 106, on the Table. New rules were
made after that, and if they last, Mr. Seaborg will be the only living man
ever so fortunate as to have his name in the Table. Mr. Kean calls the
Table "the most limited real estate in science,” as there are only 100 plus a
few spaces on it.
In writing The Disappearing Spoon, Mr. Kean spoke to many scientists,
some of whom had not looked at the Table in decades. Some were
surprised at how much this very fundamental construct had changed. It
used to be only eight boxes wide. Some elements did not even rate their
own box; they shared one with another element across a diagonal line.
Many elements have been added.
One might wonder, are they going to discover more elements? Presumably
so. The most recent one was added only a little over a year ago. It is called
by a temporary name, ununseptium, which is Latin for 117. It filled the
bottom row and made the Table a perfect rectangle, and that may be the
last time that occurs, also. The larger elements are very fragile and last a
few seconds at most, so discovery is likely to be more haphazard from this
point forward.
The Table has been pictured in many ways. It has been portrayed as a
galaxy, as board games, as a sort of double helix, maps, mobius strips, and
even as a Rubik's cube. (That last one was patented.) Mr. Kean found one
woman who went to a photomat and made about 120 pictures of herself.
She used them to decorate a Periodic Table which she keeps on her
refrigerator. Mr. Kean doesn’t know how some of these unorthodox Tables
are useful, but he is pleased that people continue to tinker with the Table.
Finally, Mr. Kean pointed out that the Table, in addition to being a very
compact scientific heuristic, has implications that are very broad. Astro-
scientists have searched for ways to try to communicate with intelligent
life elsewhere in the universe. This is a tough question, since these beings
would likely share little of our culture. How could we communicate with
them; what might we have in common with them? Various ideas were
offered, such as prime numbers and pi, the relationship of the
circumference of a circle to its diameter. Mr. Kean liked the notion of
using the Periodic Table. There are only 100-some elements in the
universe and it seems likely intelligent beings would know of them. It is a
literally universal concept, perhaps the most nearly perfectly universal
concept we know, and the elements are arranged the same everywhere.
After the talk, Mr. Kean offered to answer questions and sell books. The
appreciative audience took both offers.
DELEGATES TO THE WASHINGTON ACADEMY OF SCIENCES
REPRESENTING AFFILIATED SCIENTIFIC SOCIETIES
Acoustical Society of America - Paul Arveson
American/International Association of Dental Research - J. Terrell Hoffeld
American Association of Physics Teachers, Chesapeake Section - Frank R. Haig, S.J.
American Fisheries Society - Ramona Schreiber
American Institute of Aeronautics and Astronautics - David W. Brandt
American Institute of Mining, Metallurgy & Exploration - Michael Greeley
American Meteorological Society - Kenneth Carey
American Nuclear Society - Steven Arndt
American Phytopathological Society - Kenneth L. Deahl
American Society for Cybernetics - Stuart Umpleby
American Society for Microbiology - VACANT
American Society of Civil Engineers - Kimberly Hughes
American Society of Mechanical Engineers - Daniel J. Vavrick
American Society of Plant Physiology - Mark Holland
Anthropological Society of Washington - Marilyn London
ASM International - Toni Marechaux
Association for Women in Science (AWIS) - Jodi Wesemann
Association for Computing Machinery - Kent Miller
Association for Science, Technology, and Innovation - F. Douglas Witherspoon
Association of Information Technology Professionals - Barbara Safranek
Biological Society of Washington - F. Christian Thompson
Botanical Society of Washington - Emanuela Appetiti
Chemical Society of Washington - Jim Zwolenik
District of Columbia Institute of Chemists - Jim Zwolenik
District of Columbia Psychology Association - David Williams
Eastern Sociological Society - Ronald W. Mandersheid
Electrochemical Society - Robert L. Ruedisueli
Entomological Society of Washington - F. Christian Thompson
Geological Society of Washington - Bob Schneider
Historical Society of Washington, DC - VACANT
Human Factors and Ergonomics Society - Michael Eidelkind
Institute of Electrical and Electronics Engineers, Washington DC Section - Richard Hill
Institute of Electrical and Electronics Engineers, Northern Va. Section - Murty Polavarapu
Institute of Food Technologies - Isabel Walls
Institute of Industrial Engineers - Neal F. Schmeidler
Instrument Society of America - Hank Hegner
Marine Technology Society - Judith T. Krauthamer
Mathematical Association of America - Sharon K. Hauge
Medical Society of the District of Columbia - Duane Taylor
National Capital Astronomers - Jay H. Miller
National Geographic Society - VACANT
Optical Society of America - Jim Cole
Pest Science Society of America - VACANT
Philosophical Society of Washington - Peg Kay
Society of American Foresters - Denise Ingram
Society of American Military Engineers - VACANT
Society of Experimental Biology and Medicine - VACANT
Society of Manufacturing Engineers - VACANT
Soil and Water Conservation Society - Bill Boyer
Technology Transfer Society - Clifford Lanham
Virginia Native Plant Society, Potomac Chapter - VACANT
Washington Evolutionary Systems Society - Jerry L.R. Chandler
Washington History of Science Club - Albert G. Gluckman
Washington Chapter of the Institute for Operations Research and Management - Russell R. Vane III
Washington Paint Technology Group - VACANT
Washington Society of Engineers - Alvin Reiner
Washington Society for the History of Medicine - Alain Touwaide
Washington Statistical Society - Mike Cohen
World Future Society - Russell Wooten
Volume 97
Number 3
Fall 2011
Journal of the
WASHINGTON
ACADEMY OF SCIENCES
Editor's Comments J. Maffucci i
Haig Dedication F. Haig ii
Instructions to Authors v
Affiliated Institutions vi
2011 State of the Future - Analysis Summary J. Glenn, T. Gordon, E. Florescu 1
The Global Reef Expedition: Science without Borders® A. Bruckner 19
Exoplanets S. Howard 33
Innovations in STEM Education S. James and C. Marrett 55
Membership Application 67
ISSN 0043-0439
Issued Quarterly at Washington DC
Washington Academy of Sciences
Founded in 1898
Board of Managers
Elected Officers
President
Gerard Christman
President Elect
James Cole
Treasurer
Larry Millstein
Secretary
Terrell Erickson
Vice President, Administration
Jim Disbrow
Vice President, Membership
Sethanne Howard
Vice President, Junior Academy
Dick Davies
Vice President, Affiliated Societies
Victor Miriel
Members at Large
Denise Ingram
Michael Cohen
Paul Arveson
Frank Haig, S.J.
Neal Schmeidler
Catherine With
Past President Mark Holland
Affiliated Society Delegates
Shown on back cover
Editor of the Journal
Jacqueline Maffucci
Associate Editor
Sethanne Howard
Academy Office
Washington Academy of Sciences
Room 113
1200 New York Ave. NW
Washington, DC 20005
Phone: 202/326-8975
The Journal of the Washington Academy of
Sciences
The Journal is the official organ of the Academy.
It publishes articles on science policy, the history
of science, critical reviews, original science
research, proceedings of scholarly meetings of
its Affiliated Societies, and other items of interest
to its members. It is published quarterly. The last
issue of the year contains a directory of the
current membership of the Academy.
Subscription Rates
Members, fellows, and life members in good
standing receive the Journal free of charge.
Subscriptions are available on a calendar year
basis, payable in advance. Payment must be
made in US currency at the following rates.
US and Canada $30.00
Other Countries $35.00
Single Copies (when available) $15.00
Claims for Missing Issues
Claims must be received within 65 days of
mailing. Claims will not be allowed if non-
delivery was the result of failure to notify the
Academy of a change of address.
Notification of Change of Address
Address changes should be sent promptly to
the Academy Office. Notification should
contain both old and new addresses and zip
codes.
POSTMASTER:
Send address changes to WAS, Rm 113,
1200 New York Ave. NW
Washington, DC 20005
Journal of the Washington Academy of
Sciences (ISSN 0043-0439)
Published by the Washington Academy of
Sciences 202/326-8975
email: was@washacadsci.org
website: www.washacadsci.org
Letter from the Editor
As the school year begins, I want to take a moment to commend any
members who serve as mentors to the next generation of scientists. We all
know the importance of encouraging young students to pursue the
sciences. The Washington Academy of Sciences has been actively
engaged in this through various programs, including the Junior Academy
and CapSci. I want to extend this to the Journal and encourage you to look
upon the Journal as another tool to entice students to enter the world of
research. I would like to devote an issue each year, or perhaps a section
each issue, towards student publications. We at the Journal are happy to
work with students to develop ideas, language, and anything else that they
might need, and in the end, we'd like to showcase their talents. We
welcome all genres of manuscript, including primary research, literature
reviews, opinion pieces, and other formats. We are happy to entertain suggestions. I ask you
to please help me by encouraging your students to pursue this opportunity
to publish. I am sure that not only will it benefit our students, but our
readers as well.
With that, I give you the Fall 2011 Journal of the Washington Academy of
Sciences. We offer four articles. The first article is a general overview of
environmental trends as studied by The Millennium Project and reported
in their 2011 State of the Future report. Once we have you thinking about
these overall trends, we then travel to the ocean to learn about the research
mission of The Global Reef Expedition. This is a truly amazing project
built on cross-cultural cooperation to study the coral reefs, to
understand the extent of damage to these structures, in the hope that we
can then help to protect them. Following this, Exoplanets takes us in the
opposite direction to learn about the meticulous science behind the search
for planets. Finally, we end where I began this letter, with an article
entitled Innovations in STEM Education.
We hope that you enjoy.
Jacqueline Maffucci
Editor, The Journal of the Washington Academy of Sciences
Foreword
The following paper is an offering by the Reverend Frank Haig, S.J.
Rev. Haig is an emeritus professor of physics at Loyola University,
Baltimore. He is a long time member of the Academy, serves on the
Board, and is an active supporter of all Academy activities. He has
published in the Journal before, most recently in 2006, Volume 92-2: “The
Role of Academies of Science in the Critical Examination of New Ideas:
Looking at Gaia.”
His brother was General Alexander Haig. General Haig served his
country in many capacities. He was the Secretary of State under President
Ronald Reagan and White House Chief of Staff for President Richard
Nixon. He served as Vice Chief of Staff of the Army, the second-highest
ranking officer in the Army, and as Supreme Allied Commander Europe
commanding all US and NATO forces in Europe. A veteran of the Korean
and Vietnam Wars, General Haig was a recipient of the Distinguished
Service Cross, the Silver Star with oak leaf cluster, and the Purple Heart.
Rev. Haig generously donated a portion of General Haig’s estate to
the Academy to be used for office operating funds. The Academy is more
than grateful for this donation and asked Rev. Haig to write the following
article for the Journal in honor of his brother.
The Dedicatory Gift for the Office of the Washington
Academy of Sciences
The Reverend Frank Haig, SJ
Loyola University
The estate of General Alexander Meigs Haig, Junior, has made a
special bequest to endow the expenses of the office of the Washington
Academy of Sciences. Naturally one would like to know why General
Haig is associated with such a gift and what its meaning could be.
General Haig was a soldier but a soldier in the American tradition. So was
George Washington. So was Dwight Eisenhower. So was George
Marshall. All of these wanted to protect not just the physical security of
our country but also a whole way of life and culture.
In thinking about this situation I would like to do something a bit out of
the ordinary for a scientific journal. I would like to present the homily
given on the occasion of the funeral of General Haig. If we consider the
occasion and the location we can understand that the words are more
religious in tone than would be the norm for a scientific publication. I beg
your indulgence on that point because the talk really tells us something
profound about General Haig and why he would be delighted to support
the Washington Academy of Sciences and its activities.
The Funeral Homily for General Alexander M. Haig, Jr.
March 2, 2010
In the modem Catholic tradition there are three readings presented at a
funeral liturgy. Let us take a sentence from each and see how they give us
light and hope and some insight into the career of Alexander Haig.
First, the prophet Isaiah proclaims: "On this mountain he will destroy the
veil that veils all peoples."
And Our Lord says: "A city set on a hill cannot be hidden.”
And Saint Peter announces: “ ... you rejoice with inexpressible joy touched
with glory, because you are achieving faith's goal, your salvation.”
What is that veil that veils all peoples, the web that is woven over all
nations, of which the prophet speaks? Isaiah, like every sacred poet, I
would imagine, does not want to limit our thoughts. And so, he is probably
examining ignorance, sin, and death. Alexander Haig held firmly to the
ideals of the American tradition. He believed in the enlightenment that
suffuses our great documents. He strove against the sins of treason and
cowardice. It may sound strange to hear but as a soldier he was against
war and wanted peace. In one famous action in North Korea as a young
captain he saved 14,000 people. His dream was a dream of peace although
not peace through weakness.
He always thought that the way to bring other peoples to the wisdom of
our democratic system was for our nation to be a city set on a hill.
Democracy is not sold by having bigger guns than others. It is made
attractive by showing a life and a culture that others would freely want to
share.
And so, we come to that wonderful expression of Saint Peter in our second
reading: "joy touched with glory."
Whose joy are we speaking of? Well, first, Al’s joy at a life lived with
honor and dignity; our joy in looking at a career that honors our country
and our church; our nation’s joy at the lifework of one of its outstanding
leaders.
It is not a simple joy. All the faithful have a joy if they live a life inspired
by the Spirit of Christ. You can see that truth in Al’s life with his family,
with his wonderful wife Patricia and his children, Alex and Brian and
Barbara. In addition, that joy leads the faithful close to Christ and His
Father. It is a joy that is rich and fruitful and luxuriant and even
extravagant. A joy touched with glory.
Al Haig was a splendid soldier, a brilliant and effective business leader, a
citizen who understood the classic trio of family, faith, and flag, a
profound patriot, the sort of public servant every nation prays to have and
rejoices when that prayer is favorably answered.
INSTRUCTIONS TO AUTHORS
1. Manuscripts should be in Word (Office 03/07) and not PDF.
2. They should be 6,000 words or fewer (exceptions may be made by
the Editor). If there are seven or more graphics, reduce the number
of words.
3. Graphics (photographs, drawings, figures, tables) must be in
graytone only (no color accepted), and be easily resizable by the
editors to fit the Journal’s page size. Do not wrap text around the
graphics.
4. References (and bibliography, if included) may be in the format
generally acceptable for the disciplinary or professional field
represented by the manuscript. They must be accurate, complete,
and consistent in format throughout the paper.
5. Include both an e-mail address and a postal address for the author
(or primary author) including title and institutional affiliation if
any.
6. Papers are peer reviewed.
7. Send Manuscripts by e-mail as an attachment, or on a CD, to
Journal@washacadsci.org or directly to the editor, Jacqueline
Maffucci - iamaffucci@gmail.com. Hard copy cannot be accepted.
Manuscripts can be accepted by any of the Board of Discipline
Editors.
Emanuela Appetiti - anthropology at eappetiti@hotmail.com
Elizabeth Corona - systems science at elizabethcorona@gmail.com
Jim Eigenreider - science education at jim@deepwater.org
Terrell Erickson - environmental natural sciences at terrell.erickson1@wdc.usda.gov
Mark Holland - botany at maholland@salisbury.edu
Kiki Ikossi - engineering at ikossi@ieee.org
Carol Lacampagne - mathematics at clacampagne@earthlink.net
Raj Madhaven - engineering at raj.madhaven@nist.gov
Kent Miller - computer sciences at kent.l.miller@alumni.cmu.edu
Jean Mielczarek - physics and biology at mielczar@physics.gmu.edu
Robin Stombler - health at rstombler@auburnstrat.com
Alain Touwaide - history of medicine at atouwaide@hotmail.com
Steve Tracton - atmospheric studies at straction@hotmail.com
AFFILIATED INSTITUTIONS
The National Institute For Standards and Technology
Meadowlark Botanical Gardens
The John W. Kluge Center of the Library of Congress
Potomac Overlook Regional Park
Koshland Science Museum
American Registry of Pathology
Living Oceans Foundation
Foreword
The Millennium Project was founded in 1996 after a three-year
feasibility study with the United Nations University, Smithsonian
Institution, Futures Group International, and the American Council for the
UNU. It is now an independent non-profit global participatory futures
research think tank of futurists, scholars, business planners, and policy
makers who work for international organizations, governments,
corporations, NGOs, and universities.
The Millennium Project manages a coherent and cumulative process
that collects and assesses judgments from over 2,500 people, selected by
its 40 Nodes around the world, since the beginning of the project. The
Millennium Project has been scanning a variety of sources to produce
monthly reports on emerging environmental issues with potential security
or treaty implications. More than 300 items have been identified during
the past year and about 2,000 items since this work began in August 2002.
The full text of the items and their sources, as well as other Millennium
Project studies are included in Chapter 9 on the CD and are available at
cost on The Millennium Project’s Web site. The work is distilled in its
annual State of the Future, Futures Research Methodology series, and
special studies. The following is excerpted from the 2011 State of the
Future Report available from www.millenniumproject.org.
The 2011 State of the Future ends with some brief conclusions.
The readers are invited to draw their own conclusions and share them
at mp-public@mp.eim3.net (after signing up at
http://www.millennium-project.org/millennium/mp-public.html). The
Millennium Project is on LinkedIn and Twitter @MillenniumProj.
2011 State of the Future - Analysis Summary
Excerpted by: Jim Disbrow
Principals: Jerome Glenn, Theodore Gordon, Elizabeth Florescu
Millennium Project
The world is getting richer, healthier, better educated, more
peaceful, and better connected and people are living longer, yet half the
world is potentially unstable. Food prices are rising, water tables are
falling, corruption and organized crime are increasing, environmental
viability for our life support is diminishing, debt and economic insecurity
are increasing, climate change continues, and the gap between the rich
and poor continues to widen dangerously.
There is no question that the world can be far better than it is — IF
we make the right decisions. When you consider the many wrong
decisions and good decisions not taken — day after day and year after
year around the world — it is amazing that we are still making as much
progress as we are. Hence, if we can improve our decision making as
individuals, groups, nations, and institutions, then the world could be
surprisingly better than it is today.
Now that the Cold War seems truly cold, it is time to create a
multifaceted compellingly positive view of the future toward which
humanity can work. Regardless of the social divisions accentuated by the
media, the awareness that we are one species, on one planet, and that it is
wise to learn to live with each other is growing, as evidenced by the
compassion and aid for Haiti, Pakistan, and Japan; the solidarity with
democracy movements across the Arab world; the constant global
communications that connect 30% of humanity via the Internet; and the
growing awareness that global climate change is everyone’s problem to
solve.
Fifty years ago, people argued that poverty elimination was an
idealistic fantasy and a waste of money; today people argue about the
best ways to achieve that goal within 50 years. Twenty-five years ago,
people thought that civilization would end in a nuclear World War III;
today people think everyone should have access to the world’s knowledge
via the Internet, regardless of income or ideology.
The 2011 State of the Future offers no guarantee of a rosy future.
It documents potentials for many serious nightmares, but it also points to
a range of solutions for each. If current trends in population growth,
resource depletion, climate change, terrorism, organized crime, and
disease continue and converge over the next 50 - 100 years, it is easy to
imagine an unstable world with catastrophic results. If current trends in
self-organization via future Internets, transnational cooperation, materials
science, alternative energy, cognitive science, inter-religious dialogues,
synthetic biology, and nanotechnology continue and converge over the
next 50-100 years, it is easy to imagine a world that works for all.
The coming biological revolution may change civilization more
profoundly than did the industrial or information revolutions. The world
has not come to grips with the implications of writing genetic code to
create new lifeforms. Thirteen years ago, the concept of being dependent
on Google searches was unknown to the world; today we consider it quite
normal. Thirteen years from today, the concept of being dependent on
synthetic life forms for medicine, food, water, and energy could also be
quite normal.
Computational biophysics can simulate the physical forces among
atoms, making medical diagnostics and treatment more individually
accurate. Computational biology can create computer matching programs to
quickly reduce the number of possible cures for specific diseases, with
millions of people donating their unused computer capacity to run the
matching programs (grid computing). Computational media allows
extraordinary pixel and voxel detail when zooming in and out of 3D images.
Computational engineering brings together the world’s available information
and computer models to rapidly accelerate efficiencies in design. All these
are changing the nature of science, medicine, and engineering, and their
acceleration is attached to Moore’s law; hence, computational everything will
continue to accelerate the knowledge explosion. Tele-medicine, tele-
education, and tele-everything will connect humanity, the built environment,
and computational everything to address our global challenges.
The earthquakes, tsunamis, and nuclear disasters in Japan exposed the
need for global, national, and local systems for resilience — the capacity to
anticipate, respond to, and recover from disasters while identifying future
technological and social innovations and opportunities. Related to resilience
is the concept of collective intelligence — maybe the “next big thing” to help
us make better decisions.
After 15 years of The Millennium Project’s global futures research, it
is increasingly clear that the world has the resources to address its
challenges. What is not clear is whether the world will make good decisions
fast enough and on the scale necessary to really address the global
challenges. Hence, the world is in a race between implementing ever-
increasing ways to improve the human condition and the seemingly ever-
increasing complexity and scale of global problems.
So, how is the world doing in this race? What’s the score so far? A
review of the trends of the 28 variables used in The Millennium Project’s
global State of the Future Index provides a score card on humanity’s
performance in addressing the most important challenges; see Box 1 and
Figures 1 and 2. Some data in Figures 1-3 had to be adjusted for graphic
illustration purposes; those adjustments are indicated in the respective
labels in brackets.
Figure 1. Where we are winning [chart of the variables corresponding to items 1-18 in Box 1; some values are scaled as indicated in brackets, e.g., GDP/capita [/100], R&D expenditures [*10], HIV (%) [*10]]
Box 1. The World Score Card

Where we are winning
1. Improved water source (percent of population with access)
2. Literacy rate, adult total (percent of people age 15 and above)
3. School enrollment, secondary (percent gross)
4. Poverty headcount ratio at $1.25 a day (PPP) (percent of population) (low- and mid-income countries)
5. Population growth (annual percent) (a drop is seen as good for some countries, bad for others)
6. GDP per capita (constant 2000 US$)
7. Physicians (per 1,000 people) (surrogate for health care workers)
8. Internet users (per 1,000 people)
9. Infant mortality (deaths per 1,000 live births)
10. Life expectancy at birth (years)
11. Women in parliaments (percent of all members)
12. GDP per unit of energy use (constant 2000 PPP $ per kg of oil equivalent)
13. Number of major armed conflicts (number of deaths >1,000)
14. Undernourishment (percent of population)
15. Prevalence of HIV (percent of population 15-49)
16. Countries having or thought to have plans for nuclear weapons (number)
17. Total debt service (percent of GNI) (low- and mid-income countries)
18. R&D expenditures (percent of national budget)

Where we are losing
19. Carbon dioxide emissions (kt)
20. Global surface temperature anomalies
21. People voting in elections (percent of population)
22. Levels of corruption (15 largest countries)
23. People killed or injured in terrorist attacks (number)
24. Number of refugees (per 100,000 total population)

Where there is uncertainty
25. Unemployment, total (percent of total labor force)
26. Non-fossil-fuel consumption (percent of total)
27. Population in countries that are free (percent of total global population)
28. Forestland (percent of all land area)
Figure 2. Where we are losing [chart of global temperature anomalies [*100], CO2 emissions (mmt), terrorism victims [/5000], corruption (15 countries), refugees (per 100,000) [/10], and voting (%)]
Figure 3. Where trends are not clear [chart of forest area (%), population in "free" countries (%), non-fossil-fuel consumption (%), and unemployment (%)]
Figure 4. 2011 State of the Future
An international Delphi panel selected over a hundred indicators
of progress or regress for the 15 Global Challenges, with some of
the driving Key Questions identified below. Indicators were
then chosen that had at least 20 years of reliable historical data and
later, where possible, were matched with variables used in the
International Futures model. The resulting 28 variables shown in Box
1 were integrated into the State of the Future Index (SOFI) with a
10-year projection. SOFIs have also been computed for countries and
could be applied to sectors like communications, health, water, and so
forth.
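To make the construction concrete, the following minimal Python sketch shows one way a composite index of this kind could be computed: each variable is normalized onto a 0-100 scale, with 100 at its "best" end, and the normalized scores are averaged. It is an illustration only, with hypothetical variable names and values; the actual SOFI combines historical data and 10-year projections of the 28 variables described above rather than this simple average.

# Minimal sketch of a SOFI-like composite index; all data here are hypothetical.
def normalize(value, worst, best):
    # Map a raw value onto 0-100, where 100 corresponds to the "best" end.
    return 100.0 * (value - worst) / (best - worst)

def simple_index(indicators):
    # indicators: list of (value, worst, best) tuples, one per variable.
    scores = [normalize(v, worst, best) for v, worst, best in indicators]
    return sum(scores) / len(scores)

# Three hypothetical variables: literacy (%), a CO2 index, extreme poverty (%).
print(simple_index([(84.0, 0.0, 100.0),    # higher literacy is better
                    (33.0, 100.0, 0.0),    # lower CO2 index is better
                    (13.0, 100.0, 0.0)]))  # lower poverty is better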
[Graphic: the 15 Global Challenges facing humanity, beginning with sustainable development and climate change, by The Millennium Project, millennium-project.org]
Key Questions of the 15 Global Challenges
1. How can sustainable development be achieved for all while addressing
global climate change?
2. How can everyone have sufficient clean water without conflicts?
3. How can population growth and resources be brought into balance?
4. How can genuine democracy emerge from authoritarian regimes?
5. How can policymaking be made more sensitive to global long-term perspectives?
6. How can global convergence of information and communications
technologies work for everyone?
7. How can ethical market economies be encouraged to help reduce the
gap between rich and poor?
8. How can the threat of new and reemerging diseases and immune
microorganisms be reduced?
9. How can the capacity to decide be improved as the nature of work and
institutions change?
10. How can shared values and new security strategies reduce ethnic
conflicts, terrorism, and the use of weapons of mass destruction?
11. How can the changing status of women help improve the human
condition?
12. How can transnational organized crime networks be stopped from
becoming more powerful and sophisticated global enterprises?
13. How can growing energy demands be met safely and efficiently?
14. How can scientific and technological (S&T) breakthroughs be
accelerated to improve the human condition?
15. How can ethical considerations become more routinely incorporated
into global decisions?
All questions are addressed in the full version of the 2011 State of the Future: http://www.millennium-project.org.
The 2011 SOFI in Figure 4 shows that the 10-year future for the
world is getting better. However, in many of the areas where we are
winning we are not winning fast enough, such as reductions in HIV,
malnutrition, and debt. And areas of uncertainty represent serious
problems: unemployment, fossil fuel consumption, political freedom,
and forest cover.
Some of the areas where we are losing could have quite serious
impacts, such as corruption, climate change, and terrorism.
Nevertheless, this selection of data indicates that, on balance, the world 10 years from now will be better than it is today.
Some Factors to Consider
Atmospheric CO2 is at 394.35 ppm as of May 2011, the highest in at
least 2 million years. Each decade since 1970 has been warmer than the
preceding one; 2010 tied 2005 as the warmest year on record. The world is
warming faster than the latest Intergovernmental Panel on Climate Change
(IPCC) projections. Even the most recent estimates may understate reality,
since they do not take into account permafrost melting.
According to the UN Food and Agriculture Organization's (FAO) Livestock's Long Shadow report, the meat industry contributes 18% of human-related greenhouse gases (GHGs), measured in CO2 equivalent, more than the transportation industry. A
large reinsurance company found that 90% of 950 natural disasters in
2010 were weather-related and fit climate change models; these
disasters killed 295,000 people and cost approximately $130 billion.
Humanity’s material extraction increased eight times during the
twentieth century. Today our consumption of renewable natural
resources is 50% larger than nature's capacity to regenerate. In just 39 years, humanity may add another 2.3 billion people to the world
population. There were 1 billion humans in 1804; 2 billion in 1927;
6 billion in 1999; and 7 billion today. China is trying to become the
green-growth giant of the world; it is too big to achieve reasonable
standards of living for all its people first and then clean up later. Its
next Five Year Plan (2011-15) allocated $600 billion for green
growth initiatives.
Some believe the global ecosystem is crashing due to climate
change, drying rivers and lakes, biodiversity loss, soil erosion, coastal
dead zones, and collapsing bee populations unable to fertilize the
food chain. Lester Brown in Plan B 4.0 argues that nothing less than
cutting CO2 by 80% by 2020, keeping population to no more than 8
billion by 2050, restoring natural ecosystems, and eradicating
poverty will save the ecosystem, and he proposes lowering income
taxes as carbon taxes go up.
Since half of the largest 100 economies in the world are
corporations, the former executive secretary of the United Nations
Framework Convention on Climate Change (UNFCCC) argues that
political leaders must give the business community a more central role
in the transition to the green economy.
Falling water tables worldwide and increasing depletion of
sustainably managed water have led some people to introduce the
concept of “peak water,” similar to peak oil. Fossil water - fossil
fuels: both will peak, then what? It takes 2,400 liters of water to
make a hamburger. Since 1990, an additional 1.3 billion people
gained access to improved drinking water and 500 million got better
sanitation. Yet 884 million people still lack access to clean water
today (down from 900 million in 2009), and 2.6 billion people still
lack access to safe sanitation. Half of all hospital patients in the
developing world are there for water-related diseases.
As fertility rates fall and longevity increases, the ability to meet
financial requirements for the elderly will diminish; the concept of
retirement and social structures will have to change to avoid
intergenerational conflicts. There were 12 persons working for every
person 65 or older in 1950; by 2010, there were 9; and by 2050, the
elderly support ratio is projected to drop to 4. There could be 150
million people with age-related dementia by 2050. Advances in brain
research and applications to improve brain functioning and
maintenance could lead to a healthy long life instead of an infirm
long life.
Food prices are the highest in history and are likely to continue
a long-term trend of increases if there are no major innovations in
production and changes in consumption, due to the combination of
population growth, rising affluence (especially in India and China),
the diversion of corn and other grains for biofuels, soil erosion,
aquifer depletion, loss of cropland, falling water tables and water
pollution, increasing fertilizer costs (high oil prices), market
speculation, the diversion of water from rural to urban areas,
increasing meat consumption, global food reserves at 25-year lows,
and climate change’s increasing droughts and flooding, melting
mountain glaciers that reduce water flows, and eventually saltwater
invading croplands. New approaches like saltwater agriculture,
growing pure meat without growing animals, various forms of agro-
ecology to reduce cost of inputs, and increasing vegetarianism would
help.
Nearly 30% of the population in Muslim-majority countries is between 15 and 29 years old. Many who are without work, tired of older hierarchies, feeling left behind, and wanting to join the modern world brought change across North Africa and the Middle
East this year. This demographic pattern is expected to continue for
another generation, leading to both innovation and the potential for
continued social unrest and migration.
The social media that helped the Arab Spring Awakening is
part of a historic transition from many pockets of civilizations
barely aware of each other’s existence to a world totally connected
via the current and future forms of the Internet. More data went
through the Internet in 2010 than in all the previous years combined,
and more electronic than paper books were sold by Amazon.
Humanity, the built environment, and ubiquitous computing are
becoming a continuum of consciousness and technology reflecting
the full range of human behavior, from individual philanthropy to
organized crime. New forms of civilization will emerge from this
convergence of minds, information, and technology worldwide.
The number and percentage of people in extreme poverty are falling. The
world economy grew 4.9% in 2010 while the population grew 1.2%;
hence, the world Gross Domestic Product (GDP) per capita grew
3.7%. Nearly half a billion people rose out of extreme poverty
($1.25 a day) between 2005 and 2010. Currently this figure is about 900 million, or 13% of the world's population. The World Bank forecasts this to
fall to 883 million by 2015 (down from 1.37 billion in 2005). UNDP’s
new Multidimensional Poverty Index finds 1.75 billion people in
poverty. In either case, the number of countries classified as low-
income has fallen from 66 to 40. However, the gap between rich and
poor within and among countries continues to widen. According to
Forbes, Brazil, Russia, India and China (the BRICs) produced
108 of the 214 new billionaires in 2011. There are a total of 1,210
billionaires in the world now, of which 115 are citizens of China and
101 are Russian. The factors that increase the price of food, water,
and energy are increasing; this has to be countered to address world
poverty.
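The per capita figure quoted above follows from a standard approximation, shown here for clarity rather than taken from the report: dividing the growth factor of the economy by that of the population gives

\[ \frac{1 + 0.049}{1 + 0.012} - 1 \approx 0.037, \]

that is, roughly 3.7% growth in GDP per capita.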
The world financial crisis and European sovereign debt
emergencies continue to shift power to Asia, yet its leadership has
not yet begun to help create that multifaceted general view of the
future that humanity can work toward together. China became the
second largest economy, passing Japan in 2010, and has more
Internet users than the entire population of the United States. By
2030 India is expected to pass China as the most populous country in
the world. Together these two account for nearly 40% of humanity
and are increasingly becoming the driving force for world economic
growth.
World health is improving, the incidence of diseases is falling,
and people are living longer, yet many old challenges remain and
future threats are serious. During 2011 there were six potential
epidemics. The most dangerous may be the NDM-1 enzyme that can
make a variety of bacteria resistant to most drugs. New HIV
infections declined 19% over the past decade; the median cost of
antiretroviral medicine per person in low-income countries has
dropped to $137 per year; and 45% of the estimated 9.7 million
people in need of antiretroviral therapy received it by the end of
2010. Yet two new HIV infections occur for every person starting
treatment. Over 30% fewer children under five died in 2010 than in
1990, and total mortality from infectious disease fell from 25% in
1998 to less than 16% in 2010. People are living longer; health care
costs are increasing, and the shortage of health workers is growing,
making tele-medicine and self-diagnosis via biochip sensors and
online expert systems increasingly necessary.
Advances in synthetic biology, mail-order DNA, and future
desktop molecular and pharmaceutical manufacturing could one day
give single individuals the ability to make and deploy biological
weapons of mass destruction. To counter this, advances in sensors to
detect molecular changes in public spaces will be needed, along with
advances in human development and social engagement to reduce
the number of people who might be inclined to use these
technologies for mass murder.
Another troubling area is the emerging problem of information
and cyber warfare.
Governments and military contractors are engaged in an
intellectual arms race to defend themselves from cyberattacks from
other governments and their surrogates. Because society’s vital
systems now depend on the Internet, cyberweapons to bring it down
can be thought of as weapons of mass destruction. Information
warfare’s manipulation of media can lead to the increasing mistrust
of all information.
Meanwhile, old style wars have decreased over the past two
decades, cross-cultural dialogues are flourishing, and intra-state
conflicts are increasingly being settled by international interventions.
Today, there are 10 conflicts with at least 1,000 deaths per year
(down from 14 last year): Afghanistan, Iraq, Somalia, Yemen, NW
Pakistan, Naxalites in India, Mexican cartels, Sudan, Libya, and one
classified as international extremism. The U.S. and Russia continue
to reduce nuclear weapons while China, India, and Pakistan are
increasing them. According to the Federation of American
Scientists, by February 2011 there were 22,000 nuclear warheads, of
which 2,000 are ready for use by the U.S. and Russia. The number
and area of nuclear-free zones is increasing, but the number of
unstable states grew from 28 to 37 between 2006 and 2011. Much of
Central America could be called a failed or failing state in that
organized crime controls people’s lives more than governments do.
Africa’s population could double by 2050, with a growing number of
unemployed youth and over 13 million AIDS orphans, increasing the
likelihood of social instabilities and future conflicts.
With the potential collapse of Yemen, oil piracy along the
Somali coast could increase. Ninety percent of international trade is
carried by sea; 489 acts of piracy and armed robbery against ships
were reported to IMO in 2010, up from 406 in 2009.
Investment in alternatives to fossil fuels is rapidly
accelerating around the world to meet the projected 40-50% increase
in demand by 2035.
China has become the largest investor in “low-carbon energy,”
with a 2010 budget of $51 billion. Three Mile Island, Chernobyl,
and now Japan’s Fukushima nuclear disasters have left the future of
that industry in doubt and strengthened the anti-nuclear movement in
Japan and Europe.
Without major breakthroughs in technological and behavioral
changes, the majority of the world’s energy in 2050 will still come
from fossil fuels. Therefore, large-scale carbon capture and reuse has
to become a top priority to reduce climate change. Energy
efficiencies, conservation, electric cars, tele-work, and reduced meat
consumption are near-term ways to reduce energy GHG production.
Automakers around the world are in a race to make lower-cost plug-in hybrid and all-electric cars. Engineering companies are exploring
how to take CO2 emissions from coal power plants to make
carbonates for cement and grow algae for biofuels and fish food.
China is exploring tele-work programs to reduce long commutes, energy costs, and congestion.
Empowerment of women has been one of the strongest drivers
of social evolution over the past century, and many argue that it is
the most efficient strategy for addressing the 15 Global Challenges.
Only two countries allowed women to vote at the beginning of the
twentieth century; today there is virtually universal suffrage, the
average ratio of women legislators worldwide has reached 19.2%,
and over 20 countries have a woman head of state or government.
Patriarchal structures are increasingly challenged, and the movement
toward gender equality is irreversible.
Although the world is waking up to the enormity of the threat
of transnational organized crime, the problem continues to grow.
while a global strategy to address this global threat has not been
adopted. World illicit trade is estimated at $1.6 trillion per year (up
$500 billion from last year), with counterfeiting and intellectual
property piracy accounting for $300 billion to $1 trillion, the global
drug trade at $404 billion, trade in environmental goods at $63
billion, human trafficking and prostitution at $220 billion, smuggling
at $94 billion, weapons trade at $12 billion, and cybercrime costing
billions annually in lost revenue. These figures do not include
extortion or organized crime’s part of the $1 trillion in bribes that
the World Bank estimates are paid annually or its part of the
estimated $1.5-6.5 trillion in laundered money. Hence the total
income could be $2-3 trillion — about twice as big as all the military
budgets in the world.
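A back-of-the-envelope check, not a calculation from the source, shows how these estimates fit together. Summing the named components gives

\[ (0.3\ \text{to}\ 1.0) + 0.404 + 0.063 + 0.22 + 0.094 + 0.012 \approx 1.1\ \text{to}\ 1.8\ \text{trillion dollars}, \]

which is consistent with the $1.6 trillion estimate for illicit trade; adding organized crime's share of the roughly $1 trillion in annual bribes and of the $1.5-6.5 trillion in laundered money plausibly brings total income into the $2-3 trillion range.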
The increasing complexity of everything in much of the world
is forcing humans to rely more and more on computers. In 1997
IBM’s Deep Blue beat the world chess champion. In 2011 IBM’s
Watson beat top TV quiz show knowledge champions. What’s next?
Just as the autonomic nervous system runs most biological decision
making, so too computer systems are increasingly making the day-
to-day decisions for civilization.
The acceleration of Science and Technology (S&T)
continues to fundamentally change the prospects for civilization, and
access to its knowledge is becoming universal. Computing power and lowered costs predicted by Moore's Law continue with the
world’s first three-dimensional computer chip introduced by Intel for
mass production.
China currently holds the record for the fastest computer with Tianhe-1, which can perform 2.5 petaflops (quadrillions of floating-point operations per second); IBM's Mira,
ready next year, will be four times faster.
Is it possible that the acceleration of change will grow beyond
conventional means of ethical evaluation? Will we have time to
understand what is right and wrong as one change after the next
makes it difficult to just keep up? For example, is it ethical to clone
ourselves, or bring dinosaurs back to life, or invent new life forms
from synthetic biology? These are not remote possibilities in a
distant future; the knowledge needed to do them is being developed
now. Despite the extraordinary achievements of S&T, future risks
from their continued acceleration and globalization need to be
better forecasted and assessed. At the same time, new technologies
also make it easier for more people to do more good at a faster pace
than ever before. Single individuals initiate groups on the Internet,
organizing actions worldwide around specific ethical issues. News
media, blogs, mobile phone cameras, ethics commissions, and Non-
Governmental Organizations (NGOs) are increasingly exposing
unethical decisions and corrupt practices, creating an embryonic
global conscience. Our failure to inculcate ethics into more of the
business community contributed to the global financial crisis and
resulting recession, employment stagnation, and widening rich-poor
gap.
Future Arts, Media, and Entertainment
The explosive, accelerating growth of knowledge in a rapidly
changing and increasingly interdependent world gives us so much to
know about so many things that it seems impossible to keep up. At
the same time, we are flooded with so much trivial news that serious
attention to serious issues gets little interest, and too much time is
wasted going through useless information. How can we learn what is
important to know in order to make sure that there is a good future
for civilization? Traditionally, the world has learned through
education systems, art, media, and entertainment — and now with
advances of communication and entertainment technologies, we have
even more information and media at our fingertips on any number of
ever-growing delivery systems.
Inspired by the Florentine Camerata Society, a sixteenth-century “think tank” responsible for the creation of the art form we know today as European opera, The Millennium Project created
the Arts and Media Node. The Node invited futuristic artists, media,
and entertainment professionals and other innovators around the
world to suggest and discuss future elements or seeds of the future
of arts, media, and entertainment. After a month of online
discussions, 34 elements were chosen and put into a Real-Time
Delphi for an online international assessment. Writers, producers,
performing artists, arts/media educators, and other professionals in
entertainment, gaming, and communications were nominated by the
40 Millennium Project Nodes around the world to share their views.
One distillation of the views of the participants shows that the future
of arts, media, and entertainment will be a global, participatory, tele-
present, holographic, augmented reality conducted on future versions
of mobile smart phones that engage new audiences in the ways they
prefer to be reached and involved.
Environmental Security
Environmental security is increasingly dominating national and
international agendas, shifting defense and geopolitical paradigms
because it is increasingly understood that conflict and environmental
degradation exacerbate each other. The traditional nation-centered
security focus is expanding to a more global one due to geopolitical
shifts, the effects of climate change, environmental and energy
security, and growing global interdependencies.
The Millennium Project defines environmental security as
environmental viability for life support, with three sub-elements:
preventing or repairing military damage to the environment,
preventing or responding to environmentally caused conflicts, and
protecting the environment due to its inherent moral value.
Conclusion
This year’s State of the Future is an extraordinarily rich
distillation of information for those who care about the world and its
future. Since healthy democracies need relevant information, and
since democracy is becoming more global, the public will need
globally relevant information to sustain this trend. We hope the
annual State of the Future reports can help provide such information.
The insights in this fifteenth year of The Millennium Project’s
work can help decision makers, opinion leaders, and educators who
fight against hopeless despair, blind confidence, and ignorant
indifference — attitudes that too often have blocked efforts to
improve the prospects for humanity.
The Global Reef Expedition: Science without Borders®
Andrew W. Bruckner
Khaled bin Sultan Living Oceans Foundation, Washington, D.C.
Abstract
The Global Reef Expedition (GRE) is a five year research program to map,
characterize and assess the resilience of coral reef ecosystems and develop
tools for conservation and management. In 2011, the Khaled bin Sultan
Living Oceans Foundation embarked upon a Global Reef Expedition,
completing three research projects in the Bahamas (Cay Sal Bank, the
Inaguas and Hogsty Reef, and Andros and Abaco Islands) and one in St.
Kitts and Nevis. Research locations in 2012 include Jamaica, Navassa,
Colombia, the Galapagos and French Polynesia, followed by sites off
remote Pacific islands, the Coral Triangle, the Indian Ocean and Red Sea.
The primary goals of this five year multidisciplinary research and education
mission are to map, characterize and assess coral reefs worldwide, educate
local communities and stakeholders on the importance of these ecosystems,
and provide resource management agencies with tools and information to
aid in conservation and management. The GRE targets remote, understudied
reefs that are affected minimally by direct human impacts, with comparative
work done on coral reefs off populated coastlines. High resolution
multispectral satellite imagery (DigitalGlobe’s WorldView-2 satellite) is
used for navigation, to identify survey sites, and as a platform for habitat
characterization. On the ground efforts include (1) the identification and
characterization of each habitat type and mapping of the spatial distribution
of different habitats; (2) evaluation of the diversity, demographics and
health of the species found in these habitats; and (3) characterization of
environmental factors and ecological processes that are likely to enhance
the resilience of these ecosystems. These data and resulting tools are
incorporated into a Geographic Information System (GIS) database and
provided to local managers and other stakeholders for use in spatially-based
ecosystem management approaches.
Introduction
Shallow water coral reefs are found in tropical areas, roughly
between the Tropics of Cancer and Capricorn. They are most prolific in
environments with suitable temperatures (16-30° C) and salinity (30-35
ppt), high light penetration, oligotrophic waters with minimal
sedimentation and turbidity, and adequate water flow, occurring from just
below the water’s surface to a maximum of 50-75 m depth. Coral reefs are
estimated to cover from 284,300 km² (Spalding et al. 2001) to about 920,000 km² when including associated seagrass beds, mangroves and
other shallow marine habitats (Costanza et al. 1997), with 91% of this area
in the Indo-Pacific.
Coral reefs are the most complex ecosystem in the marine
environment. This complexity is expressed in both the variety of
interconnected benthic habitats and a vast array of associated biota, with
representatives from 32 of the 34 described animal phyla. At least one-
third of all known marine fishes spend at least some portion of their lives
in coral reef habitats (Sale 2002). The high diversity is largely due to the
heterogeneous nature of coral reef habitat, which can accommodate large
size-ranges of reef fishes and numerous functional niches in relatively
small areas.
Coral reefs are also one of the few ecosystems that are built upon
biogenic substrates created by the dominant organisms found on reefs. The
reef substrate consists of limestone originating primarily from the
skeletons of stony corals and crustose coralline algae, which has
undergone significant modifications in form and area on relatively short
time scales. Through grazing activities, bioerosion, and physical breakage
during storms, coral skeletons are progressively eroded to produce rubble
and sand. Other organisms and a host of chemical, biological, and physical
processes cement this material together to form a durable reef substrate,
resulting in intricate structures that have enormous surface heterogeneity
at a wide variety of spatial scales (Choat and Bellwood 1991).
While the diversity of reef fishes is influenced by the complexity of
the reef habitat created by the corals, fishes are also an important, dynamic
component of this unique ecosystem. Through interactions at virtually all
trophic levels, coral reef fishes modify the reef community structure and
help maintain the health of the associated habitat forming corals, and they
are major conduits for the movements of energy and nutrients into, within,
and out of the reef ecosystem (Hobson 1991; Bellwood and Wainwright
2002). The ecological importance of coral reef fishes also extends beyond
the boundaries of the coral habitat. Many reef fishes that are pelagic
piscivores and planktivores often feed, and become prey, far away from
the coral reef, and the pelagic egg, larval, and juvenile stages form a vast
prey resource for predators in oceanic waters.
Coral reefs are a rare but critically important resource. Although they
occupy less than 1.2% of the world’s continental shelf area and only
0.09% of the total area of the world’s ocean, at least 109 countries,
territories, and states are directly dependent on the resources and services
they provide (Birkeland 1997; Spalding et al. 2001). Many economies are
dependent on their products, including sources of protein, biomedical
compounds and traditional medicines, and raw materials for construction,
as well as their value in terms of employment, recreation, coastal tourism,
and coastal protection from storm damage and erosion. Coral reefs are a
significant part of many countries' natural heritage and are also of great
value to the world overall, as they are hotspots of marine biodiversity.
Costanza et al. (1997) estimated reef ecosystems globally provide US$375
billion each year from living resources and ecosystem services.
Nevertheless, the value of reefs is dependent on their continued
functioning as ecosystems.
The Global Coral Reef Crisis
Over the past decade, reefs worldwide have witnessed a rapid decline
in their health with concurrent losses of corals, declines in fish stocks, and
phase shifts from high productivity coral-dominated ecosystems to low
productivity algal reefs (Hughes 1994). Recent estimates suggest that 20%
of the world’s reefs are degraded beyond the potential for recovery, 24%
are under imminent risk of collapse and another 26% are under a longer-
term threat of collapse (Wilkinson 2008). Furthermore, many of the reefs
around the world no longer resemble reefs of 30 years ago.
This global crisis has been attributed to unprecedented and increasing rates of overfishing, the use of destructive fishing gear and techniques, land-based pollution, coastal development, and other human impacts. These
localized human impacts are exacerbated by recent large-scale
disturbances associated with climate change, including episodes of mass
bleaching, disease outbreaks, plagues of coral-eating predators, and more
severe hurricanes (Harvell et al. 2007; Baker et al. 2008; Rotjan and
Lewis 2008). The synergistic effects of these man-made and natural
factors are causing dramatic, long-lasting changes in community
composition and structure, and concurrent losses of ecosystem services
and products from reefs (Hughes 1989, 1994; Knowlton 1992; Aronson et
al. 2004; McClanahan et al. 2007).
The Western Atlantic has been affected most severely by global and
local-scale threats and these ecosystems have exhibited the most dramatic
declines in coral reef health. Until 1980, Caribbean coral reefs were
dominated by three species of corals - large stands of elkhorn coral (Acropora palmata) extended for 100s of meters, forming dense thickets in the shallow reef crest. In shallow, protected back reef areas and on the tops of reef spurs at depths of 5-15 meters, the intertwined branches of the fragile staghorn coral (Acropora cervicornis) created complex thickets,
often interspersed with lobate colonies of star coral (Montastraea
annularis), and providing juvenile habitat and refuge for resting schools of
grunts, snappers, and other reef fish. On the fore reef, star coral
(Montastraea annularis complex) grew into mountainous pinnacles, often 5-10 meters in height or taller, with extensive caves, crevices, and channels between the corals. Deeper, sheets of lettuce coral (Agaricia lamarcki), star
coral, brain corals and other corals with a plating morphology formed
overlapping shingles that extended to the base of the reef, at depths of 50-80 m. Many Caribbean reefs had 50-70% living coral cover, with colorful
sponges, gorgonians, and other invertebrates filling the spaces between
corals.
By the mid-1980s, Caribbean reefs began to change. The long-spined
black urchin (Diadema antillarum) suffered a Caribbean-wide mass
mortality event in 1982-1983. White band disease ravaged stands of
elkhorn and staghorn coral, with disease outbreaks spreading throughout
the region in the 1980s and 1990s. Since 1995, new diseases have emerged
and these are primarily targeting the long-lived, massive corals like star
coral. Regional-scale coral bleaching first occurred during the 1982-1983 El Niño event, and it has progressively increased in severity, with destructive regional- and global-scale bleaching events occurring in 1997-1998, 2005, and 2009-2010. Concurrently, overfishing
and destructive fishing practices have led to depletion of groupers and
other top predators, and a progressive pattern of fishing down the food
chain is underway. Terrestrial runoff and land-based pollution are
deteriorating water quality, resulting in blooms of fleshy macroalgae that
outcompete corals. Poorly managed coastal development projects that
include dredging, burial of reefs, and the removal of mangroves and
seagrass beds further degrade coral reef habitats and eliminate important
nursery areas. These, and a host of other human impacts, are exacerbating
the newest and most significant threat to coral reefs - climate change.
Persistence of reefs as coral-dominated systems and continued
functioning after large, stochastic perturbations depends on four primary
factors: the extent of damage, synergistic impacts of anthropogenic and
natural stressors, the health and resilience of reef building corals, and the
communities’ capacity for recovery (Smith et al. 2008). Ecological
recovery and rapid restoration of normal reef processes has been
documented in systems with intact functional groups and a high degree of
spatial heterogeneity and connectivity (Nystrom et al. 2008), while
degraded, overfished reefs fail to recover even after several decades of
protection. During the Global Reef Expedition, research will emphasize
these four factors. The Foundation is targeting those nations that lack
capacity and infrastructure to collect the scientific data they urgently need
to fill knowledge gaps and contribute to new, ecosystem-based
management approaches.
Operations
The Living Oceans Foundation (LOF) is a private operating, public benefit foundation based in Washington, DC. LOF scientists and partners
have conducted coral reef research over the last ten years with projects in
the Pacific and Indian Oceans, the Red Sea, Mediterranean and Caribbean.
The primary research platform is the Golden Shadow, a 67 m motor yacht
that is graciously provided by HRH Prince Khaled bin Sultan of Saudi
Arabia (Figure 1). The vessel carries multiple surface support vehicles,
including a 38-foot dive boat, various tenders and the Golden Eye, a
Cessna Caravan float plane. The Golden Shadow has a stern elevator
platform used to launch and recover the Golden Eye, as well as its various
tenders, and can handle loads up to 12 tons. The platform is invaluable for
diving access and recovery in difficult sea conditions. The Golden Shadow
also contains a fully functional dive locker with a recompression chamber,
a small laboratory outfitted with aquaria, an ocean chemistry system and
other laboratory equipment, a Seakeepers weather and seawater
monitoring system, and a satellite communication system.
Figure 1. Golden Shadow with one of our dive platforms, the Golden Osprey.
The Foundation relies on a core group of scientists from academic
institutions worldwide for field activities, supplementing this with
graduate students and postdocs through our fellowship program, and partners with local and regional scientists, managers, and stakeholders to implement the project. The GRE includes research, training for divers
(Figure 2) and future scientists and managers, and education for
communities, students, other local stakeholders, and the public about the
importance of coral reefs and how they can contribute to their
preservation. The GRE began in April 2011, completing three missions in the Bahamas and one in St. Kitts and Nevis. Over the next five years, LOF
will circumnavigate the globe to examine additional reefs in the
Caribbean, followed by locations in the Pacific Ocean, Indian Ocean, and
the Red Sea.
Figure 2. Diver measuring corals along a transect tape. The diver is holding a one meter
bar and a slate.
Research Objectives
The long-term goal of the GRE is to increase scientific understanding
of fundamental processes that shape coral reef ecosystems in the context
of linkages and interactions at a “landscape” scale. High resolution
mapping and habitat characterization underpin all Expedition studies and
take advantage of the improvements made in recent years with
multispectral satellite imagery. Using imagery collected by the new
WorldView-2 satellite (DigitalGlobe Inc.) for key areas that are currently
unmapped and identified as vital for conservation by resource
management agencies in each country, these ecosystems are surveyed and
ground-truthed using visual, photographic and acoustic sampling
techniques. These data are incorporated into a Geographic Information
System (GIS) database and are used to create high resolution habitat maps
that depict the location, size, and distribution of various coral reef habitats,
seagrass beds, mangroves, algal communities, sand flats, and other
shallow marine habitats. Together, these tools form the basis for marine
spatial management.
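One common way to turn multispectral pixels and ground-truthed field observations into a habitat map is supervised classification; the Python sketch below illustrates the idea. It is a generic illustration, not the Foundation's actual processing chain, and the band counts, class labels, and random placeholder arrays are hypothetical.

# Minimal sketch of supervised habitat classification from multispectral pixels.
# Band values, labels, and classes are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

train_pixels = np.random.rand(200, 8)         # field-matched pixel reflectances (8 bands)
train_labels = np.random.randint(0, 4, 200)   # 0=coral, 1=seagrass, 2=sand, 3=algae

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(train_pixels, train_labels)

# Classify every pixel of a (placeholder) scene and reshape into a habitat raster.
scene = np.random.rand(50, 50, 8)
habitat_map = clf.predict(scene.reshape(-1, 8)).reshape(50, 50)
# habitat_map could then be exported as a GIS layer for marine spatial planning.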
The LOF has also developed a standardized protocol to assess the
health and resilience of reefs that allows comparison and ranking of reef
condition within individual locations, across geographic gradients, and
between ocean basins. While typical monitoring programs provide data on
cover of various organisms and the diversity and abundance of fishes,
these often have limited taxonomic specificity (e.g., growth forms of
corals are recorded instead of genera- or species-based data). Monitoring
programs only rarely include an examination of the population dynamics
of reef-building corals (e.g., size structure and amount of partial mortality)
and fail to incorporate other parameters that allow an examination of
relationships among coral, algae, and other functional groups.
The protocol LOF is applying during the GRE includes four major
components: (1) an assessment of the primary biotic compartments that
make up the reef community; (2) ecological interactions that drive
dynamics within and among these groups; (3) habitat and environmental
influences that directly affect the reefs; and (4) external drivers of change,
including anthropogenic and climate factors. Representative sites are
selected using high resolution multispectral satellite images and habitat
maps, with sampling undertaken across gradients of human pressure and
environmental regimes. Coral assessments focus on the diversity, benthic
cover, size structure, extent of recent and old partial mortality, levels of
recruitment, and condition of reef-building corals (Table 1a). Fish
assessments include quantitative surveys of the abundance and size
structure of more than 100 species of reef fishes, emphasizing species of
major importance in the ecological functions of the reef ecosystem and
fisheries targets, including key herbivores, piscivores, scavengers, coral
feeders, sessile invertebrate feeders, planktivores, and detritivores (Table
1b). In addition to measures of substrate condition and cover and biomass
of algae, more than 30 physical, environmental, and anthropogenic
resilience indicators are also assessed (Table 1c).
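One way to picture how such colony-level observations might be organized and summarized is sketched below in Python; the record fields and example values are hypothetical and do not represent the LOF protocol's actual data model.

# Hypothetical colony record and site summary; field names are illustrative only.
from dataclasses import dataclass
from statistics import mean

@dataclass
class CoralColony:
    species: str
    diameter_cm: float         # colony size along the transect
    old_mortality_pct: float   # long-dead portion of the colony surface
    new_mortality_pct: float   # recently dead portion (bleaching, disease, predation)

def site_summary(colonies):
    # Summarize size structure and partial mortality for one survey site.
    return {
        "n_colonies": len(colonies),
        "mean_diameter_cm": mean(c.diameter_cm for c in colonies),
        "mean_old_mortality_pct": mean(c.old_mortality_pct for c in colonies),
        "mean_new_mortality_pct": mean(c.new_mortality_pct for c in colonies),
    }

print(site_summary([CoralColony("Montastraea annularis", 85.0, 20.0, 5.0),
                    CoralColony("Acropora palmata", 120.0, 10.0, 0.0)]))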
In addition to the coral reef assessments, key ecological processes that
shape these ecosystems - such as recruitment, herbivory, and bioerosion -
are compared and contrasted across biophysical gradients and between
geographic localities. An evaluation of specific parameters - such as
diversity, productivity, habitat structure, anthropogenic stressors (fishing
pressure), and various oceanographic parameters - is undertaken to
determine their influence on ecological states of the reefs. For example, a
number of different herbivore functional groups are recognized that
mediate coral-algal dynamics, but each functional group contributes in a
different manner, and their diversity, biomass, and vulnerability to stresses (e.g., fishing) strongly affect how robustly each functional group
contributes to reef resilience. By assessing species assemblages of major
herbivores, characterizing feeding patterns of different species and
relationships between algal diversity and palatability by herbivorous
fishes, and quantifying algal diversity and biomass, relative patterns of
algal production, extent of herbivory, and risks of shifts towards
macroalgal domination are assessed for each site. These data provide an
indication of the consequences of fisheries that target herbivores and
possible management measures to minimize impacts on reef health.
As LOF explores reefs around the world, representative sites in each
country are established as Legacy Sites. These are similar to forest plots
established by forestry biologists: large patches of reef that are
permanently marked, digitally photographed to create high resolution
mosaics illustrating each coral, sponge, and other benthic organism, and
characterized in detail. By revisiting these sites every 2-5 years, it is
possible to evaluate changes, such as the survival and growth of coral
recruits and patterns of recovery from acute disturbances. These sites can
also form a framework for future monitoring and will improve
understanding of the fates of these systems under new management
schemes and future climate change impacts.
Application of Findings to Management
Through research conducted during the GRE, LOF will acquire the
knowledge and develop tools needed to assist resource managers in (1)
implementing marine spatial planning with emphasis on development of
marine protected areas, (2) promoting sustainable use of coral reef
resources, and (3) preventing catastrophic declines of coral reefs and
losses of ecosystem services that may be induced by large-scale global
disturbances. The combined results of this work will contribute to a
regional understanding of the current status of the coral reefs, and are
intended to be used to predict future directions of reef health and
categorize areas according to their level of threat and resilience. The
knowledge gained through the GRE will help determine the role of
individual species and community interactions in supporting fully
functional reef ecosystems, and the environmental, biological, and
physiological conditions that allow particular species to better survive
under substandard conditions. It will also allow modeling and forecasting
the key drivers of ecosystem health and components that can promote
ecosystem recovery following periodic acute disturbances. By
incorporating the information into landscape-scale management
approaches, the likelihood that reefs can persist under future scenarios of
climate change can be greatly enhanced.
Today is like no other time in the past. Scientists know much more
than ever before about coral reefs. The keystone species found here are
known and the GRE will further clarify their importance in structuring
these ecosystems. The major stressors have been identified, and their
contribution to reef degradation will be further clarified through global
assessments of reef health across geographical, biophysical, and
anthropogenic gradients. Together with the resulting habitat maps, these
data can help identify decisive actions that can be taken to mitigate these
impacts. Through the GRE, LOF will fill knowledge gaps, share this
information with the world in a timely manner, and provide technology
and tools to resource managers that can aid in implementing the
appropriate conservation and sustainable management approaches.
References
Aronson, R. B., I. G. MacIntyre, C. M. Wapnick, and M. W. O’Neill. 2004. Phase shifts,
alternative states, and the unprecedented convergence of two reef systems. Ecology
85: 1876-1891.
Baker, A. C., P. W. Glynn, and B. Riegl. 2008. Climate change and coral reef bleaching:
An ecological assessment of long-term impacts, recovery trends and future outlook.
Estuarine, Coastal, and Shelf Science 80: 435-471.
Bellwood, D. R., and P. C. Wainwright. 2002. The history and biogeography of fishes on
coral reefs. Pages 5-32 in P. F. Sale, editor. Coral reef fishes, dynamics and diversity
in a complex ecosystem. Academic Press, San Diego, California.
Birkeland, C. 1997. Chapter 1: introduction. Pages 1-10 in C. E. Birkeland, editor. Life
and death of coral reefs. Chapman and Hall, New York.
Choat, J. H., and D. R. Bellwood. 1991. Reef fishes: their history and evolution. Pages
39-66 in P. F. Sale, editor. Coral reef fishes, dynamics and diversity in a complex
ecosystem. Academic Press, San Diego, California.
Costanza, R., R. d’Arge, R. de Groot, S. Farber, M. Grasso, B. Hannon, K. Limburg, S.
Naeem, R. V. O’Neill, J. Paruelo, R. G. Raskin, P. Sutton, M. van den Belt. 1997.
The value of the world’s ecosystem services and natural capital. Nature 387: 253-
260.
Harvell, D., E. Jordan-Dahlgren, S. Merkel, E. Rosenberg, L. J. Raymundo, G. Smith, E.
Weil, and B. L. Willis. 2007. Coral disease, environmental drivers, and the balance
between coral and microbial associates. Oceanography 20: 172-195.
Hobson, E. S. 1991. Trophic relationships of fishes specialized to feed on zooplankters
above coral reefs. Pages 69-95 in P. F. Sale, editor. The ecology of fishes on coral
reefs. Academic Press, San Diego, California.
Hughes, T. P. 1989. Community structure and diversity of coral reefs - the role of history.
Ecology 70: 275-279.
Hughes, T. P. 1994. Catastrophes, phase shifts, and large-scale degradation of a
Caribbean coral reef. Science 265: 1547-1551.
Knowlton, N. 1992. Thresholds and multiple stable states in coral reef community
dynamics. American Zoologist 32: 674-682.
McClanahan, T.R., M. Ateweberhan, N. A. J. Graham, S. K. Wilson, C. R. Sebastian, M.
M. M. Guillaume, and J. H. Bruggemann. 2007. Western Indian Ocean coral
communities: bleaching responses and susceptibility to extinction. Marine Ecology
Progress Series 337: 1-13.
Nystrom, M., N. A. J. Graham, J. Lokrantz, and A. V. Norstrom. 2008. Capturing the
cornerstones of coral reef resilience: linking theory to practice. Coral Reefs 27: 795-
809.
Rotjan, R. D. and S. M. Lewis. 2008. The impact of coral predators on tropical reefs.
Marine Ecology Progress Series 367: 73-91.
Sale, P. F. 2002. Management of coral reef fishes. Pages 359-360 in P. F. Sale, editor.
Coral reef fishes, dynamics and diversity in a complex ecosystem. Academic Press,
San Diego, California.
Smith, L., J. Gilmour, and A. Heyward. 2008. Resilience of coral communities on an
isolated system of reefs following catastrophic mass-bleaching. Coral Reefs 27: 197-
205.
Spalding, M. D., C. Ravilious, and E. P. Green. 2001. World atlas of coral reefs.
University of California Press, Berkeley, California.
Wilkinson, C. 2008. Status of Coral Reefs of the World: 2008. Townsville, Australia:
Global Coral Reef Monitoring Network and Reef and Rainforest Research Center.
Table 1a. Resilience assessments for stony corals.
Table 1b. Other bioindicators of resilience.
Table 1c. Physical and environmental resilience indicators.
Exoplanets
Sethanne Howard
USNO, Retired
Abstract
Are there planets around other stars? This question finally has been answered
in our lifetime. The answer is a very definite yes! However, most of the
planets detected to date are large, Jupiter-mass or greater. The question we
would really like to answer is: Are there Earth-like planets in a habitable
zone around other stars? In other words, are we alone? That question is much
more difficult to answer. But, we are making progress towards that goal. An
extra-solar planet, or exoplanet, is a planet outside our Solar System. There
are well over 500 candidate extra-solar planets identified as of May 6, 2011.
Introduction
The SEARCH for other planets and the search for other life are of
long standing interest to humanity. “’Tis not probable we are the only
fools in the universe,” is a quote from Entretiens sur la pluralite des
mondes by de Fontenelle in 1686. People often speculated on other life
and planets.
The centuries-old quest for other worlds like our Earth has been
rejuvenated by the intense excitement and popular interest surrounding the
discovery of hundreds of planets orbiting other stars.
There is now clear evidence for substantial numbers of three types of
exoplanets: gas giants, hot super-Earths in short-period orbits, and ice
giants. The following website tracks the day-by-day increase in new
discoveries and provides information on the characteristics of the planets
as well as those of the stars they orbit: The Extra-solar Planets
Encyclopedia at http://exoplanets.eu.
There are a few general questions we can address right away. Perhaps
the first one is: what is a planet? Among other duties the International
Astronomical Union (IAU) is the governing body for naming and defining
astronomical objects as established by international treaty. The official
definition of “planet” used by the IAU only covers the Solar System and
thus takes no stance on exoplanets. As of April 2011, the only definitional
statement issued by the IAU that pertains to exoplanets is a working
definition issued in 2001 and modified in 2003. This definition contains
the following criteria:
• Objects with true masses below the limiting mass for
thermonuclear fusion of deuterium (currently calculated to be 13
Jupiter masses for objects of solar metallicity) that orbit stars or
stellar remnants are “planets” (no matter how they formed). The
minimum mass/size required for an extra-solar object to be
considered a planet should be the same as that used in our Solar
System.
• Substellar objects with true masses above the limiting mass for
thermonuclear fusion of deuterium are “brown dwarfs,” no matter
how they formed or where they are located.
• Free-floating objects in young star clusters with masses below the
limiting mass for thermonuclear fusion of deuterium are not
“planets,” but are “sub-brown dwarfs” (or whatever name is most
appropriate).
The prose is a bit lawyer-like; however, basically a planet is a body that
cannot fuse deuterium and has less mass than about 13 Jupiters.
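Restated as a small Python sketch (the function and its arguments are made up for illustration; the 13 Jupiter-mass threshold is the approximate deuterium-burning limit quoted in the working definition):

DEUTERIUM_LIMIT_MJUP = 13.0   # approximate deuterium-fusion limit, in Jupiter masses

def classify(mass_mjup, orbits_star_or_remnant, in_young_cluster=False):
    # Rough labels following the IAU working definition summarized above.
    if mass_mjup >= DEUTERIUM_LIMIT_MJUP:
        return "brown dwarf"
    if orbits_star_or_remnant:
        return "planet"
    if in_young_cluster:
        return "sub-brown dwarf"
    return "free-floating substellar object"

print(classify(1.0, True))    # a Jupiter-mass object orbiting a star -> "planet"
print(classify(20.0, True))   # above the deuterium limit -> "brown dwarf"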
Now that we have a working definition of a planet, another question is:
do planets exist? We know of one example for sure - our own Solar
System. So it certainly is possible. We also know, in general, how planets
form:
• Stars form in collapsing molecular clouds. Disks of material appear
to be a natural result of the cloud collapse. We can see these disks
around young stars forming in our Galaxy.
• If these disks contain enough material (besides hydrogen and
helium gas) then planets can form as the disk cools and condenses.
Finally then, what does our Solar System tell us about planetary
systems in general?
• Planets orbit in the same plane as expected if they formed in a
rotating disk.
• Planets (except Mercury) have almost circular orbits.
• Small rocky planets form near the Sun (or central star); gas giants
far from the Sun. We explain this by temperature: near the Sun it
was too warm for water to condense; far from the Sun water ice
was stable, adding to the material available to form planets - and
gas molecules were moving more slowly allowing the growing
planets to trap large amounts of hydrogen and helium gas from the
disk.
Since these characteristics are consistent with planet formation from a
circum-stellar disk, we would expect other planetary systems to be similar
to our own.
Early History of Planet Searches
Astrometry is the oldest search method for extra-solar planets.
Astrometry is the precise measurement of stellar positions and motions
and is the most fundamental aspect of astronomical work. The search
method dates back at least to statements made by William Herschel in the
late 18th century. He claimed that an unseen companion was affecting the
position of the star he cataloged as 70 Ophiuchi (star number 70 in the
constellation of Ophiuchus).
The first known formal astrometric calculation for an extrasolar planet
was made by Captain W. S. Jacob in 1855 at the East India Company’s
Madras Observatory. He reported that orbital anomalies made it “highly
probable" that there was a "planetary body" in the 70 Ophiuchi system.[i] In
the 1890s Thomas J. J. See of the University of Chicago and the United
States Naval Observatory stated that the orbital anomalies proved the
existence of a dark body in the 70 Ophiuchi system with a 36-year period
around one of the stars.[ii] However, Forest Ray Moulton (probably around
1915) proved that a three-body system with those orbital parameters
would be highly unstable. During the 1950s and 1960s, Peter van de Kamp
of Swarthmore College made another series of detection claims, this time
for planets orbiting Barnard's Star.[iii]
Astronomers now generally regard all the early reports of detection as
erroneous. For two centuries, claims had circulated of the discovery of
unseen companions in orbit around nearby star systems, all of them
reportedly found using this method, culminating in the 1996
announcement of multiple planets orbiting the nearby star Lalande 21185
by astronomer George Gatewood. None of these claims survived scrutiny
by other astronomers, and the astrometric technique fell into disrepute. All
claims of a planetary companion of less than 0.1 solar mass made before
1996 using this method are likely spurious.
In 2002, astronomers did succeed in using astrometry to characterize a
previously discovered planet around the star Gliese 876 (star number 876
in the Gliese catalog of stars).
One potential advantage of the astrometric method is that it is most
sensitive to planets with large orbits. This makes it complementary to
other methods that are most sensitive to planets with small orbits.
However, very long observation times are required — years, and possibly
decades, as planets far enough from their star to allow detection via
astrometry also take a long time to complete an orbit.
The first published, confirmed discovery of an exoplanet was made in
1988 by the Canadian astronomers Bruce Campbell, G. A. H. Walker, and
S. Yang.[iv] Although they were cautious about claiming a planetary
detection, their radial-velocity observations (see below for a description of
this technique) suggested that a planet orbited the star Gamma Cephei.
Partly because the observations were at the very limits of instrumental
capabilities at the time, widespread skepticism persisted in the
astronomical community for several years about this and other similar
observations. Another source of confusion was that some of the possible
planets might instead have been brown dwarfs, objects that are
intermediate in mass between planets and stars. The following year,
however, additional observations were published that supported the reality
of the planet orbiting Gamma Cephei, though subsequent work in 1992
raised serious doubts. Finally, in 2002, improved techniques allowed the
planet’s existence to be confirmed.
Another early detection was in 1992, with the discovery of two
confirmed terrestrial-mass planets orbiting the pulsar PSR B1257+12. The
first confirmed detection of an exoplanet orbiting a main-sequence star
was made in 1995, when a giant planet, 51 Pegasi b, was found in a four-
day orbit around the nearby G-type star 51 Pegasi. The frequency of planet
detections has increased since then.
Why Can’t We See Planets The Way We Can See Mars?
Any planet is an extremely faint light source compared to its parent
star. In addition to the intrinsic difficulty of detecting such a faint light
source, the light from the parent star causes a glare that washes it out. As
viewed from a distant star, the Sun is 600,000,000 times as bright as
Jupiter, 2.5 billion times as bright as Saturn, and 25 billion times as bright
as Earth.
Planets are also distant. Because of the great distances to the stars, the
small angular separation between a star and its planet is difficult to
resolve.
For these reasons, a planet would be lost in the glare of its host star.
Only a very few exoplanets have been observed directly.
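To make the resolution problem concrete, here is a small illustrative calculation (mine, not the article's), using the standard small-angle rule that the separation in arcseconds is roughly the orbital radius in AU divided by the distance in parsecs; the function name and example numbers below are my own.

import math  # not strictly needed here, kept for consistency with later sketches

def angular_separation_arcsec(a_au, d_pc):
    """Approximate star-planet separation on the sky: theta["] ~ a[AU] / d[pc]."""
    return a_au / d_pc

# A Jupiter-like planet (5.2 AU) around a star 10 parsecs (~33 light-years) away
print(f"{angular_separation_arcsec(5.2, 10.0):.2f} arcsec")   # ~0.5 arcsec

Half an arcsecond is resolvable in principle; the real difficulty is picking out a dot hundreds of millions of times fainter than the star at that tiny separation.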
Instead, astronomers generally had to resort to indirect methods to
detect extra-solar planets. At the present time, several different indirect
methods have yielded success.
Indirect Methods of Detection
There are four main indirect methods; the first two are the primary
ones: (1) Detecting Planets by Radial Velocity Searches; (2) Detecting
Planets via Transits; (3) Detecting Planets through Gravitational
Microlensing; and (4) Detecting Planets through Timing.
Detecting Planets by Radial Velocity Searches
From Newton’s laws we know that the gravitational pull is mutual; a
planet pulls on its star just as the star pulls on the planet. Thus, the two
objects will orbit around their common center of mass; while the planet
executes one orbit around the star, the much more massive star executes a
small wobble as well. For example, Jupiter’s gravitational pull causes the
Sun to wobble at a speed of about 12 meters/sec. We can detect the motion
of the star pulled by an unseen planet via the Doppler shift of the star’s
spectral lines, i.e., the tiny variations in the radial velocity of the star with
respect to Earth.
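As a rough check on that 12 meters/sec figure, here is a small back-of-the-envelope sketch (mine, not the article's); it assumes a circular two-body orbit and uses momentum balance about the center of mass, so the constants and variable names below are my own.

import math

# Illustrative sketch (assumes a circular orbit and a two-body system):
#   m_star * v_star = m_planet * v_planet,  with  v_planet = sqrt(G * m_star / a)

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # kg
M_JUPITER = 1.898e27     # kg
A_JUPITER = 7.785e11     # Jupiter's orbital radius, m (about 5.2 AU)

v_planet = math.sqrt(G * M_SUN / A_JUPITER)   # Jupiter's orbital speed, ~13 km/s
v_star = (M_JUPITER / M_SUN) * v_planet       # Sun's "wobble" speed

print(f"Sun's reflex speed: {v_star:.1f} m/s")   # ~12.5 m/s, consistent with the text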
Radial velocity is the velocity of an object in the direction of the line
of sight (i.e., its speed straight towards or away from an observer). In
astronomy, radial velocity most commonly refers to the spectroscopic
radial velocity. The spectroscopic radial velocity is the radial component
of the velocity along the line-of-sight between the emitter and the
observer, so the frequency of the light decreases for sources that are
receding (redshift) and increases for sources that are approaching
(blueshift) - the Doppler shift. Astrometric radial velocity is the radial
velocity as determined by astrometric observations (for example, a secular
change in the annual parallax).
The Doppler shift of a spectral line depends on the ratio of the object’s
velocity to the speed of light. One determines the velocity, v, by
measuring the wavelength shift (Δλ) from the central wavelength λ:

Δλ / λ = v / c
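As a quick numerical illustration (mine, not the article's), the relation can be applied directly; the example line wavelength and shift below are hypothetical.

C = 2.998e8  # speed of light, m/s

def radial_velocity(delta_lambda_m, lambda_m):
    """Non-relativistic Doppler relation: v = c * (delta_lambda / lambda)."""
    return C * delta_lambda_m / lambda_m

# Hypothetical example: a spectral line at 550 nm shifted by 2.3e-5 nm
v = radial_velocity(2.3e-14, 550e-9)
print(f"radial velocity ~ {v:.1f} m/s")   # ~12.5 m/s, about the Sun's Jupiter-induced wobble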
Radial velocity can be used to estimate the masses of binary
systems and some orbital elements, such as eccentricity and semi-major
axis. The same method is also used to detect planets around stars: the
period of the radial velocity variation gives the planet's orbital
period, while the size of the velocity variation allows the calculation
of a lower bound on the planet's mass. Radial velocity methods alone
reveal only a lower bound, since a large planet orbiting at a very high
angle to the line of sight will perturb its star radially only as much as a smaller
planet with an orbital plane along the line of sight. It has also been suggested that
planets with high eccentricities calculated by this method may in fact be
mimicking two-planet systems with circular or near-circular resonant orbits.
The graph in Figure 1 illustrates the sine curve created using Doppler
spectroscopy to observe the radial velocity of an imaginary star which is
being orbited by a planet in a circular orbit. Each dot is a measurement of
the position of the star’s spectral line (as shifted from its central position).
Observations of a real star will produce a similar graph, although
eccentricity in the orbit will distort the curve. If Vstar is the velocity of the
parent star, the observed Doppler velocity is K = Vstar sin(i), where i is the
inclination of the planet's orbit to the plane perpendicular to the line-of-
sight.
Figure 1. The Doppler shift of an imaginary star
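A minimal sketch of such a model curve (my illustration, with hypothetical numbers, not taken from the article) might look like the following; it assumes a circular orbit, so the observed signal is a pure sine wave with semi-amplitude K = Vstar sin(i).

import math

def observed_rv(t_days, period_days, v_star, inclination_deg, phase=0.0):
    """Observed radial velocity (m/s) for a circular orbit: only the
    line-of-sight component, with semi-amplitude K = v_star * sin(i)."""
    K = v_star * math.sin(math.radians(inclination_deg))
    return K * math.sin(2.0 * math.pi * t_days / period_days + phase)

# Hypothetical hot-Jupiter-like signal: 4-day period, 50 m/s stellar speed, 60-degree inclination
curve = [observed_rv(t, period_days=4.0, v_star=50.0, inclination_deg=60.0) for t in range(8)]
print([f"{v:+.1f}" for v in curve])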
To observe such a small Doppler shift, a very accurate, specialized
spectrograph is required, as well as sophisticated computer analysis. An
Olympic sprinter runs the 100-meter dash in 10 seconds, or 10 m/sec. You
or I could run at 3 m/sec (briefly!). And that's the sensitivity of these
specialized spectrometers.
A team led by astronomers Geoff Marcy and Paul Butler developed
such an instrument. In 1995, they put their instrument on the Keck 10-
meter telescope and began systematic observations of several hundred
stars, expecting to accumulate years of data to detect planets on orbits like
Jupiter. Well, within a year, the first planet detection was announced - a
planet with a 4-day period. Soon, a number of Jupiter-mass planets on
very short orbits were discovered.
The velocity of the star around the center of mass is much smaller than
that of the planet, because the radius of its orbit around the center of mass
is so small. Velocity variations down to 1 m/s can be detected with
modern spectrometers, such as the HARPS (High Accuracy Radial
Velocity Planet Searcher) spectrometer at the 3.6 meter telescope in La
Silla Observatory, Chile, or the HIRES (High Resolution) spectrometer at
the Keck telescopes in Hawaii.
The major problem with Doppler spectroscopy is that it can only
measure movement along the line-of-sight, and so depends on a
measurement (or estimate) of the inclination of the planet’s orbit to
determine the planet’s mass. If the orbital plane of the planet happens to
line up with the line-of-sight of the observer, then the measured variation
in the star’s radial velocity is the true value. However, if the orbital plane
is tilted away from the line-of-sight, then the true effect of the planet on
the motion of the star will be greater than the measured variation in the
star's radial velocity, which is only the component along the line-of-sight.
As a result, the planet's true mass will be higher than expected. We do not
know the orientation of the orbit: how much it is tilted to our line of sight.
Since we can only measure the motion towards or away from us, we do
not necessarily measure the full velocity of the star.
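A tiny sketch of the correction (my illustration; the inclination value below is hypothetical): radial velocity yields only m sin(i), so the true mass is the measured minimum mass divided by sin(i).

import math

def true_mass(min_mass, inclination_deg):
    """True planet mass recovered from the radial-velocity minimum mass m*sin(i)."""
    return min_mass / math.sin(math.radians(inclination_deg))

# A measured minimum mass of 1.0 Jupiter mass, if the orbit is inclined 30 degrees
# from edge-on alignment's ideal, corresponds to a true mass of 2.0 Jupiter masses.
print(f"{true_mass(1.0, 30.0):.1f} Jupiter masses")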
To correct for this effect, and so determine the true mass of an extra-
solar planet, radial velocity measurements must be combined with
astrometric observations, which track the movement of the star across the
plane of the sky, perpendicular to the line-of-sight. Astrometric
measurements allow researchers to check whether objects that appear to be
high mass planets are more likely to be brown dwarfs.
The radial velocity method has certain selection effects: it is easier to
detect more massive objects and easier to detect objects on close-in orbits.
So, we don’t yet know whether smaller, Earth-like planets are common.
Also, the unknown tilt of the orbit plane means that the measured planet mass is only a
lower limit.
A further problem is that the gas envelope around certain types of stars
can expand and contract, so these stars are variable. This method is
unsuitable for finding planets around these types of stars, as changes in the
stellar emission spectrum caused by the intrinsic variability of the star can
swamp the small effect caused by a planet.
The radial velocity method is best at detecting very massive objects
close to the parent star (so-called "hot Jupiters"), which have the
greatest gravitational effect on the parent star, and so cause the largest
changes in its radial velocity. Observation of many separate spectral lines
and many orbital periods allows the signal to noise ratio of observations to
increase, thus increasing the chance of observing smaller and more distant
planets; planets like the Earth remain undetectable with current
instruments.
This has been a very productive technique used by planet hunters. The
method is distance independent, but requires high signal-to-noise ratios to
achieve high precision, and so is generally only used for relatively nearby
stars out to about 160 light-years from Earth. It easily finds massive
planets that are close to stars, but detection of those orbiting at great
distances requires many years of observation. Planets with orbits highly
inclined to the line of sight from Earth produce smaller wobbles, and are
thus more difficult to detect. One of the main disadvantages of the radial-
velocity method is that it can only estimate a planet’s minimum mass. The
posterior distribution of the inclination angle depends on the true mass
distribution of the planets. The radial-velocity method can be used to
confirm findings made by using the transit method (see the section on
this). When both methods are used in combination, then the planet’s true
mass can be estimated.
Results of Radial Velocity Searches So Far
Two groups (Marcy et al. at Berkeley; Mayor et al. at Geneva) have
now been searching for planets with this technique for almost 15 years.
They use special spectrometers with very stable platforms and calibration
combined with the largest optical telescopes on Earth. The limiting
velocity sensitivity is now down to just 1.5 meters/sec!
At least 50 stars have been found to have multiple planets. Because the
detection of multiple periodicities in the radial velocity curve requires
many more data points, the number of multiple planet systems is likely to
increase as the research teams acquire more data.
The exoplanets detected by this technique (> 520 objects as of 3/2011)
have characteristics very different from our solar system:
1. Large gas giants are found very close to the central star.
2. Most of the detected planets have large masses, comparable to the
mass of Jupiter; many are larger than Jupiter.
3. Many planets are on very elliptical orbits (about half of the
sample).
One method for detecting smaller planets is to observe stars less
massive than the Sun. Stars with masses of 0.3 to 0.5 that of the Sun are quite
common. They are fainter and cooler than the Sun, but could still
have a habitable zone 0.1 to 0.2 AU from the star.
In the past three years, 12 planets have been detected with masses of 2
to 15 times the mass of Earth, small enough that these may be rocky
planets. (Jupiter's mass is 318 x Earth’s mass). Two of them orbit within
the potential habitable zone around one of these fainter stars:
One recent example: four planets around the small red star GJ 581:

Mass of planet (Earth masses)    Distance from star
1.94                             0.03 AU (least massive planet yet detected!)
15.6                             0.04 AU
5.0                              0.07 AU (inner edge of habitable zone?)
7.7                              0.25 AU (habitable zone?)
Could it be that these faint red stars have habitable planets? As of 2010
two more candidate exoplanets were discovered around GJ 581.
Detecting Planets via Transits
When a planetary-sized object crosses in front of a star, we call it a
transit (since only a small percentage of the light is blocked, it’s not really
an eclipse). We see occasional transits of Venus or Mercury here in our
Solar System. So, another method of detecting a planet is to search for a
small decrease in the amount of light from a star when a planet crosses in
front of it. Of course, only a small fraction of the planetary systems will be
oriented so that a planet crosses in front of a star as seen from Earth.
From ground-based telescopes, quite a few transiting planets have
been detected. The decrease in the amount of light from the star during a
transit is about 0.5% - 2%, corresponding to a fairly large planet. See
Figure 2. From the percentage decrease and the duration of the brightness
decrease, we can determine the size of the planet:
• The drop in brightness during the transit tells us the fractional area
of the star that is blocked by the transiting planet.
• If we know the true cross-section area of the star (from stellar
models, compared to the Sun), then we can determine the cross-
section area of the planet, and thus its diameter.
When combined with radial velocity measurements to determine the mass
of the planet, we can then calculate the density of the planet, a real
physical quantity that tells us something about the nature of the object (gas
giant versus rocky planet).
While the other methods provide information about a planet’s mass,
this photometric method can determine the radius of a planet. If a planet
crosses (transits) in front of its parent star’s disk, then the observed visual
brightness of the star drops a small amount. The amount the star dims
depends on the relative sizes of the star and the planet. For example, in the
case of HD 209458, the star dims 1.7%.
Figure 2. Example of the transit method. The bottom figure shows the dip in light as the
planet passes in front of the star.
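As an illustrative calculation (mine, not the article's), the transit depth gives the planet-to-star radius ratio directly; the conversion to kilometers below assumes, purely for illustration, a roughly Sun-sized star, and the function name is my own.

import math

def radius_ratio(transit_depth):
    """Transit depth is the fractional drop in flux: depth = (R_planet / R_star)**2."""
    return math.sqrt(transit_depth)

# HD 209458 dims by about 1.7% during transit (figure quoted in the text)
ratio = radius_ratio(0.017)
print(f"R_planet / R_star ~ {ratio:.2f}")            # ~0.13

R_SUN_KM = 696_000
R_JUPITER_KM = 71_492
r_planet_km = ratio * R_SUN_KM                       # assumes a Sun-sized star (illustrative)
print(f"~{r_planet_km:,.0f} km, ~{r_planet_km / R_JUPITER_KM:.1f} Jupiter radii")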
The main advantage of the transit method is that the size of the planet
can be determined from the light curve. When combined with the radial
velocity method (which determines the planet's mass) one can determine
the density of the planet, and hence learn something about the planet's
physical structure. The nine planets that have been studied by both
methods are by far the best-characterized of all known exoplanets.
The transit method also makes it possible to study the atmosphere of
the transiting planet. When the planet transits the star, light from the star
passes through the upper atmosphere of the planet. By studying the high-
resolution stellar spectrum carefully, one can detect elements present in
the planet's atmosphere. A planetary atmosphere (and the planet itself, for that
matter) could also be detected by measuring the polarization of the
starlight as it passes through or is reflected off the planet's atmosphere.
This method has two major disadvantages. First, planetary transits are
only observable for planets whose orbits happen to be perfectly aligned
from the astronomers' vantage point. The probability of a planetary orbital
plane being directly on the line-of-sight to a star is the ratio of the
diameter of the star to the diameter of the orbit. About 10% of planets with
small orbits have such alignment, and the fraction decreases for planets
with larger orbits. For a planet orbiting a sun-sized star at 1 AU, the
probability of a random alignment producing a transit is 0.47%. However,
by scanning large areas of the sky containing thousands or even hundreds
of thousands of stars at once, transit surveys can in principle find extra-
solar planets at a rate that could potentially exceed that of the radial-
velocity method, although it would not answer the question of whether any
particular star is host to planets.
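A quick sketch of that geometric probability (my illustration, using the approximation stated above, transit probability ~ R_star / a, with standard values for the Sun and the AU):

R_SUN_KM = 696_000
AU_KM = 1.496e8

def transit_probability(r_star_km, a_km):
    """Geometric chance that a randomly oriented orbit transits: ~ R_star / a."""
    return r_star_km / a_km

p = transit_probability(R_SUN_KM, 1.0 * AU_KM)
print(f"{p:.2%}")   # ~0.47%, the figure quoted for a Sun-sized star at 1 AU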
Second, the method suffers from a high rate of false detections. A
transit detection requires additional confirmation, typically from the
radial-velocity method.
The secondary eclipse (when the planet is blocked by its star) allows
direct measurement of the planet’s radiation. If the star’s photometric
intensity during the secondary eclipse is subtracted from its intensity
before or after, only the signal caused by the planet remains. It is then
possible to measure the planet’s temperature and even to detect possible
signs of cloud formations on it. In March 2005, two groups of scientists
carried out measurements using this technique with the Spitzer Space
Telescope. The two teams, from the Harvard-Smithsonian Center for
Astrophysics, led by David Charbonneau, and the Goddard Space Flight
Center, led by L. D. Deming, studied the planets TrES-1 and HD 209458b
respectively. The measurements revealed the planets’ temperatures: 1,060
K (790°C) for TrES-1 and about 1,130 K (860°C) for HD 209458b.
However some transiting planets orbit such that they do not enter
secondary eclipse relative to Earth.
The CoRoT mission, developed primarily by the French Space
Agency, was the first space mission dedicated to observing transits. It has
been in orbit around Earth for over 4 years and can reliably detect
brightness changes of 0.1%. The 17th CoRoT exoplanet was announced in
2010.
Detecting Planets through Gravitational Microlensing
Gravitational microlensing occurs when the gravitational field of a star
acts like a lens, magnifying the light of a distant background star. This
effect occurs only when the two stars are almost exactly aligned. Lensing
events are brief, lasting for weeks or days, as the two stars and Earth are
all moving relative to each other. More than a thousand such events have
been observed over the past ten years. See Figure 3 for an illustration.
Figure 3. Illustration of the microlensing search for a planet
If the foreground lensing star has a planet, then that planet’s own
gravitational field can make a detectable contribution to the lensing effect.
Since that requires a highly improbable alignment, a very large number of
distant stars must be continuously monitored in order to detect planetary
microlensing contributions at a reasonable rate. This method is most
fruitful for planets between Earth and the center of the Galaxy, as the
Galactic center provides a large number of background stars.
In 1991, astronomers Shude Mao and Bohdan Paczyński of Princeton
University first proposed using gravitational microlensing to look for
exoplanets. Successes with the method date back to 2002, when a group of
Polish astronomers (Andrzej Udalski, Marcin Kubiak and Michał
Szymański from Warsaw, and Bohdan Paczyński), during project OGLE
(the Optical Gravitational Lensing Experiment), developed a workable
technique. During one month they found several possible planets, though
limitations in the observations prevented clear confirmation. Since then,
four confirmed extrasolar planets have been detected using microlensing.
As of 2006 this was the only method capable of detecting planets of
Earthlike mass around ordinary main-sequence stars.
A notable disadvantage of the method is that the lensing cannot be
repeated because the chance alignment never occurs again. Also, the
detected planets will tend to be several kiloparsecs away, so follow-up
observations with other methods are usually impossible. However, if
enough background stars can be observed with enough accuracy then the
method should eventually reveal how common Earthlike planets are in the
Galaxy.
On May 18, 2011, astronomers announced the discovery of a new class
of Jupiter-sized planets floating alone in the dark of space, away from the
light of a star. The team believes these lone worlds are probably outcasts
from developing planetary systems and, moreover, they could be twice as
numerous as the stars themselves. These are free-floating planets.
The discovery is based on a joint Japan-New Zealand microlensing
survey that scanned the center of the Milky Way during 2006 and 2007,
revealing evidence for up to 10 free-floating planets roughly the mass of
Jupiter. The isolated orbs, also known as orphan planets, are difficult to
spot, and had gone undetected until now. The planets are located at an
average approximate distance of 10,000 to 20,000 light years from Earth.
The survey, the Microlensing Observations in Astrophysics (MOA), is
named in part after a giant wingless, extinct bird family from New
Zealand called the moa. A 5.9-foot (1.8-meter) telescope at Mount John
University Observatory in New Zealand is used to regularly scan the
copious stars at the center of our Galaxy for gravitational microlensing
events.
This could be just the tip of the iceberg. The team estimates there are
about twice as many free-floating Jupiter-mass planets as stars. In
addition, these worlds are thought to be at least as common as planets that
orbit stars. This adds up to hundreds of billions of lone planets in our
Milky Way alone. The team sampled a portion of the Galaxy, and based
on these data, can estimate overall numbers in the Galaxy.
The study, led by Takahiro Sumi from Osaka University in Japan,
appears in the May 19 (2011) issue of the journal Nature. The survey is
not sensitive to planets smaller than Jupiter and Saturn, but theories
suggest lower-mass planets like Earth should be ejected from their stars
more often. As a result, they are thought to be more common than free-
floating Jupiters.
Detecting Planets through Timing
When a double star system is aligned such that - from the Earth’s
point of view - the stars pass in front of each other in their orbits, the
system is called an eclipsing binary star system. The time of minimum
light, when the star with the brighter surface area is at least partially
obscured by the disc of the other star, is called the primary eclipse, and
approximately half an orbit later, the secondary eclipse occurs when the
brighter surface area star obscures some portion of the other star. These
times of minimum light, or central eclipse, constitute a time stamp on the
system, much like the pulses from a pulsar (except that rather than a flash,
they are a dip in the brightness). If there is a planet in circum-binary orbit
around the binary stars, the stars will be offset around a binary-planet
center of mass. As the stars in the binary are displaced by the planet back
and forth, the times of the eclipse minima will vary; they will be too late,
on time, too early, on time, too late, etc. The periodicity of this offset may
be the most reliable way to detect extra-solar planets around close binary
systems.
Direct Imaging
As mentioned previously, planets are extremely faint light sources
compared to stars and what little light comes from them tends to be lost in
the glare from their parent star. So in general, it is very difficult to detect
them directly.
Ultimately, the goal is to image planets around other stars directly and
to wring all the information possible out of those photons (planet
temperature from infrared and atmospheric composition from infrared and
visible light spectra).
The first direct detection was a single pixel in an image using an
opaque disk within the camera (called a coronagraph) to block the star's
light and allow a long-exposure image. This was planet Fomalhaut b,
around the bright star Fomalhaut.[vi] The method used a series of selections:
• Observe a star that is larger, hotter, and more massive than the
Sun. Thus the planet forming disk of material was presumably
larger and more massive, and the habitable zone farther from the
star. An orbiting planet may be farther from the star, making it
easier to detect.
• Observe at near-infrared wavelength where the brightness contrast
between star and planet is not quite so extreme.
• Select a young star, where a larger planet has not lost all of its
initial heating and will be brighter in the infrared.
They acquired images of Fomalhaut with space-based and ground-
based telescopes, using an opaque disk in the camera to block the light
from Fomalhaut. Images taken two years apart consistently show a faint
dot that has moved slightly, consistent with the orbital velocity expected at
that distance from Fomalhaut. The planet is about 119 AU from the star.
Since the luminosity of Fomalhaut is 16 times that of the Sun, the planet
would receive illumination from the star comparable to Neptune in our
solar system.
Some projects to equip ground-based telescopes with planet-imaging-
capable instruments include the Gemini telescope (GPI), the Very Large
Telescope (SPHERE), and the Subaru telescope (HiCIAO).
In July 2004, a group of astronomers used the European Southern
Observatory's Very Large Telescope array in Chile to produce an image
of 2M1207b, a companion to the brown dwarf 2M1207. In December
2005, the planetary status of the companion was confirmed. The planet is
believed to be several times more massive than Jupiter and to have an
orbital radius greater than 40 AU.
Up until the year 2010, telescopes could only directly image
exoplanets under exceptional circumstances. Specifically, it is easier to
obtain images when the planet is especially large (considerably larger than
Jupiter), widely separated from its parent star, and hot so that it emits
intense infrared radiation. However, in 2010 a team from NASA's Jet
Propulsion Laboratory demonstrated that a vortex coronagraph could
enable small telescopes to directly image planets. They did this by imaging
the previously imaged HR 8799 planets using just a 1.5 m portion of the
Hale Telescope. See Figure 4. The planet masses are 10, 10, and 7 times that of
Jupiter.
Images taken in 2003 and reanalyzed in 2008 revealed a planet
orbiting Beta Pictoris which in 2009 was observed to have moved to the
other side of the star. See Figure 5.
Figure 4. Direct image of exoplanets around the star HR 8799 using a vortex coronagraph
on a 1.5 m portion of the Hale telescope.
Figure 5. ESO image of a planet near Beta Pictoris. The planet is the bright spot at about 11
o'clock near the center of the image. The dotted circle (upper right) is the size of the orbit
of Saturn scaled to this star.
In September 2008, an object was imaged at a separation of 330 AU
from the star 1RXS J160929.1-210524, but it was not until 2010 that it
was confirmed to be a companion planet to the star and not just a chance
alignment. An additional system, GJ 758, was imaged in November 2009,
by a team using the HiCIAO instrument of the Subaru Telescope.
The Kepler Mission
The challenge now is to find terrestrial planets (i.e., those one half to
twice the size of the Earth), especially those in the habitable zone of their
stars where liquid water and possibly life might exist. As of September
2010, Gliese 581 g, fourth planet of the red dwarf star Gliese 581, is the
strongest candidate for a terrestrial exoplanet orbiting in the habitable zone
surrounding its star, although the existence of Gliese 581 g has been
questioned by another team of astronomers, and it is now listed as
unconfirmed at The Extra-solar Planets Encyclopaedia.
Named for Johannes Kepler, the Kepler Mission was launched March
6, 2009 riding aboard a Delta II rocket. The Kepler spacecraft watches a
patch of space for indications of Earth-sized planets moving around stars
similar to the Sun. There are over 100,000 stars like the Sun in the area.
Using special detectors similar to those used in digital cameras, Kepler
will look for a slight dimming in the stars as planets pass between the stars
and Kepler - the transit method. The observatory’s place in space will
allow it to watch the same stars constantly throughout its multi-year
mission. It is a job only a computer could love.
Figure 6 shows the field of view of the Kepler instrument. It points
between the constellations of Cygnus and Lyra at a dense field of stars.
Data from the Kepler mission have been used to estimate that there are at
least 50 billion planets in our own Galaxy.
Kepler is sensitive enough to detect planets even smaller than Earth.
By scanning a hundred thousand stars simultaneously, it will not only be
able to detect Earth-sized planets, it will be able to collect statistics on the
numbers of such planets around sun-like stars.
The goal is to find Earth-like planets by searching for transits that
cause a brightness decrease of just 1/10,000 with a periodicity of about 1
year - such detections will require all four years of data, in order to record
repeated transits with a constant period.
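As a rough check on that 1/10,000 figure (my calculation, not the article's), an Earth-sized planet crossing a Sun-sized star blocks a fraction (R_Earth / R_Sun)^2 of the light; the constants below are standard values.

R_SUN_KM = 696_000
R_EARTH_KM = 6_371

depth = (R_EARTH_KM / R_SUN_KM) ** 2   # fractional dimming for an Earth-Sun analog transit
print(f"depth ~ {depth:.1e} (about 1 part in {1 / depth:,.0f})")
# ~8e-5, i.e. on the order of the 1/10,000 sensitivity target quoted in the text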
Figure 6. Field of view of Kepler
As of February 2011, Kepler had identified 1,235 unconfirmed
planetary candidates associated with 997 host stars, based on the first four
months of data from the space-based telescope, including 54 that may be
in the habitable zone. Six candidates in this zone were thought to be
smaller than twice the size of Earth, though a more recent study found that
one of the candidates is likely much larger and hotter than first reported.
• 68 planets with radii <1.25 Earth radii
• 288 planets with radii between 1.25 and 2 Earth radii
• 662 planets with radii between 2 and 6 Earth radii (Neptune-sized)
• 165 Jupiter-sized planets with radii between 6 and 15 Earth radii
  (Jupiter's radius is 11 Earth radii)
• 19 objects larger than 2 Jupiter radii
About 75% of these planets are smaller than Neptune (4 Earth radii).
So far, 408 of the planets are in multiple-planet systems. And these results
are just from 4 months of data, so only short-period, close-in orbiting
planets are included. These are called ‘candidate’ planets because follow-
up observations are not yet available.
The Kepler dataset is so massive that astronomers are studying other
things with the dataset. An international team of asteroseismologists, led by
the University of Birmingham, has used data from the NASA Kepler
Mission to sample the ‘stellar music’ of 500 stars similar to the Sun,
according to research published 8 April 2011 in the journal Science. The
team used the information from these natural resonances, which is coded
in pulses of starlight, to measure the properties of these stars and will now
be able to compare their findings with predictions based on models of the
Milky Way.
Figure 7. Artist's conception of Kepler-11.
Figure 7 is an artist's conception of the Kepler-11 planetary system
compared to our own. It shows the Kepler-11 planetary system and our
solar system from a tilted perspective to demonstrate that the orbits of
each lie on similar planes. Kepler-11 has the fullest, most compact
planetary system yet discovered beyond our own. All six planets orbiting
Kepler-11 are larger than Earth, with the largest ones being comparable in
size to Uranus and Neptune. If placed in our solar system, the outermost
planet would orbit between Mercury and Venus, and the other five planets
would orbit between Mercury and our Sun. The innermost planet, Kepler-11b,
is ten times closer to its star than Earth is to the Sun.
Summing It Up
exoplanets.eu (May 2011)

Planetary candidates detected by radial velocity or astrometry
  419 planetary systems
  500 planets
  50 multiple planet systems

Planetary candidates detected by transits
  121 planetary systems
  128 planets
  10 multiple planet systems
  1,235 candidates from Kepler, 16 confirmed

Planetary candidates detected by microlensing
  11 planetary systems
  12 planets
  1 multiple planet system

Planetary candidates detected by direct imaging
  21 planetary systems
  24 planets
  1 multiple planet system

Planetary candidates detected by timing
  7 planetary systems
  12 planets
  4 multiple planet systems
After detecting hundreds of exoplanets in the 15 years through 2010, we
are on the exciting threshold of detecting thousands of exoplanets and
identifying Earthlike candidates. Perhaps someday soon we will know if
there is life out there.
i. Jacob, W. S. (1855). "On Certain Anomalies presented by the Binary Star 70 Ophiuchi."
Monthly Notices of the Royal Astronomical Society, 15, 228.
ii. See, T. J. J. (1896). "Researches on the Orbit of 70 Ophiuchi, and on a Periodic
Perturbation in the Motion of the System Arising from the Action of an Unseen
Body." The Astronomical Journal, 16, 17.
iii. van de Kamp, P. (1969). "Alternate dynamical analysis of Barnard's star."
Astronomical Journal, 74, 757.
iv. Campbell, B.; Walker, G. A. H.; Yang, S. (15 August 1988). "A search for substellar
companions to solar-type stars." Astrophysical Journal, 331, 902.
v. Mayor, M. et al. (2009). "The HARPS search for southern extra-solar planets XVIII:
An Earth-mass planet in the GJ 581 planetary system." Astronomy and Astrophysics,
507, 487.
vi. Kalas, P. et al. (2009). "Fomalhaut b: Direct Detection of a Jupiter-mass Object
Orbiting Fomalhaut." Bull. Amer. Astron. Soc., 41, 491.
Innovations in STEM Education
Sylvia M. James and Cora B. Marrett
National Science Foundation
Abstract
Recent changes in the K-12 career and technical education (CTE) sector
suggest that the effective strategies utilized in this context may also be of
interest to those concerned about science, technology, engineering and
mathematics (STEM) education in the United States. Today’s
comprehensive CTE programs incorporate cutting edge technologies,
sustainable practices, and workforce preparation in collaboration with
local business and industry, often while also preparing students for
postsecondary study. Concurrently, it has become apparent that
preparing students for the future STEM workforce requires the
utilization of all of the educational and community resources at our
disposal, and increasingly educators and policymakers are looking to the
informal sector. High-quality STEM enrichment programs offered
outside of the classroom setting may provide authentic learning
experiences and increase interest and engagement in STEM and related
careers, especially among diverse populations. However, the link
between CTE and informal science education remains relatively
unexplored. Systematic study of career and technical secondary schools
could strengthen our understanding of the various paths through which
successful STEM education might be achieved. Additionally, closer
examination of seemingly common goals in informal science education
programs for youth may offer additional tools to prepare students for a
world in which STEM knowledge and skills are valuable and necessary
assets.
Innovations in STEM Education
The educational system in the United States has come under increasing
scrutiny as school districts grapple with the challenge of preparing students
for citizenry in an increasingly complex and technologically advanced
global economy. Policymakers enact legislation that attempts to provide the
tools for ensuring that high quality education is available to all students.
National and common core standards aim to provide consistency and
continuity in student knowledge and understanding of foundational
concepts. Leaders from business and industry collaborate with academia
to identify twenty-first century skills, taking actions often with the same
enthusiasm evident in a report from the 1990s from the U.S. Department of
Labor: the Secretary's Commission on Achieving Necessary Skills.'
Perhaps the quality education the U.S. seeks can be achieved through a
broad range of innovative strategies. This article explores changes in the
career and technical education sector at the K-12 level, suggesting that the
sector warrants enhanced attention by those concerned about science,
technology, engineering and mathematics (STEM) education in the United
States. The sector deserves closer analysis, given how different the
education it now provides is from the vocational emphases of the past.
Moreover, the changes in this sector suggest that key aims of STEM
education can be achieved through various mechanisms - an important
condition in a society marked by substantial diversity. Although there is
some systematic work on career and technical secondary schools, the link
between them and the world of informal science education remains
relatively unexplored. Based, however, on findings from informal science
education programs, especially those targeting secondary school youth in
comprehensive enrichment programs, there are reasons to conclude that
stronger ties between this kind of school and supportive activities outside of
the traditional school day could contribute to greater student success. More
systematic study of career and technical secondary schools could strengthen
our understanding of the various paths through which successful STEM
education might be achieved.
Our call for greater cognizance of career and technical education results
from two developments. The first: our visit to a highly transformed career
school in the Washington, DC area - the Phelps Architecture, Construction
and Engineering High School. The second: the conclusion from a recent
report that we know far less than is desired about successful schools, and
especially about how different attributes of schools matter for different
populations of students.” To set the stage for the discussion, we begin with
an overview of the challenges associated with STEM education in the
United States.
STEM Education Challenges
Several indicators imply that the U.S. is not succeeding in preparing its
pre-college students for work and life in a world characterized increasingly
by advances in science and technology. Some of the indicators appear in the
Program for International Student Assessment (PISA). The Program
measures mathematics and science literacy among countries belonging to
the Organisation for Economic Co-operation and Development (OECD).
Oft-cited are the outcomes from comparisons undertaken in 2009. The
outcomes: lower average literacy scores in mathematics for students in the
United States than the average across the OECD countries. That was the
result as well in 2003 and 2006. Although there was improvement from
2006 to 2009, in the latter year the US still lagged behind seventeen other
nations.
The scores in science were slightly more encouraging. In 2009, the level
of science literacy among U.S. students was comparable to the OECD
average and higher than what had been found for the U.S. in 2006.
Nonetheless, U.S. students’ average scores still fell behind those in twelve
of the other OECD countries.*"
A bleak picture emerges as well from the National Assessment of
Educational Progress (NAEP). In 2009, fourth-, eighth-, and twelfth-graders
were tested in the physical, life, and Earth and space sciences and
were ranked at one of three levels: basic, proficient, or advanced. Only 34
percent of the fourth-graders performed at or above the proficient level,
meaning that they “demonstrated competency” in difficult material. At the
secondary level, eighth- and twelfth-graders’ scores at the proficient level
were even lower at 30 percent and 21 percent respectively.*'
The results from 2009 cannot be compared with those of prior years,
due to modifications made to the assessment. What has remained
consistent, however, are achievement gaps in science between white
students and other racial and ethnic groups. For mathematics, the results
are somewhat more encouraging. Although the 2009 data continued to
show gaps between population groups, the size of those gaps had narrowed
from previous years.'^
The unfavorable international comparisons have helped promote the
development of specialized STEM schools. Although some of the emerging
career and technical high schools share selected traits with such specialized
schools, contrasts can be found. To highlight the contrasts we sketch first
characteristics of special schools for STEM education.
STEM Schools: A Model for Success
Educational researchers, policymakers, parents and other stakeholders
often laud the potential of STEM schools, institutions in which STEM
disciplines are central and criteria for admissions are selective. The Thomas
Jefferson High School for Science and Technology in northern Virginia,
founded in 1985 from a collaboration involving the Fairfax County School
System and local business, illustrates the specialized school model. The
school offers a comprehensive curriculum, which emphasizes critical
thinking skills and the integration of STEM with humanities, as in the IBET
(Integrated Biology, English, and Technology) course, a requirement for all
incoming freshmen. The freshman class typically includes approximately
480 ninth graders, selected from over 3,300 applicants residing in
Arlington, Fairfax, Fauquier, Loudoun and Prince William Counties, in
addition to the cities of Fairfax and Falls Church. Thomas Jefferson
announces its quest for highly motivated students with good grades
(although not necessarily all A’s) and “a passion” for STEM.'"'
The North Carolina School of Science and Mathematics represents
another specialized STEM school. This coeducational, residential
institution was launched in 1978 by the North Carolina General Assembly
to serve junior and senior high school students from all 13 congressional
districts in the state. A competitive admissions process selects only an
estimated 340 students per class, drawn so as to produce demographic
diversity. The college admission rate for graduates: 99 percent.
Contributing to the high rate is the grounding the school provides in core
subjects, coursework in science, mathematics, computer science,
humanities, and laboratory skills as well as in research.'"”
Not all specialized STEM schools are of recent vintage. The Bronx
High School of Science - noted for its four years of laboratory science,
three years of mathematics and foreign language, a year of research during
the sophomore year, and an extensive array of advanced placement courses
- opened its doors in 1938. Today, with over 2900 students, it is one of
eight specialized schools serving a diverse group of gifted students from
across New York City.'"”^
The Bronx High School of Science and similar institutions belong to a
national organization that endeavors to promote collaboration, strong
programs in education, research and practice as well as policies to advance
such schools. The National Consortium for Specialized Secondary Schools
of Mathematics, Science and Technology (NCSSSMST) was established in
1988 to support STEM schools, specifically secondary institutions
“...whose primary purpose is to attract and academically prepare students
for leadership in mathematics, science, and technology.’”''
The Vocational School Sector
Phelps High School does not share its history with the Bronx High
School of Science or many of the other institutions within the National
Consortium. Instead, its roots are in the world of vocational education. Such
education traditionally prepared students for the trades - carpentry,
welding, and automobile repair, for example. The description of vocational
education in Wikipedia portrays its job-centered character. ‘‘Vocational
education and training prepares trainees for jobs that are based on manual or
practical activities, traditionally non-academic, and totally related to a
specific trade, occupation or vocation. It is sometimes referred to as
technical education as the trainee directly develops expertise in a particular
group of techniques or technology.”^
Phelps High School emerged as a product of vocational education and
of racial segregation in the District of Columbia. Opened in 1933, Phelps
provided African American students with training primarily for jobs in
construction. The underlying philosophy: “all forms of labor, whether with
the head or hand, are honorable.”^*
But the growing discontent nationally with vocational education and the
civil rights movement changed the fortunes of Phelps. Nationally, students
from all backgrounds moved away from work-centered education to
academic programs offering preparation for higher education. That move
was evident in the District of Columbia, resulting in the closure of one
vocational school after another.
Reportedly, perspectives from the civil rights movement played directly
into the fate of Phelps. A Washington Post article reported in 2008 that
vocational education sputtered in the civil rights era, “attacked by activists
as an attempt to steer black students into blue-collar jobs and out of the
college-prep track, where many whites were.”^*^ Such attacks undoubtedly
made a difference, but even before the civil rights era, it was not uncommon
for African American students and their parents to look askance at the
offerings at Phelps and the students pursuing them. The emphasis on
preparation for the trades simply did not prove alluring to many in the
population.^”'
Changing conditions, along with declining enrollments, a decaying
infrastructure and countless other problems led the District to close Phelps
in 2002. The current Phelps Architecture, Construction and Engineering
High School opened in 2008 with an orientation different from what had
prevailed earlier.
On Career and Technical Education
The new Phelps differs from its predecessor in its infrastructure as well
as its programs. A completely remodeled building houses state-of-the-art
laboratories and green technologies that have earned Phelps the designation
of LEED Silver Certified Green School. The expertise of such
organizations as the Washington Architectural Foundation, Associated
General Contractors, and the Mid-Atlantic Regional Council of Carpenters
contributed substantially to the planning for the modern structure of Phelps.
Programmatically, Phelps reflects the transition of vocational education
to career and technical education, or CTE. This form of education aims to
"empower students for effective participation in a global economy as
world-class workers and citizens.”^*'" It focuses heavily on the content of
courses, aiming to provide students with the knowledge needed for
additional academic work or immediate employment.
The CTE emphasis at Phelps appears in the eight majors the school
offers, ranging from architecture to welding and sheet metal. The latter
subjects have parallels with vocational education in the past. That is not the
case for the major in architecture or for what the Cisco Networking
Academy offers. In a dedicated CADD laboratory, students can try their
hands at design or use joysticks and flat screen monitors to develop and
assess their skills for operating cranes in the heavy equipment laboratory.
Students learn about sustainable technologies by monitoring energy from
photovoltaic solar arrays, wind turbines, and a geothermal cold-water loop
that are part of Phelps' infrastructure.
The technologies and laboratories now available place the students
light-years away from their predecessors at Phelps. But in their courses, the
students are exposed to the conceptual knowledge found in the programs
outside of the CTE sector, since the school offers a complete college
preparatory program as well. In addition, the hands-on instruction, the
simulations, and the design work give the students the authentic encounters
often associated with preparation for and inspiration in STEM education.
Such engagement could be especially significant for students likely to have
limited experiences with STEM activities outside of the school setting or
for those who might consider pursuing STEM degrees at the college level.
The recent study from the National Research Council (NRC) on
successful schools described CTE schools as major contenders in today’s
STEM school arena. Unfortunately, the research base is limited for all
types of STEM schools, including those within the CTE orbit. We offer our
impressions of the contributions CTE schools can make, based on our
understanding of programming at Phelps. Those impressions cannot
substitute, of course, for systematic study on how such schools in fact
prepare students for citizenship and work in a world where innovation is
sought.
Beyond the School
For many students experiences outside of the school setting reinforce
the learning pursued within the borders of the institution. For others,
STEM-related activities are highly constrained outside of those borders and
they may benefit from additional access. The NRC publication, Learning
Science in Informal Environments: People, Places and Pursuits, notes that
informal learning encompasses experiences in non-school environments
that can be characterized as lifelong, life-wide, and life deep. A broad range
of venues and social settings may serve as hubs for informal STEM learning
including museums, science centers, zoos, aquariums, libraries, community
settings, and also the home.^'" The potential of the informal sector to
strengthen STEM education accounts for programs the National Science
Foundation supports through the Division of Research on Learning in
Formal and Informal Settings (DRL). Next we provide a brief discussion of
theories and perspectives associated with informal learning programs and
sample strategies, while considering the potential for collaboration with
schools.
Theoretical Perspectives and Program Designs
Informal science education programs are often informed by
sociocultural theories of learning, which focus on learning in context;
addressing cultural issues important to learning; and extending the notion of
learning beyond the accumulation of facts to include participation, identity,
access, and engagement. Students take an active role in their learning as
they co-construct knowledge with mentors in learning communities.
Additionally, the use of authentic learning experiences that make
connections to day-to-day activities and promote learning across contexts
provides a seamless connection between informal and formal learning.^''"
The NRC report on informal learning also cites the use of sociocultural
perspectives to support project designs, which are often supplemented by
cognitive learning theories and positive youth development approaches.
Additionally, a summary of study findings indicates positive impacts on
youth STEM and career interests, in addition to academic gains, although
there is still a reliance on a wide assortment of evaluations to support
impacts associated with cognitive gain and skill development/'""
Interestingly, several parallels can be found between the aims of CTE
and informal learning programs for youth, such as promoting interest in
STEM and STEM careers and providing authentic learning experiences.
Like their formal education counterparts, informal youth programs
frequently emphasize approaches that research suggests are important for
steering students into STEM career paths, by building on interest during
early adolescence and offering intensive research experiences during high
school. Many of the projects target youth who are underserved and
underrepresented in STEM, who would not normally have access to high
quality informal science experiences.
In a longitudinal study of urban girls participating in a year-round
natural science enrichment program, the participants credited the program
activities, including classes, field trips, and STEM content with influencing
their educational and career paths. While all students did not pursue STEM
majors, the college enrollment rate exceeded 93 percent.^^ Youth programs
that enable participants to make contributions to their community through
work experiences and examination of environmental issues are also
common. Apprenticeship models commonly used in CTE schools are
adopted in non-school contexts because they enable students to learn about
the culture and practice of science, develop identities as scientists, use the
language of science in a supportive environment, develop in-depth
understanding of concepts, and access diverse tools.™ Not surprisingly,
social media is a frequently utilized mechanism for supporting STEM
learning in informal settings.^"
Collaborations with Schools
Recognizing the growing interest in and potential associated with
learning in informal learning settings, in 2007, NSF funded a demonstration
project designed to capitalize on the strengths of both formal and informal
learning settings. The Academies for Young Scientists (AYS) program
targeted students in grades K-8, and the 16 projects had the common goal of
integrating STEM across learning contexts for the express purpose of
augmenting interest and awareness of STEM careers.^^*'" At a culminating
conference, several important trends emerged that may contribute to the
existing repertoire of findings related to informal STEM learning. Like
many informal science education experiences, activities were tailored to
meet the needs of the local population, allowing for a great deal of
flexibility in design and scaffolding to support participant interest and
exploration in STEM. The programs were structured to allow for voluntary,
self-directed learning that is not typically a hallmark of classroom STEM
instruction, resulting in enrichment experiences for students at both ends of
the academic spectrum - those who are in need of additional STEM
experiences as well as students who already exhibit some mastery in the
subject matter. For example, participation in AYS programs was often
associated with engagement of youth who were not highly motivated by
“school STEM” which was enlightening for their classroom teachers.^'"
Collaborations between formal and informal science education
institutions such as those in AYS are of interest to many. The Center for
Advancement of Informal Science Education (CAISE) released a report in
March 2010 focused on this very topic. The report emphasized the benefits
of such collaborations including stronger science programs that incorporate
best practices associated with informal settings, emergent learning
communities, and increased equity and access for those who may be
underserved and underrepresented in STEM.^'"'
Despite the inference that informal science learning programs may have
some goals in common with CTE schools, there is little empirical evidence
at this time to support this claim or the suggestion that there might be some
benefit to a closer association between the two. However, the potential for
informal science education to strengthen the science learning in schools is
an idea that has wide appeal.
Summary
CTE schools offer excellent examples of interventions that can be
utilized to meet the goal of preparing young people to enter the future
STEM workforce. As these schools continue to emerge as educational
exemplars, they may offer models that can support STEM education goals.
Informal science learning also continues to gain prominence, as an
analysis of research provided solid evidence that learning occurs in
nonschool settings through "everyday experiences, designed settings, and
programs."[xxvii] Several federal reports have lauded the strengths of high-
quality informal science education programs, including the ability to foster
increased interest, motivation, and the pursuit of STEM education and
career trajectories in youth from diverse backgrounds and under-resourced
communities.[xxviii, xxix]
The prevailing theme among educators and policymakers appears to be
that the full responsibility for the preparation of a STEM-literate citizenry
should not fall upon schools, even well-equipped STEM and CTE schools,
especially when nonschool activities are an untapped resource for
cultivating student interest and engagement in STEM. All that remains is to
address the paucity of research documenting the impacts of
informal STEM experiences and the cumulative impacts of rich
collaborations between schools and informal science education institutions.
More research is needed to understand CTE schools and other novel
approaches to STEM education and workforce preparation utilized in both
formal and informal learning environments, which are of particular benefit
to students who have traditionally been underserved in STEM.
Notes
i. U.S. Department of Labor, The Secretary's Commission on Achieving Necessary Skills. (1991). What work requires of schools: A SCANS report for America 2000. Retrieved from http://www.scans.jhu.edu/NS/HTML/AboutCom.htm
ii. National Research Council. (2011). Successful K-12 STEM Education: Identifying effective approaches in science, technology, engineering and mathematics. Washington, DC: The National Academies Press.
iii. OECD. (2010). PISA 2009 at a glance. Retrieved from http://dx.doi.org/10.1787/9789264095298-en
iv. National Center for Education Statistics. (2010). The nation's report card: Grade 12 reading and mathematics 2009 national and pilot state results (NCES 2011-455). Washington, DC: U.S. Department of Education.
v. National Center for Education Statistics. (2011). The nation's report card: Science 2009 (NCES 2011-451). Washington, DC: U.S. Department of Education.
vi. Thomas Jefferson High School for Science and Technology. (2011). About TJ. Retrieved from http://www.tjhsst.edu/
vii. North Carolina School of Science and Mathematics. 2010-2011 Profile. Retrieved from http://www.ncssm.edu/
viii. The Bronx High School of Science. (2011). Retrieved from http://www.bxscience.edu/
ix. National Consortium for Specialized Secondary Schools of Mathematics, Science and Technology. (2006). Overview. Retrieved from http://www.ncsssmst.org/default.aspx
x. Wikipedia. (2011). Vocational Education. Retrieved from http://en.wikipedia.org/wiki/Vocational_education
xi. (2011). Education Design Showcase. Retrieved from http://www.educationdesignshowcase.com/View.esiml?pid=247&lastsearch=grade%5Fid%3D7
xii. Haynes, V. D. (2008, May 11). "At D.C. Phelps high, a return to the future." Washington Post. Retrieved from http://www.washingtonpost.com/wp-dyn/content/article/2008/05/10/AR2008051002360.html
xiii. For example, one of our group, Howard Goines, had attended the DC public schools and attested to the disdain his classmates sometimes expressed towards Phelps.
xiv. Cumberland County Schools. (2011). Career and technical education. Retrieved from http://www.cte.ccs.k12.nc.us/
xv. National Research Council. (2009). Learning science in informal environments: People, places, and pursuits. Washington, DC: The National Academies Press.
xvi. Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Cambridge, MA: Harvard University Press.
xvii. Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. New York, NY: Cambridge University Press.
xviii. National Research Council. See note xv.
xix. Roberts, L. F. & Wassersug, R. J. (2009). Does doing scientific research in high school correlate with students staying in science? A half-century of retrospective study. Research in Science Education, 39, 251-256.
xx. Tai, R. H., Liu, C. Q., Maltese, A. V. & Fan, X. (2006). Planning early for careers in science. Science, 312(5777), 1143-1144.
xxi. Fadigan, K. A. & Hammrich, P. L. (2004). A longitudinal study of the educational and career trajectories of female participants of an urban informal science education program. Journal of Research in Science Teaching, 41, 825-860.
xxii. Richmond, G., & Kurth, L. A. (1999). Moving from outside to inside: High school students' use of apprenticeships as vehicles for entering the culture and practice of science. Journal of Research in Science Teaching, 36(6), 611-691.
xxiii. Bull, G., Thompson, A., Searson, M., Garofalo, J., Park, J., Young, C., & Lee, J. (2008). Connecting informal and formal learning: Experiences in the age of participatory media. Contemporary Issues in Technology and Teacher Education, 8(2). Retrieved from http://www.citejournal.org/vol8/iss2/editorial/article1.cfm
xxiv. National Science Foundation. (2006). Academies for young scientists (NSF 06-560). Retrieved from http://www.nsf.gov/pubs/2006/nsf06560/nsf06560.htm
xxv. Bevan, B., Michalchik, V., Bhanot, R., Rauch, N., Remold, J., Semper, R., & Shields, P. (2010). Out-of-school time STEM: Building experience, building bridges. San Francisco: Exploratorium.
xxvi. Bevan, B., Dillon, J., Hein, G. E., Macdonald, M., Michalchik, V., Miller, D., Root, D., Rudder, L., Xanthoudaki, M., & Yoon, S. (2010). Making science matter: Collaborations between informal science education institutions and schools. A CAISE inquiry group report. Washington, DC: Center for Advancement of Informal Science Education (CAISE).
xxvii. National Research Council. See note xv.
xxviii. National Science Board. (2010). NSB Report: Preparing the Next Generation of STEM Innovators. Retrieved from http://www.nsf.gov/nsb/publications/pub_summ.jsp?ods_key=nsb1033
xxix. President's Council of Advisors on Science and Technology. (2010). Prepare and inspire: K-12 education in science, technology, engineering, and math (STEM) for America's future. Retrieved from http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-stem-ed-final.pdf
Washington Academy of Sciences
1200 New York Avenue
6th floor
Washington, DC 20005
Please fill in the blanks and send your application to the Washington Academy of
Sciences at the address above. We will contact you as soon as your application has
been reviewed by the Membership Committee. Thank you for your interest in the
Washington Academy of Sciences.
(Dr. Mrs. Mr. Ms.)
Business Address
Home Address
Email
Please indicate your preferred mailing address
Business Home
Present Occupation or Professional Position
Please list memberships in scientific societies - include office held:
DELEGATES TO THE WASHINGTON ACADEMY OF SCIENCES
REPRESENTING AFFILIATED SCIENTIFIC SOCIETIES
Acoustical Society of America - Paul Arveson
American/International Association of Dental Research - J. Terrell Hoffeld
American Association of Physics Teachers, Chesapeake Section - Frank R. Haig, S.J.
American Fisheries Society - Ramona Schreiber
American Institute of Aeronautics and Astronautics - David W. Brandt
American Institute of Mining, Metallurgy & Exploration - Michael Greeley
American Meteorological Society - Kenneth Carey
American Nuclear Society - Steven Arndt
American Phytopathological Society - Kenneth L. Deahl
American Society for Cybernetics - Stuart Umpleby
American Society for Microbiology - VACANT
American Society of Civil Engineers - Kimberly Hughes
American Society of Mechanical Engineers - Daniel J. Vavrick
American Society of Plant Physiology - Mark Holland
Anthropological Society of Washington - Marilyn London
ASM International - Toni Marechaux
Association for Women in Science (AWIS) - Jodi Wesemann
Association for Computing Machinery - Kent Miller
Association for Science, Technology, and Innovation - F. Douglas Witherspoon
Association of Information Technology Professionals - Barbara Safranek
Biological Society of Washington - F. Christian Thompson
Botanical Society of Washington - Emanuela Appetiti
Chemical Society of Washington - Jim Zwolenik
District of Columbia Institute of Chemists - Jim Zwolenik
District of Columbia Psychology Association - David Williams
Eastern Sociological Society - Ronald W. Mandersheid
Electrochemical Society - Robert L. Ruedisueli
Entomological Society of Washington - F. Christian Thompson
Geological Society of Washington - Bob Schneider
Historical Society of Washington, DC - VACANT
Human Factors and Ergonomics Society - Michael Eidelkind
Institute of Electrical and Electronics Engineers, Washington DC Section - Richard Hill
Institute of Electrical and Electronics Engineers, Northern Va. Section - Murty Polavarapu
Institute of Food Technologies - Isabel Walls
Institute of Industrial Engineers - Neal F. Schmeidler
Instrument Society of America - Hank Hegner
Marine Technology Society - Judith T. Krauthamer
Mathematical Association of America - Sharon K. Hauge
Medical Society of the District of Columbia - Duane Taylor
DELEGATES TO THE WASHINGTON ACADEMY OF SCIENCES
REPRESENTING AFFILIATED SCIENTIFIC SOCIETIES
National Capital Astronomers - Jay H. Miller
National Geographic Society - VACANT
Optical Society of America - Jim Cole
Pest Science Society of America - VACANT
Philosophical Society of Washington - Peg Kay
Society of American Foresters - Denise Ingram
Society of American Military Engineers - VACANT
Society of Experimental Biology and Medicine - VACANT
Society of Manufacturing Engineers - VACANT
Soil and Water Conservation Society - Bill Boyer
Technology Transfer Society - Clifford Lanham
Virginia Native Plant Society, Potomac Chapter - VACANT
Washington Evolutionary Systems Society - Jerry L.R. Chandler
Washington History of Science Club - Albert G. Gluckman
Washington Chapter of the Institute for Operations Research and Management - Russell R. Vane III
Washington Paint Technology Group - VACANT
Washington Society of Engineers - Alvin Reiner
Washington Society for the History of Medicine - Alain Touwaide
Washington Statistical Society - Mike Cohen
World Future Society - Russell Wooten
Volume 97
Number 4
Winter 2011
Journal of the
WASHINGTON
ACADEMY OF SCIENCES
Washington Academy of Sciences
Founded in 1898
Board of Managers
Elected Officers
President
Gerard Christman
President Elect
James Cole
Treasurer
Larry Millstein
Secretary
Terrell Erickson
Vice President, Administration
Jim Disbrow
Vice President, Membership
Sethanne Howard
Vice President, Junior Academy
Dick Davies
Vice President, Affiliated Societies
Victor Miriel
Members at Large
Denise Ingram
Michael Cohen
Paul Arveson
Frank Haig, S.J.
Neal Schmeidler
Catherine With
Past President: Mark Holland
Affiliated Society Delegates
Shown on back cover
Editor of the Journal
Jacqueline Maffucci
Associate Editor
Sethanne Howard
Academy Office
Washington Academy of Sciences
Room 113
1200 New York Ave. NW
Washington, DC 20005
Phone: 202/326-8975
Editor’s Comments
I am excited to present to you the Winter 2011 issue of the Journal
of the Washington Academy of Sciences, which focuses on service-
learning. Honestly, before this issue, I had never heard the term service-
learning. It is a hands-on, integrated approach to learning that connects
students with their community and fosters a cooperative partnership for
reciprocal learning; both the community and the student benefit.
This issue focuses on the service-learning program that is on-going
at George Washington University. The first article introduces the theory
and history behind service-learning, to get the reader familiar with the
subject at hand. The second article illustrates a specific example of the use
of service-learning in an undergraduate classroom at GWU. Finally, the
third article introduces the reader to the use of service-learning
internationally, emphasizing both the benefits and the challenges in
implementing such a program abroad.
My hope is that this issue will familiarize you with service-
learning, and perhaps inspire you to consider service-learning in your own
lives as academics, scientists, and community members. While reading
these articles, I was reminded of my first job after graduating from my
undergraduate institution. It wasn’t until I applied my training to hands-on
activities (i.e., my first job out of college) that I really solidified my
understanding of these principles. Service-learning is a means of
introducing this hands-on application earlier, to enhance the student’s
learning experience, as well as foster a lasting relationship with
community interests. Although we focus here on undergraduate and
graduate applications, this is a learning technique that can be applied at
any age.
Further information can be found at: George Washington University’s
Center for Civic Engagement and Public Service
http://www.gwu.edu/explore/campuslife/studentinvolvement/serviceengagement/servicelearning
and The National Service-Learning Clearinghouse
http://www.servicelearning.org/
Enjoy!
Jacqueline Maffucci
INSTRUCTIONS TO AUTHORS
1. Manuscripts should be in Word (Office 03/07/10) and not PDF.
2. They should be 6,000 words or fewer (exceptions may be made by
the Editor). If there are 7 or more graphics, reduce the number of
words.
3. Graphics (photographs, drawings, figures, tables) must be in
graytone only (no color accepted), and be easily resizable by the
editors to fit the Journal’s page size. Do not wrap text around the
graphics.
4. References (and bibliography, if included) may be in the format
generally acceptable for the disciplinary or professional field
represented by the manuscript. They must be accurate, complete,
and consistent in format throughout the paper.
5. Include both an e-mail address and a postal address for the author
(or primary author) including title and institutional affiliation if
any.
6. Papers are peer reviewed.
7. Send manuscripts by e-mail as an attachment, or on a CD, to
Journal@washacadsci.org or directly to the editor, Dr. Jacqueline
Maffucci - iamaffucci@gmail.com. Hard copy cannot be accepted.
Manuscripts can be accepted by any of the Board of Discipline
Editors.
Emanuela Appetiti - anthropology at eappetiti@hotmail.com
Elizabeth Corona - systems science at elizabethcorona@gmail.com
Jim Eigenreider - science education at jim@deepwater.org
Terrell Erickson - environmental natural sciences at terrell.erickson1@wdc.usda.gov
Mark Holland - botany at maholland@salisbury.edu
Kiki Ikossi - engineering at ikossi@ieee.org
Carol Lacampagne - mathematics at clacampagne@earthlink.net
Raj Madhaven - engineering at raj.madhaven@nist.gov
Kent Miller - computer sciences at kent.l.miller@alumni.cmu.edu
Jean Mielczarek - physics and biology at mielczar@physics.gmu.edu
Robin Stombler - health at rstombler@auburnstrat.com
Alain Touwaide - history of medicine at atouwaide@hotmail.com
Steve Tracton - atmospheric studies at straction@hotmail.com
Service-Learning as a Method of Instruction
Stuart Umpleby
The George Washington University
Abstract
Service-learning is a new educational method that is expanding the involvement
of universities in their neighboring communities. It also tends to promote the
civic and moral development of students. This paper explains what service-
learning is and how it is consistent with the history of universities in general and
particularly universities in the U.S. with their focus on applied knowledge. The
paper describes how service-learning has developed in the U.S. and how it is
practiced in a School of Business. The article presents some important lessons
learned from conducting service-learning at The George Washington University.
Service-Learning Defined
Service-learning is now being practiced at many levels of
education in the United States. Common service-learning activities include
the following: Middle school students (11-14 years old) may help to clean
up a part of the city and then write essays about keeping the city clean or
the importance of caring for the environment. High school students (15-18
years old) sometimes help to deliver meals to elderly or terminally ill
people and then write essays on what life is like for people in different
stages of life. Undergraduate and graduate students in the School of
Business at The George Washington University often do group projects
with local organizations. Students in management work in teams of 3 to 5
as consultants to non-governmental organizations (NGOs), government
agencies or businesses. These projects are the "laboratory" part of a
management course. The client is a second instructor. Students have an
opportunity to observe an organization while helping the organization to
improve its processes. Students write a paper in which they describe the
work they did and use as many concepts from the course as they can,
thereby connecting the concepts in the textbook with their personal
experiences.
Service-learning can be defined as “service performed by students,
aimed at attending to a real need of the community, and oriented in an
explicit and planned way to enhance the quality of academic learning.”
(Tapia, et al., 2006, p. 68) A service experience should be personally
meaningful and beneficial to the community. In addition, there should be
clearly identified learning objectives, student involvement in selecting or
designing the service activity, a theoretical base, integration of the service
experience with the academic curriculum and opportunities for student
reflection. (Furco and Billig, 2002, pp. 7-8)
Individuals may cognitively process knowledge in one of four
ways: personal experiences, reflective observations, abstract
conceptualizations, or active experimentations. Based on their
personalities, individuals may prefer one learning style over another. A
major strength of service-learning projects is that they contain both
personal experiences and reflective opportunities. Thus students are likely
to be responsive to service-learning activities regardless of their learning
style. (Lester, et al., 2005, p. 279)
Three reasons can be given for encouraging service-learning: aiding
the community, more effective learning, and moral development.
Advocates of service-learning argue that a key value of service lies in its
ability to foster heightened moral awareness. Service-learning projects
expose students to community needs. Service activities are an opportunity
to infuse the message that organizations can “do well by doing good.”
Service-learning experiences therefore can be seen as an instructional
technique that encourages individuals to be socially responsible and
engage in moral actions. (Lester, et al., 2005, p. 279)
Service-learning is not simply a pedagogy. Rather, service-learning
is a means to empower students and educational institutions to become
more aware of the needs of the communities of which they are a part and
to become engaged and civically active in mutually beneficial ways.
Community-based service that relates to course and curricular content is
becoming increasingly embedded in curricula. Evidence is beginning to
show that service-learning has not only begun to transform education, but
it also has transformed the lives of many of the students involved. (Casey,
et al., 2006, p. xi)
An Historical Commitment to Community Service
Universities have a long history of making important contributions
to their surrounding communities. European universities emerged in
decentralized medieval society and became more widespread in the
fifteenth and sixteenth centuries due in part to actions by city authorities
and regional authorities. Universities were supported because of the
recognition that the growth and dissemination of knowledge was of value
to the community. (Florax, 1992, pp. 275-276)
The U.S. has a tradition of people organizing efforts to serve public
interests. In his famous nineteenth century study of American society, de
Tocqueville noted Americans’ habit of forming voluntary associations to
advance their own and the community’s interests. De Tocqueville
suggested that such associations were crucial to the vitality of American
society, pointing out that their activities served to shape the participants’
recognition of the coincidence of personal and public interest, which he
called "the principle of interest rightly understood." (Pritchard, 2002, p. 4)
Universities in the U.S. have been said to engage in the three
activities of education, research and service. The role of universities in
providing service to society beyond simply educating the next generation
has a long history. In 1862, the U.S. government passed the Morrill Act,
which established agricultural and engineering extension services at state
universities. Under this act, the federal government gave land to the states.
The states were to sell the land and use the money to buy stocks that
would generate perpetual income to support the universities. The
universities were to teach students, conduct research on improved
methods, and communicate the results of the research directly to farmers
and businessmen through “extension agents.” Extension agents were
similar to traveling salesmen for new agricultural and engineering
methods. Hence, the activity of service was changed and universities took
a more active role in providing service to society.
Students and faculty members at U.S. universities have been doing
volunteer work with community organizations for many years. For
example, Russell Ackoff, his colleagues and students worked not only
with business clients but also with community leaders in the
neighborhoods near the University of Pennsylvania. (Ackoff, 1974) These
consulting activities were discussed in class and were part of the
curriculum. However, the rapid growth of service-learning as a teaching
method is rather recent. The growth of service-learning in the U.S. can be
described as passing through several stages.
1. Students have long worked in groups to complete a large
assignment. This method of learning is a step beyond lectures,
exams, and term papers.
2. At least by the 1970s some students were doing projects which
were not just hypothetical projects or laboratory exercises. Rather,
students worked on projects with real clients with real problems.
3. The term “service-learning” was invented and defined as a
pedagogical method.
4. Books and articles on service-learning began to appear in the
educational literature.
5. Articles on service-learning began to appear in discipline-oriented
journals. Hence, publications about service-learning spread beyond
schools of education to the journals of other disciplines.
Increased attention to service in the educational curriculum arises
at a time when modern industrial economies have become more
knowledge intensive. Universities are important social institutions that
contribute to economic growth. So the effort to combine education,
research, and service, rather than keeping them separate, arises in part
from a desire to couple the knowledge-creating activities of the university
more closely to the community.
“From community colleges to major research universities, relations
to surrounding communities are central to the higher educational agenda.
The institutions of higher education profiled in this book are using various
strategies to revitalize local neighborhoods while concurrently fulfilling
some aspect of their educational mission.” (Maurrasse, 2001, p. 181)
Service-learning is one example of this heightened commitment to
community service.
Another reason for the spread of service-learning is the
motivation of faculty members.
I started assigning group projects with real clients in 1978, soon
after arriving at GW. Although there certainly is a role for textbook
problems, I feel that students learn more from working on real problems
than on hypothetical problems.
One indication of the spread of service-learning in the U.S. is the
growth in membership of Campus Compact, which was founded in the
mid 1980s by the presidents of three universities - Brown University,
Georgetown University and Stanford University. Their intent was to
persuade the presidents of other universities to encourage faculty, students
and staff to engage in service activities. The "compact" is a statement that
university presidents are asked to sign. If the president signs then that
university becomes a member of Campus Compact (www.compact.org)
and becomes publicly committed to engaging in service-learning
activities. Figure 1 shows the growth in the number of university
presidents who have signed the compact since 1985.
Figure 1: The Growth of Campus Compact
The American Emphasis on Applied Knowledge
Service-learning can be seen as an extension of a long-standing
commitment in the U.S. to practical knowledge. Some countries
emphasize theoretical knowledge to the neglect of applied knowledge.
Richard Feynman, who won a Nobel Prize in physics, described his
experience of teaching in Brazil. He was puzzled by the observation that
his students could answer some questions quickly and accurately, but
other questions, which seemed the same to him, they could not answer at
all.
After a lot of investigation, I finally figured out that the students had
memorized everything, but they didn’t know what anything meant.
When they heard “light that is reflected from a medium with an
index,” they didn’t know that it meant a material such as water...
Everything was entirely memorized, yet nothing had been translated
into meaningful words. (Feynman, 1984, pp. 212-213)
Service-learning provides a way of relating textbook assignments
and classroom discussions to personal experiences.
Thomas Ehrlich, former president of Indiana University and
former chair of Campus Compact, has described a debate over the nature
of a liberal education which occurred in the U.S. in the 1930s. On one side
was Robert M. Hutchins, president of the University of Chicago and his
colleague Mortimer Adler. They argued for focusing the undergraduate
curriculum on a selection of “Great Books.” They claimed that the study
of the works of major Western thinkers would lead to a set of principles
covering all aspects of human life.
On the other side philosopher John Dewey argued that this claim
was dangerous because the notion of fixed truths requires a seal of
authenticity from some human authority, which leads away from
democracy and toward fascism. He also argued that purely intellectual
study should not be separated from practical study or from the great
practical problems confronting society.
Such separation can only weaken the intellect and undercut the
resolution of those problems. Study Aristotle, Plato, Aquinas, and the
others, Dewey urged, but recognize that contemporary learning from
their writings requires the application of their insights to contemporary
issues... At the time of the debate and for most of the next half-century,
leaders in higher education generally concurred that Hutchins won the
argument. The premise of service-learning is, however, that Dewey was
right and Hutchins was wrong. Service-learning is the various pedagogies
that link community service and academic study so that each strengthens
the other. The basic theory of service-learning is Dewey’s: the interaction
of knowledge and skills with experience is key to learning. (Ehrlich, 1996,
pp. xi-xii)
Service-learning is one of several trends in pedagogy that together
mark a shift in undergraduate education from an emphasis on teaching to
one on learning. Among the other trends are a focus on problems rather
than disciplines, an emphasis on collaborative rather than individual
learning, .... and careful articulation of learning outcomes coupled with
assessment of learning success. (Ehrlich, 1996, p. xiii)
Robert Coles (1993) makes the case for the impact on moral
character that derives from community service in conjunction with guided
reflection, a necessary ingredient of service. Service-learning can enhance
interpersonal skills that are key in most careers - skills such as careful
listening, consensus building, and leadership. Dewey wrote that education
should be the primary means of social progress, not just a means to
develop the intellect for its own sake. Democracy depends on an involved
citizenry. Lee Shulman suggested in 1991 that service learning may
become the “clinical practice of the liberal arts.” (Ehrlich, 1996, p. xv)
The Benefits of Service-Learning
Service-learning is now being studied from several points of view,
depending on the interests of researchers. Key topics that are being
discussed concern implementation of service-learning in curricula,
methods of implementation, establishment of collaboration with the
community, and benefits of service-learning for all parties (students,
faculty, community and educational institution).
The motivation of faculty members to adopt service-learning as a
method of instruction has been studied by Barbara Holland (2003). She
found that there are different sources of faculty motivation. Faculty
members might be motivated by personal values, values that inspire their
commitment to a life of service, the success of their discipline and the
quality of their teaching and research. Hence, service-learning and
collaboration with the community can be a result of either individual or
professional goals.
Measuring the outcomes of service-learning for the various parties
has been attempted by many authors. In their studies, they pay most
attention to the outcomes for students. The most difficult to measure or
identify are the outcomes for educational institutions. The benefits for the
community are obvious. Students do work that would increase the
expenses of community organizations if the work were done by employees
or professionals who were paid for their work. Clearly, both students and
client organizations benefit. Some participants benefit more than others
but certainly implementation of service-learning as part of a course will
have positive impacts on students, faculty, community and educational
institutions.
Janet Eyler and her colleagues have summarized the research on
service-learning in higher education over the past few years. Among their
findings, each of which is annotated with references, are the following:
• Service-learning has a positive effect on student personal
development such as sense of personal efficacy, personal identity,
spiritual growth, and moral development.
• Service-learning has a positive effect on interpersonal
development, the ability to work well with others, and leadership
and communication skills.
• Service-learning has a positive effect on reducing stereotypes and
facilitating cultural and racial understanding. However, a few
studies suggest that service-learning may subvert as well as
support course goals of reducing stereotyped thinking and
facilitating cultural and racial understanding.
• Service-learning has a positive effect on sense of social
responsibility and citizenship skills.
• Students and faculty report that service-learning has a positive
impact on students’ academic learning.
• Students and faculty report that service-learning improves
students’ ability to apply what they have learned in the “real
world.”
• Service-learning participation has an impact on such academic
outcomes as demonstrated complexity of understanding, problem
analysis, critical thinking, and cognitive development.
• Students engaged in service-learning report stronger faculty
relationships than those who are not involved in service-learning.
• Service-learning improves student satisfaction with college.
• Students engaged in service-learning are more likely to graduate.
• Faculty using service-learning report satisfaction with quality of
student learning. They report commitment to research. They
increasingly integrate service-learning into courses.
• Colleges and universities report that community service positively
affects student retention and enhances community relations.
• Communities report satisfaction with student participation and
enhanced community relations. (Eyler, et al., 2003, pp. 15-19)
Implementation of Service-Learning
Service-learning in the curriculum can be implemented in several
ways. (Enos and Troppe, 1996) Service-learning can be a fourth-credit
option (add a fourth credit to a regular three-credit course), a stand-alone
module (three credits) or part of a normal course. In terms of its place in
the curriculum, service-learning can be incorporated into an introductory
course, a required course, or an elective course. Service-learning can be
included as course clusters, as capstone projects, etc. Each university
needs to adjust the implementation of service-learning depending on the
field and the abilities of students. Service-learning can be implemented in
every field but not in every course.
Establishing partnerships between a university and the community
is very important. Partnerships are usually established in three stages:
designing partnerships based on values, building collaborative working
relationships among partners, and sustaining the partnerships. (Torres and
Schaffer, 2000) In many service-learning activities students work as
individuals on tasks arranged by leaders of non-governmental
organizations (NGO) and university administrators. However, in graduate
management classes students often do group projects with organizations
where one student is employed.
Service-learning at The George Washington University
Service-learning is widely practiced at The George Washington
University. In the past few years, more than 30 faculty members in 17
departments have integrated service-learning into their course offerings.
(Benton-Short and Morrison, 2007, pp. 4-5) The Center for Civic
Engagement and Public Service is the clearinghouse for service-learning
activities at The George Washington University. The Center’s staff
members work to support service-learning across all academic
departments by providing resources, support, and information to faculty,
students, administrators and community partners. The staff has established
more than 60 campus-community partnerships with local schools,
agencies, and community organizations. Faculty engaged in service-
learning may access the Center as a resource for identifying a community
partner for service-learning projects. (Benton-Short and Morrison, 2007,
pp. 8-9)
Benefit to Students
By doing group projects students experience the psychological
sequence of working in a team - forming, storming, norming, and
performing. (Tuckman, 1965) Students are able to apply what they have
learned in the classroom. They gain experience with organizations and the
problems they face. They learn not only to solve well-formulated textbook
problems but also to identify ill-defined problems in an organizational
setting. Students gain confidence in their ability to solve organizational
problems.
The Assignment
Service-learning courses contain key elements that set them apart
from traditional classes. The main differentiator of a service-learning
course is that part of the course occurs outside of the classroom and in the
community. Service-learning courses possess a greater amount of
complexity in terms of the number of stakeholders involved and the
quality, resonance, and nature of knowledge transfer and competence
building. Within a service-learning course, a student’s learning will go
beyond the course subject matter to include capacity building, team work,
leadership, communication and citizenship. (Faculty Service-Learning
Toolkit, 2007)
Students in the School of Business at GW are doing service-
learning projects as part of some courses. The assignment is to improve
the functioning of some organization. The students use the knowledge and
methods that they have acquired from the textbook and classroom
discussions. Before the students start to work on a project, the professor
provides specific guidelines and recommendations on how to do the
project. These guidelines help students do the project effectively. See the
website http://www.gwu.edu/~rpsol/service-learning.
Students also receive instructions for working on the project
effectively and achieving the project goals. At the end of the semester,
when students finish the project, they prepare a final report which is
presented both to the client and in class in front of their classmates.
Students are given instructions on how to prepare the final report. The
client completes an evaluation form and sends it to the instructor. The
guidelines help students to develop an appropriate path for doing the
projects so they do not lose time. The guidelines also make the projects
more comparable.
Types of Projects
From 1992 to 2007 my students worked on 70 projects for
different clients - local and state government, nonprofit organizations,
businesses, and universities. The projects can be classified according to
the type of organization. The distribution of the projects in terms of
numbers and percentages is listed in Table 1. (Levkov and Umpleby,
2009)
Table 1: Clients of student projects
Students used their skills and knowledge to do different kinds of
tasks in their project activities. Short descriptions of a few projects show a
wide range of activities:
• An international non-governmental organization needed help
finding specific solutions to improving processes in the areas of
marketing, strategic management, and human resources.
• Students worked with a U.S. government agency to incorporate
improvements into the new budget development process for the
fiscal year 2008 budget cycle.
• A department of the city government sought recommendations for
the proposed restructuring of the information systems department
and suggestions for how the staff could keep their technical skills
current.
• Students worked with a U.S. government agency to find the best
governance model for managing a web portal.
• Students worked with an office at GW to create a mentor program
for incoming, international students pursuing an undergraduate
degree.
Choosing a Project
There are several possible ways that students can find a project.
Students can be assigned a project by the instructor, they can choose a
project from a list of possible projects or they can find a project through
their work place or through friends. In my classes, students usually work
with an organization where one student is employed. I also suggest
possible projects and clients.
Depending in part on the class the students do a wide variety of
projects, for example improving office procedures, creating a cross-
cultural training program, revising personnel procedures, conducting a
survey of customers or employees, building a website, or guiding a
strategic planning process.
Integration of the Internet into Service-Learning
Email has made it much easier for students to work together on a project.
Since the internet is now worldwide, students are choosing to do projects
with clients in other countries. Usually the client is a friend or relative of a
student in the group. Here are a few examples of international projects.
• One member of a group was a Korean student who had a brother
who worked in Mexico. The brother’s firm made auto parts at a
factory in Mexico. The Korean managers were having difficulty
communicating with the Mexican workers due to cultural
differences. So, in a Cross-cultural Management class a group of
students created a training program for the Korean managers and
Mexican workers, so they would better understand the cultural
differences between Korea and Mexico.
• A group of students in an organizational behavior course worked
with Somali television. Somalia was a failed state. For several
years it had had no government, because the Somali government
officials had moved to Kenya to avoid the chaos in Somalia. But
many organizations continued to function, including Somali
television. The owner lived in London. My students worked with
the owner via email on two projects. First, they found a code of
journalistic ethics, which was used in training the journalists in
Somalia. Second, they obtained an organization chart from a
television station in Washington, DC, and sent it to the owner in
London along with recommendations on how to organize the
people working at the station in Somalia.
• A student from Ethiopia shared the class notes on quality
improvement methods with the Ethiopian government official in
charge of quality improvement in Ethiopia. Via email she
explained the class notes and provided additional books and
articles.
Lessons Learned
The work of the students is invariably rated very highly by the
clients. What students are able to accomplish in one semester is quite
impressive. The most frequent suggestion from clients is that students
should work with them more closely.
When projects are conducted by graduate students, usually they
decide to work with an organization in which one or more students are
working. Several benefits result when the student chooses the client:
• Increased trust between the students and the client
• Better collaboration
• More knowledge of the organization and the processes and
problems within the organization
• Less difficulty defining and analyzing the problems and
developing solutions
• A greater likelihood that the recommended changes will be
implemented, because essential support for implementation
continues with the student employee.
We have also learned that projects work better when the person
desiring that the project be done is the same person the students work
with. In the DC government projects we found that sometimes a superior
wanted the project to be done, but the students worked with a person lower
in the chain of command. In these cases the immediate client often seemed
to feel that the students were there to observe and to report to the higher
level manager. This perception sometimes led to non-cooperation, which
interfered with completing the project in a timely fashion.
Conclusion
In the United States service-learning has proven to be an effective
means both for education and for community development. Service-
learning is a new pedagogical method which is spreading rapidly in the
U.S. and in other countries. Research shows that it improves the
effectiveness of education and has a beneficial effect on students’ sense of
social responsibility. The work that students do is beneficial to
neighboring communities and organizations. Service-learning aids
learning, is a way for universities to contribute to their communities, and
helps to instill democratic values.
References
Ackoff, Russell L. (1974). Redesigning the Future: A Systems Approach
to Societal Problems. New York: Wiley.
Benton-Short, Lisa and Emily Morrison. (2007). “Service-Learning
Advisory Board Report on Service-Learning,” A working paper. The
George Washington University, Washington, DC.
Casey, Karen McKnight, et al. (2006). Advancing Knowledge in Service-
Learning: Research to Transform the Field. Greenwich, CT:
Information Age Publishing.
Coles, Robert. (1993). The Call of Service: A Witness to Idealism. Boston:
Houghton Mifflin.
Ehrlich, Thomas. (1996). "Foreword," in Jacoby, Barbara (ed.). Service-
Learning in Higher Education: Concepts and Practices. San
Francisco: Jossey-Bass, pp. xi-xvi.
Enos, Sandra L. and Marie Troppe. (1996). “Service-Learning in the
Curriculum,” in Barbara Jacoby and Associates (eds.). Service
Learning in Higher Education: Concepts and Practices, San
Francisco: Jossey-Bass.
Eyler, Janet S., et al. (2003). “At a Glance: What We Know About the
Effects of Service-Learning on College Students, Faculty, Institutions,
and Communities, 1993-2000,” in Campus Compact. Introduction to
Service-Learning Toolkit: Readings and Resources for Faculty.
Second Edition. Providence, RI: Brown University, pp. 15-19.
“Faculty Service-Learning Toolkit,” (2007). Community Campus
Partnership for Health and National Service-Learning Clearinghouse.
Feynman, Richard P. (1984). "Surely You're Joking, Mr. Feynman!":
Adventures of a Curious Character. New York: W.W. Norton and
Company.
Florax, Raymond. (1992). The University: A Regional Booster? Economic
Impacts of Academic Knowledge Infrastructure. Brookfield, VT:
Ashgate Publishing.
Furco, Andrew and Shelley H. Billig (eds.). (2002). Service-Learning:
The Essence of the Pedagogy. Greenwich, CT: Information Age
Publishing.
Washington Academy of Sciences
15
Holland, Barbara. (2003). "Factors and Strategies that Influence Faculty
Involvement in Public Service,” in Campus Compact. Introduction to
Service-Learning Toolkit: Readings and Resources for Faculty.
Providence, RI: Brown University, pp. 253-256.
Lester, Scott W., et al. (2005). "Does Service-Learning Add Value?
Examining the Perspectives of Multiple Stakeholders," in Academy of
Management Learning and Education, Special Issue: Service-
Learning, Vol. 4, No. 3, pp. 278-294.
Levkov, Nikola and Stuart Umpleby. (2009). "How Service-Learning is
Conducted in a School of Business,” in CEA Journal of Economics,
Vol. 4, No. 2, pp. 26-34, Center for Economic Analyses, Skopje,
Macedonia.
Maurrasse, David J. (2001). Beyond the Campus: How Colleges and
Universities Form Partnerships with Their Communities. New York:
Routledge.
Pritchard, Ivor A. (2002). “Community Service and Service-Learning in
America: The State of the Art,” in Furco, Andrew and Shelley H.
Billig (eds.). Service-Learning: The Essence of the Pedagogy.
Greenwich, CT: Information Age Publishing.
Tapia, Maria Nieves, et al. (2006). “Service-Learning in Argentina
Schools”, in Casey, Karen McKnight, et al. (eds.). Advancing
Knowledge in Service-Learning: Research to Transform the Field.
Greenwich, CT: Information Age Publishing.
Torres, Jan and Julia Schaffer. (2000). “Benchmarks for
Campus/Community Partnerships,” in Campus Compact. Introduction
to Service-Learning Toolkit: Readings and Resources for Faculty.
Providence, RI: Brown University, pp. 101-104.
Tuckman, Bruce. (1965). “Developmental Sequence in Small Groups,” in
Psychological Bulletin, 63(6), pp. 384-399.
Preparing "academic citizens": Service-learning in Research
Universities
Phyllis Mentzell Ryder
George Washington University
Abstract
Service-learning is often promoted as a way of increasing civic engagement and
concern for public life. While these are excellent goals, their prominence may
obscure the ways that service-learning benefits students as burgeoning scholars.
I argue that conducting research with and for community organizations helps
students understand that academic work is accountable to both public and
academic communities. At the same time, comparing the community-building
strategies of both public and academic communities introduces students to the
rhetorical moves by which scholars signal their disciplinary affiliations. A
reflexive approach to service-learning emphasizes students' roles as scholars as
much as it emphasizes their role as citizens.
Introduction
As a philosophy of teaching embraced across many disciplines, service-
learning is employed to promote students’ civic and academic engagement.
Studies of K-12 service learning programs show that students who participate in
service-learning are more politically engaged, more tolerant of others, and even
fifteen years later are more likely to remain active in community work and to vote
than those who do not participate in such classes (Corporation for National and
Community Service 2007). Moreover, students develop better problem-solving
skills, understand complex ideas more fully, feel more connected to their schools,
and receive higher grades on content area tests (Corporation for National and
Community Service 2007). When college students take service-learning classes,
they have better attitudes, skills, and more understanding of social issues (Eyler,
Giles and Braxton, 1997).
As a strategic institutional initiative, the growth of service-learning can be
understood as a response to the broader public portrayal of “the academy” as a
distant, elite place that is out of touch with the “real world.” In times when public
budgets are highly scrutinized, research pursuits that are seen as too cerebral are
regularly ridiculed by those who would have universities focus on pragmatic
concerns. Articles written for specialized academic journals are criticized for their
dense, academic language and convoluted sentences; the complexity of academic
thought is shrugged off as elitist obfuscation. Legislatures and businesses call on
higher education to focus on what matters to them, such as training students to be
future workers and entrepreneurs. Connecting academic work with community
work can effectively counter some of those concerns by making visible the
usefulness of fields that aren't readily seen as "pragmatic." And indeed, many
students value service-learning courses precisely because of the connections they
can make between academic content and the "real world."
To break down this conceptual division between "academic" and "public"
work, service-learning helps extend the definition of "public work." Often we talk
about "public" life as that which happens outside of the university; we profess
that what we are teaching "in here" can better serve students and communities
when they work "out there." But I think we present a disingenuous view of the
relationship between academic and public life when we treat the two as so
completely separate. If we reflect more fully on the intersections of these arenas
and use that overlap, we will help our students understand that they can draw on a
similar set of intellectual, community-building tools to navigate both worlds.

A longitudinal study of students at Harvard University reveals that when
students viewed their writing and research projects as activities with a purpose
beyond the specific class, they grew more as writers and felt more engaged with
their studies (Sommers and Salz 2004). Students were more receptive to feedback
and more engaged in their work. My approach to service-learning tries to harness
this motivation and to provide a framework through which students can look at
work both inside and outside the academy and see how it draws on similar
rhetorical approaches and meets similar needs. The goal is to teach academic
writing in such a way that students recognize it as public work.
In this paper I first offer a precautionary story about setting up service-
learning partnerships: I argue that professors need to pay special attention to what
communities can teach us. Then I explain the common rhetorical features of
academic articles and the publications of community organizations: both scholars
and community organizations draw on rhetorical strategies to build their public
support. Both work hard to convince their audiences that their methods of making
and sharing knowledge are valuable. Both make implicit arguments about what is
worth paying attention to, what an audience should do, and how people in the
audience are connected to each other. After briefly explaining these rhetorical
strategies, I will show how they appear in both a community document (a "report
card" compiled by two local environmental organizations about the state of the
Anacostia River) and a scholarly article (a chemistry article that studies how
PCBs flow down the Anacostia River). Because students in service-learning
courses are surrounded by both academic and public ways of working, talking,
and writing, they have an opportunity to study how people indicate these
underlying values within both kinds of writing. When service-learning professors
show students this overlap, we not only prepare them for writing in both places,
but we also give them tools to analyze and reflect on the values that are inherent
in the writing they may be asked to do in either place. We can show them how to
move back and forth successfully and with integrity.
Epistemological Questions: Who makes knowledge and how?
The term “service-learning” designates many different relationships
between classrooms and communities. Sometimes students perform direct service
with a community organization (serving as tutors for an after-school program, for
example, or serving food at a shelter). Sometimes students take on research and
writing tasks commissioned by an organization (conducting community surveys
about health issues or local development; creating marketing materials or
researching best practices). Sometimes students work individually; sometimes the
whole class takes on a project together. In this article, I work with a model in
which students conduct research (either individually or collaboratively) on behalf
of the organization. This research may be commissioned by the organization, or it
may be developed in response to a need that the student identifies after working at
the site. This model of service-learning, sometimes called community-based
research, requires the student to collaborate closely with the community
organization to identify parameters, understand local context, and gain access to
community resources.^
If we and our students enter into a commitment to produce research for or
with a community organization, that community relationship must be approached
mindfully. In particular, we need to be careful about how we understand what
counts as “knowledge.” The tension between academic and community
knowledge is constant. Faculty and students have the luxury of time and resources
to conduct research that nonprofit staff may not be able to carry out, but members
of community organizations have a much fuller understanding of the context and
history that will affect what knowledge is useful and appropriate to their settings.
As we design service-learning courses, we can include units that investigate how
community organizations draw on academic scholarship in developing and
carrying out their programs, and units that examine where academic scholarship is
supplemented — or even corrected — by knowledge-on-the-ground. We should
regularly emphasize the question of who makes knowledge and how. What kind
of knowledge is valued where? What is knowledge supposed to do? What
responsibilities and obligations do knowledge-makers have?
Understanding the relationship between academic perspectives and
community perspectives requires careful listening and preparation. When I first
started teaching service-learning courses, I made the mistake of organizing my
course around academic theories that I expected would be illustrated through the
community work my students were engaged in. My course drew on theories of
participatory democracy and rhetorics of social protest; we investigated the
qualities of direct democracy and how citizens might be empowered to rally for
change. Our readings were about grassroots democracy that pressured government
for policy and funding changes. The community organizations I worked with,
though, drew on different models of social change: they developed on-going,
measurable projects that might transform a neighborhood slowly. They developed
after-school tutoring programs for middle school students; they brought people
together to clear hiking trails and plant gardens. Because of the design of my
course, my students left the semester feeling as if these community organizations
were adequate but not particularly important. They didn’t see community
organizations as vital to democratic life. As a teacher, I had to pause and
reconsider. Instead of beginning with academic theories, I had to begin with the
community organizations themselves and understand that democratic vision,
which I then built into my course. Whereas my first course design reinforced the
hierarchy of the academy judging the community, my second course design began
by assuming the community had the knowledge and the theory. Through my class,
the students and I worked to understand their perspective and integrate it with the
academic literature. At the same time, we came to see how community
perspectives pushed and challenged academic scholarship.
Another misfire offers a second cautionary reminder. The activities we
develop together and the way we prepare our students to enter into those
partnerships need to attend to the particular historical and contemporary tensions
in a place. David Coogan (2006) describes a service-learning writing class that
was invited to help increase parental involvement in Chicago’s public schools. He
observes that the class chose an ineffective rhetorical approach because they had
not attended carefully enough to the historical dynamics of the community’s
experiences. They had not understood what kind of change the parents felt was
possible. What Coogan’s example makes clear is that all public work is contested
work: all communities struggle to define who they are and how they come
together.
The admonition to pay attention to local context applies to service-
learning partnerships in all academic fields: chemistry students working with
environmental organizations to study waterways, education students working with
tutoring programs to boost academic achievement — we all must prepare for our
partnerships by understanding how past government, community, and cultural
dynamics might affect the work we undertake. At the same time, we must
consider how the context of the university, with its own agendas and
epistemological ideals, impacts the relationships. Making all of these components
visible to students is an exciting and important aspect of service-learning courses.
Rhetorical Moves of Public Making
Mindful of my admonition to learn from the communities we work with, I
want to show that analyzing public writing can help us see academic writing in
new ways. Because the methods and audiences for academic writing can be so
specialized, it’s easy to assume that academic writing is distinct and isolated from
any writing that would be used in community organizations. An academic paper
that analyzes how specific chemicals flow down a river is a very different sort of
document from a brochure produced by a community organization working to clean up
that river. However, the chemistry article, like the brochure, is a response to a
particular rhetorical situation. Neither the article nor the brochure is a simple
transcription of fact. Just as the brochure is crafted to draw on the values of its
local audience, the academic article is carefully crafted to demonstrate the
author’s understanding and affirmation of the values of the academic community.
Both authors have to persuade their audiences that what they say is important and
worth reading, that the author is the most qualified person to learn from, and that
the author has taken into account the values and expectations of that audience.
Moreover, both texts project how the audience is expected to advance this work,
to continue what it takes to find the answers to the problem posed. In this sense,
academic writing is a form of public writing.²
I find that I can better name and explain these rhetorical strategies when
my students and I begin by looking at public writing, especially the documents
produced by nonprofit community organizations. Community organizations make
their mission and vision very explicit; in websites and pamphlets, we can readily
see overt statements about their purpose and methods. Moreover, it’s easy for
students to understand why community organizations need to regularly convey
their worldview — their sense of how the world is and how it should be — and why
community organizations need to assert that they have the capacity and the right
methods for getting to that vision. Beginning with community documents, we can
then begin to see the similar rhetorical strategies in academic work.
We can break down the public-making strategies by looking at several
components of public writing: purpose (what creates the need for this work?),
agency and capacity (who can do the work?), and interdependence (with whom do
they do this work?). Looking at each of these components of writing can help us
uncover the worldview that the text is advancing. I offer a brief glossary of these
concepts here.
Purpose: Community organizations and academics alike have to signal the
reasons for doing this work in this place at this time; in so doing, they delineate
what they see as possible and why it matters. Community organizations are
explicit about their purpose in mission statements; these are often incorporated
into their publications in some way. Those mission statements define the problem
or ideals that motivate them (sometimes emphasizing the urgency of a bad
condition, sometimes highlighting a vision they strive toward). All of the
documents that the community organization produces flow from this mission and
vision.
At the university, the mission and vision are not made quite so explicit.
Some college departments and universities publicize their mission statements, and
these ideals might be conveyed in strategic plans or presidential addresses. If we
consider that the most frequent publications of universities are academic articles
and books (a value built into universities when such publication is a requirement
for tenure or promotion), then the purpose of university work is to create new
knowledge. In academic articles, we find, the “problem” being addressed is a gap
in knowledge, a misunderstanding of how the world works. Academic writing is
motivated by a need to correct or advance what has come before.
In both communities, the purpose for writing is defined in such a manner
that it suggests who can do the work and what actions need to take place.
Agency and Capacity: One of the most critical steps in bringing people
together as a public who will take action together is to convince the audience that
they are the ones who can do the work, and that they have the right tools to do so.
Nonprofit organizations might define agency and capacity in various ways.
Sometimes they highlight civic power (the importance of voting and monitoring
the public policy decisions of those in power); sometimes they highlight
consumer power (boycotting, pressuring corporations to change their behaviors);
sometimes they highlight do-it-yourself community power (such as removing
trash from a river, creating after-school programs, or putting on a community
fair).
Academics locate their agency and capacity within their disciplinary
research methods, methods designed to ensure objective and thorough analysis.
The author convinces the audience that the methods are appropriate for this
particular question under review. At the same time, the author invites researchers
to continue to investigate the problem and the methods for answering it. In
providing an answer to the gap in knowledge, the researcher contributes to an on-
going scholarly conversation, and invites further reflection on the methods or the
outcomes of the research. In this way, academic writing perpetuates its own vision
of what matters: it perpetually demands that scholars create new, more, better
knowledge. Just as a community document rallies its audience to take specific
action, so the scholarly article serves to motivate and extend the work of the
academy.
Interdependence: Public writing speaks to multiple audiences: it addresses
individuals who are part of a broader public, equally invested in the issue at hand.
It has to signal to the reader that he or she is part of that larger group and that by
working together, the reader can accomplish more than he or she can individually.
The rhetorical strategy that accomplishes this is to address both friends and
strangers simultaneously. One might name leaders and associations that are part
of the cause already and also name concrete opportunities for new people to get
involved. One might describe people currently in the community who have
similar values or experiences as those who have not yet joined, so that they can
see themselves already there.
In academic writing, the audience is an academic public that is committed
to creating knowledge together over the long term. As Joseph Harris (2006)
explains convincingly in Rewriting: How to Do Things with Texts, academics
draw on the works of those who have come before by extending their ideas,
countering their findings, or drawing on their methodology. Especially in the
review of literature sections, but often throughout the analysis, academics name
colleagues working on the same kinds of questions. These moves demonstrate that
knowledge-making is clearly a collaborative enterprise. Indeed, all the rules for
accurate and complete citation are in place to ensure that the collaborative motive
is not compromised by sloppy or unethical attribution. They are tools that
academics use to reinforce and perpetuate their interdependence; they signal that
no individual scholar can arrive at a full, complex view of the world — we need
each other.
Faculty teaching service-learning courses can help students understand
how writers invoke the public value of their work for both community and
academic contexts. This framework helps us see the common rhetorical strategies in
academic and community work. Moreover, examining these strategies can help us
name the aspects of service-learning that can be difficult. Sometimes, the
conventions that ensure one’s place in the academic public can be at odds
with the conventions of the communities where we work and vice-versa. Students
need to pay critical attention to the distinctions among those conventions if they
hope to cross boundaries effectively.
Anacostia Watershed Society and the George Mason University Chemistry
Department: A Study in Public and Academic Writing
The service-learning context can offer helpful, specific ways to explicate
these rather abstract concepts. Using concrete examples, such as websites and
reports, we can compare the rhetoric of academics and community organizations
and show how those moments in the texts that convey purpose, capacity, agency
and interdependence also convey the underlying value systems within each group.
To illustrate, I’ll offer a case study of the Anacostia Watershed Society in
Washington, DC, their State of the River report, and a study about polychlorinated
biphenyls (PCBs) in the Anacostia River conducted by George Mason University
chemistry professors and published in the Journal of Environmental Science and
Health.
Overview of the river and the two communities who report about it
The Anacostia River flows through eastern Washington, DC, bordered
by Wards 5, 6, 7 and 8. It is fed by watersheds in Maryland’s Montgomery and
Prince George’s Counties. Near the southern tip of DC, it feeds into the Potomac;
then it joins the Chesapeake Bay and drains into the Atlantic. The Department of
Health of Washington bans any recreation that would provide primary contact
with the river, and it has declared the fish and shellfish too contaminated to eat.
The pollution in the river has been attributed to overflow from DC sewage
treatment centers during storms, other DC and area stormwater drainage
problems, agricultural chemicals flowing down from Maryland farmlands, and
pollution and leaks from industries and/or contaminated sites of now-defunct
industries.
One community organization that would like to see the river clean is the
Anacostia Watershed Society (AWS). The mission of The Anacostia Watershed
Society is “to protect and restore the Anacostia River and its watershed
communities by cleaning the water, recovering the shores, and honoring the
heritage in order to make the Anacostia River and its tributaries swimmable and
fishable for the health and enjoyment of everyone in the community.” The
organization maintains on-going partnerships with area universities, and students
are invited to help with trash clean-up and removal of invasive plants. AWS
partners with local science classes at all levels, as well as with local
environmentally-oriented community organizations. They focus on recreational
activities (to help people develop a familiarity with and commitment to the river),
stewardship activities (removing non-native plants, picking up trash, building rain
gardens, monitoring water quality) and advocacy work (identifying sources of
pollution and applying pressure on appropriate targets to address them).
An academic researcher who studies the pollution in the Anacostia River
is Dr. Greg Foster, professor of chemistry and biochemistry at George Mason
University. Like the AWS, the goals of the Foster laboratory include tracking down
sources of contaminants and aiding in river clean-up. The GMU chemistry and
biochemistry department announces that students working with Dr. Foster can
expect to contribute to projects along the Anacostia River:
Students in the Foster research laboratory investigate the sources,
reactions and transport of contaminants in the aquatic
environment. [One of the] ongoing lines of active research . . .
involves determining the amounts and sources of polychlorinated
biphenyls (PCBs) in storm runoff in the Anacostia River.
In addition to the concrete tasks of identifying pollutants and identifying
best practices for removing them, Dr. Foster contributes to the field of chemistry
and biochemistry by developing analytical methods. A biographical blurb on
George Mason’s Research Groups page describes Foster’s agenda this way:
“divided among assessing urban regions as sources of organic contaminants to
coastal air- and watersheds in the Chesapeake Bay region, developing
technologies to remove contaminants that harm the aquatic environment, and
developing analytical methods.”
This last point is a critical distinction, as it indicates Dr. Foster’s
commitment to the university as a place to develop and refine research methods.
Whereas the AWS hires chemists and biologists to monitor and track the impact
of pollutants, Dr. Foster’s purpose is also to evaluate and improve the
methodology used to do such work. In this way, his affiliation with the university,
and with the broader goals of knowledge creation, extends beyond the practical focus on
cleaning up the river.
Analyzing Community and Academic Documents about the Anacostia River
To illustrate how scholars and community members build a sense of
public purpose and capacity in their writing, we can compare the State of the
River Report Card, a publication from the Anacostia Watershed Society, with an
article co-written by Dr. Foster in the Journal of Environmental Science and
Health that also evaluates pollution in the Anacostia River.
The State of the River Report Card is an 8-page, 8½ x 11 glossy pamphlet
with a full-color photo of a Great White Egret and a Great Blue Heron standing in
the Anacostia River. Most of the pages include large graphics and little text. The
first page offers a short welcome statement from the President of the Anacostia
Watershed Society and the Riverkeeper and Executive Director of the Anacostia
Riverkeeper (AR), the organizations that jointly wrote the report. They explain,
“This annual report card is your guide to how well our communities,
environmental groups, and governments are meeting the goal of a fishable and
swimmable Anacostia River as soon as possible. ... It provides a benchmark of
the core river health parameters based on scientific data and policy efforts” (p. 1).
Below the letter, but above the signatures, is a photo of the two leaders, standing
in front of the Anacostia River. The mission statements of the two organizations
are listed. At the bottom of the page, in small print, is a series of “disclaimers”:
these footnotes report the assumptions behind their methodology, with
acknowledgements like, “All available, professionally collected data was used.
The data sets include those collected by DC government, Maryland Department of
Natural Resources, and the Anacostia Watershed Society.” Acronyms are
explained, and explanations about rate calculations are provided.
The remaining pages lay out the findings, using short paragraphs and
plenty of graphics. The second page shows a map of the Anacostia River with
named bridges indicated. Three large “Fail” notations are stamped along the river.
The page defines the three main areas studied and the main impediments to clean
water, and the parameters used to assess the water quality. The third page includes
a chart, delineating the water quality according to each of the assessment areas
and the number of years estimated to meet the water quality standards. Page four
is a political report card, evaluating public policy around stormwater
management, toxics, trash and overall plan for DC, two Maryland counties, the
State of Maryland and the Federal government. The ratings are indicated visually
by thumb-up or thumb-down hands. The final pages provide brief (one-to-two
phrase) explanations of the problems that the river associations seek to address,
with photographs and one or two sentences describing solutions, which include
environmental site design (such as rain gardens) along with political pressure and
legal action to address trash, toxics and bacteria.
The Journal of Environmental Science and Health article, in contrast, is a
dense seven and a half page document followed by a page and a half of footnotes.
The descriptive title is “Polychlorinated biphenyls in stormwater runoff entering
the tidal Anacostia River, Washington, DC, through small urban catchments and
combined sewer outfalls.” The abstract announces that the major findings
contradict previous assumptions about the primary sources of PCB contamination:
“The present study suggests that input of PCBs from Lower Beaverdam Creek is
likely to be greater than those from the two major branches (Northeast and
Northwest Branches) that were believed as primary source areas.”
The article includes an introduction, which reviews the current state of the
Anacostia River, drawing primarily on government reports and identifying the
exigency for this project: “To achieve the first goal of the Anacostia River action
plan — the reduction of pollutant loadings — it is essential to understand the
sources and behavior of pollutants in stormwater runoff, which is regarded as one
of the major pathways delivering urban pollutants to surface water” (p. 568). The
paper argues for the methodology necessary to complete such a task: “A
quantitative understanding of the sources of PCBs in stormwater runoff and its
transport dynamics in the Anacostia River will be essential in developing cost-
effective stormwater runoff control strategies employing effective best
management practices” (p. 568).
After reviewing the sample collection and the strategies and materials used
for extracting PCBs, the majority of the article discusses the level of PCBs in
stormwater runoff, noting that most occur as particles, and then evaluating how
the particles behave in the water. The analysis includes explanations about the
techniques used to evaluate the data, equations that express the relationships of
materials in the stormwater, graphs that show the linear regressions of some
findings. The analysis refers frequently to the previous studies of PCBs in the
Anacostia River and elsewhere, both to give credit when this study uses their
techniques and to indicate how the current study extends the findings or methods
of those works. It concludes with a review of the findings, acknowledgements of
the funding sources for the research, and thirty-five bibliographic endnotes.
Purpose: Both documents arise from a concern about pollution in the
Anacostia River. The State of the River responds to a political need to document
progress (or lack of progress) from DC, Maryland counties, Maryland state and
federal agencies in stemming the causes or cleaning up the pollution. It provides a
water quality report that looks at four indicators of pollution; by highlighting the
number of years until the water quality will meet set standards, it again heightens
the need to take action. The document is created to rally people to take action and
to encourage them to take action with AWS and AR.
Identifying the purpose in academic work can be harder for students, who
often criticize academic work for merely explaining problems and not laying out
solutions. The scholarly article is also motivated by a concern about pollution in
the Anacostia River and it draws on government sources to clarify the urgency of
the problem. The main focus of the article, though, is not the lack of progress but
a gap in the understanding of the problem itself. The goal is not to rally people to
make change, but to rally people to study the problem more closely so that the
action taken later will be most effective. The article seeks to “understand the sources
and behaviors of pollutants in stormwater runoff,” and while the ending does
suggest some potential best practices to address the sources and behaviors that the
researchers found, the majority of the document lays out the methods and analysis
that allow them to better describe one particular part of the problem. The article is
designed to build on previous research and provide evidence-based data that can
contribute to someone else’s solution design.
Agency and Capacity: We can readily see how the authors of the State of
the River indicate that they have the capacity to take on the problems they’ve
identified. The introductory letter from the Executive Director of Anacostia
Riverkeeper and the President of Anacostia Watershed Society ends with an
upbeat assertion that “We can clean [the River] up if we work together!” After
listing the current failed water quality standards and lack of action by the
politicians, the document outlines the solutions that the nonprofits advance to
address the issues. The problems are depicted visually with photographs of the
river. For the problem “Stormwater,” we see a photo of the River under normal
flow and a photo of the same site with high and turbulent waters. The solution is
“Environmental Site Design,” such as raingardens that “maintain a site’s original
drainage pattern as much as possible by capturing and infiltrating rainwater.”
Similarly, the page on the problem of “Toxics, Trash and Bacteria” uses a
photograph of trash in the water and a picture of an industrial building tagged as a
“legacy toxic site.” Here, the solution is “education, restoration projects, and legal
action.” Finally, the document lists the eight actions individuals can take,
including supporting the organizations by donating and volunteering. The reader
of the document is invoked as someone who has power to pressure the political
entities who received the thumbs-down ratings, and who can support the work of
the organization through donations, volunteering, and a few individual actions.
Finding the indicators of agency and capacity in the academic article is
easier when students understand the audience to be other academics, whose job is
to scrutinize the methodology and analysis of any study. As Frank Haig and Peg
Kay argue in “The Role of Academies of Science in the Critical Examination of
New Ideas,” professional communities “provide a willing but intelligent audience
to which an innovator can make a presentation” (p. 61). Once presented, “a new
idea has to find its way to acceptance. The path may be long and conflicted. The
opposition may be intense and tortuous. The process, however, is necessary to
ensure the emergence of a founded confidence on the part of the broad scientific
community” (p. 61).
Once we understand this context, it’s easy to find the places where the
PCB report authors assert who has the capacity “to understand the sources and
behavior of pollutants in stormwater runoff” (p. 568). Throughout the article, the
authors anticipate their skeptical audience by reaffirming the capacity of their
disciplinary and interdisciplinary methodology to explain pollutant sources and
behaviors. They regularly anticipate concerns about how they have executed the
techniques, acknowledging and justifying whenever they have made variations on
previous methods. Moreover, they regularly qualify their assertions; they
understand that for their argument to be persuasive, it cannot exceed the capacity
of the methods. In this way, they provide an active and important role for their
academic readers, just as AWS and AR do for their civic readers.
Interdependence: Examining the rhetoric of interdependence in a
document can help us gain a deeper understanding of how the authors imagine the
members of their audience should relate to each other to accomplish the work at
hand. The nonprofit authors of the State of the River depend on public attention
and action to achieve their political and environmental goals, so their readers must
come away from the document feeling as if they are part of a broader community
that is invested in the cause. The rhetorical challenge in such a document is to
address newcomers and current participants at the same time, and to help them see
their relationship with each other. Some of the gestures that invoke such
interdependence are easy to spot — as in the closing of the introductory letter, “we
can clean it up if we work together! Will you join us?” The audience here is the
newcomer, interested in becoming part of the “us” crowd. The solutions proposed
are described as “our” solutions and what “we” do. However, sometimes the
reader is treated as separate from the organizations, a move that seems to
undermine the message of interdependence. The organizations are given agency
and capacity for certain kinds of change, while the reader is invited to take
different steps. The steps that “you” can take include actions readers can do
individually, without the nonprofits. While the final action is “support the
Anacostia Riverkeeper and the Anacostia Watershed Society,” the overall
phrasing in this section suggests that the reader may not need the organization in
order to accomplish the same goals.
Just as public texts should convince their audiences that they are already
potential actors in that community, so in the academic context, the readers should see
that they have a significant, important role in the broader goal of knowledge-
making. Through the exchange of scholarly articles, scholars depend on each
other to help create, critique and extend knowledge. The bigger goal of arriving at
new understanding does not happen alone, and scholars regularly and explicitly
acknowledge their dependence on other scholars throughout their work. We can
see these efforts throughout the PCB article, as the authors name those scholars
who have asked similar questions in other geographical areas: “For comparison
with the present study, some other studies are summarized below” (p. 570). They
also confirm their results by comparing them with similar studies: “This high
enrichment of PCBs in the particle phase is quite common in urban stormwater
samples. Eganhouse and Sherblom found high concentrations of PCBs in the
particle phase (50-98%) in combined sewer overflow samples” (p. 570). The
general academic habits of citation and attribution are part and parcel of this need
to acknowledge interdependence. By acknowledging how our methods and
analysis have been influenced by previous scholars, we signal that we understand
that on-going knowledge requires us to read each other critically, to explain the
heritage of our ideas, and to offer a clear roadmap for future scholars so that they
can replicate our work. If we skip any of these steps, we risk violating the
expectations for our roles as members of the academic community.
Public and Academic Writing in Service-Learning classes
To keep my argument relatively concise and straightforward, I limited my
analysis to two documents related to the same public concern. When faculty
develop partnerships with community organizations working on public issues, and
when we help students conceptualize research projects that can be useful to those
organizations, I suggest that we ask students to prepare both academic- and
community-oriented documents based on their research. The challenge of
transforming their analysis to meet the different expectations of each community
allows them to explore the parallels and distinctions in those rhetorical
conventions. I further contend that we provide a helpful framework for
understanding the variations they find. A productive rhetorical framework will
demonstrate that at a broader level, academics regularly signal their participation
in academic communities as they write, and they follow conventions that reveal
their understanding of what academics value and their understanding about why
academia itself is valuable to the broader public. To accomplish this, academics
project their purpose, their agency, their capacity to address the problem, and the
audience’s interdependence with the author and each other. By comparing the
rhetoric of community organizations with the rhetoric of academia, students in
service-learning courses can see that both groups draw on a repertoire of
community-building strategies. Naming and analyzing these distinctions can help
students consider their own responsibilities and potential as “academic citizens.”
References
Anacostia Riverkeeper and Anacostia Watershed Society (2010). State of the River
Report. Washington, D.C.: Author.
Coogan, D. (2006). Service learning and social change: The case for materialist
rhetoric. College Composition and Communication, 57(4), 667-693.
Corporation for National and Community Service. (2007). The impact of
service-learning: A review of current research. New York: Author.
Deans, T. (2000). Writing partnerships: Service-learning in composition. Urbana, Ill.:
National Council of Teachers of English.
Eyler, J. S., Giles, D. E., Jr., & Braxton, J. (1997). The impact of service-learning on
college students. Michigan Journal of Community Service Learning, 4, 5-15.
Haig, F. R., and Kay, P. (2006). The role of academies of science in the critical
examination of new ideas: Looking at Gaia. Journal of the Washington Academy of
Sciences, Summer, 61-67.
Harris, J. (2006). Rewriting: How to do Things with Texts. Logan, Utah: Utah State
University Press.
Hauser, G. A. (1999). Vernacular voices: The rhetoric of publics and public spheres.
Columbia: University of South Carolina Press.
Hwang, H., and Foster, G. D. (2008). Polychlorinated biphenyls in stormwater runoff
entering the tidal Anacostia River, Washington, DC, through small urban
catchments and combined sewer outfalls. Journal of Environmental Science and
Health, Part A, 43, 567-575. doi:10.1080/10934520801893527
Ryder, P. M. (2011). Rhetorics for community action: Public writing and writing publics.
Lanham, Md.: Lexington Books.
Sommers, N., and Saltz, L. (2004). The novice as expert: Writing the freshman year.
College Composition and Communication, 56(1), pp. 124-149.
Warner, M. (2005). Publics and counterpublics. Cambridge: Zone Books.
¹ In Writing Partnerships, Thomas Deans offers a helpful overview and useful advice about
“writing for” and “writing with” communities in service-learning courses.
² For more on theories of public rhetoric, see Hauser, Ryder, and Warner. For more analysis of the
public function of the academy, see the final chapter in Ryder.
Service-Learning in Croatia and the region: progress,
obstacles and solutions
Nives Mikelic Preradovic
University of Zagreb, Croatia
Abstract
The goal of the paper is to discuss the possibilities of transforming
universities in Croatia and the region into places that take account of the
emerging community trends and current challenges that our students
should be capable of dealing with once they finish their studies. Service-
learning (SL) was introduced in the largest faculty of the University of
Zagreb (Faculty of Humanities and Social Sciences) in 2006-07 through
a series of faculty workshops and through academic courses. The goals
and requirements of this teaching and learning method were based upon
our U.S. experience, gained at the George Washington University. Since
then around 50 SL projects in the IT field have been completed and
evaluated. Service-learning was also introduced in the Faculty of
Economics at the University of Rijeka in 2008. In 2009 it was incorporated
into the Croatian National Youth Program 2009-2013, approved by the
Croatian Government. Also in the same year, the
Croatian translation of the term “service-learning” (“drustveno korisno
ucenje”) offered by the author of this paper became accepted as a
common term at the JFDP (Junior Faculty Development Program)
Regional Conference. Although the workshops were extremely popular
both in Croatia and the region, and although SL courses achieved
remarkable student enrollment in a short time, the number of faculty
who have so far implemented it as a teaching strategy is very low. This
paper discusses the reasons for faculty resistance to engage in SL and
some possible solutions.
Introduction
In this paper we present the recent development and evaluation of
service-learning (SL) in Croatia and the region, explaining the advantages
and drawbacks of the application of SL in the Bologna Process in our
universities.
The Bologna Process is a European reform process driven by the
46 countries aiming at establishing a European Higher Education Area
(EHEA). The Process officially started in 1999, when 29 countries signed
the Declaration in Bologna (hence the name of the whole Process). The
Declaration states the following objectives: adoption of a system of easily
comparable degrees based on two main cycles, undergraduate and
graduate; establishment of a system of credits - European Credit Transfer
System (ECTS) and promotion of European co-operation in quality
assurance.
Croatia joined the Bologna Process in 2001, in Prague, where the
ministers adopted the so-called Prague Communique, introducing several
new elements in the Process: students were recognized as full and equal
partners in the decision making process; the social dimension of the
Process was stressed and the idea that higher education is a public good
and a public responsibility was highlighted.
The purpose of this paper is threefold. The introductory part gives
an overview of the most important problems facing higher education in
Croatia today and presents a remarkable solution to these problems -
service-learning. The second part identifies the problem of integration of
service-learning in the curriculum and provides suggestions to advance
service-learning in Croatia and to improve student confidence and
knowledge of the world by combining service-learning and e-learning
teaching methods. Finally, the third part of the paper describes the
progress of service-learning in Croatia and the benefits it brings to
students and the whole educational system.
Challenges to the Croatian Educational System
The two most important educational issues facing higher education
in Croatia and the region today are: theoretical knowledge without skills
and a weak connection between the university and the community and
between the university and the labor market. Higher Education (HE)
institutions in Croatia and the neighboring countries have worked for
many years on their curriculum to keep pace with the scientific
advancement of the field. The community, on the other hand, has its own
development, emerging trends and problems that our students should be
capable of dealing with once they finish their studies. The community, the
labor market, and HE institutions have each developed along a different
track.
The London Communique, with the working title Towards the
European Higher Education Area: Responding to Challenges in a
Globalized World, is a document of the Ministers of Higher Education in
the countries participating in the Bologna Process. It reviews the progress
made in their countries since meeting in Bergen in 2005. The ministers
emphasized the need for an attractive and competitive labor market in
Europe and pointed out the major problems faced by higher education
institutions: preparing students to become active citizens in a democratic
society, as well as preparing them for their constantly changing future
careers, enabling their personal development and stimulating research and
innovation.
All the above mentioned issues are far more complex in Southeast
Europe than they appear to be in other European countries. A detailed
insight into those educational issues in Croatia results from a survey on
the implementation of the Bologna Process carried out at five universities in
Croatia (University of Zagreb, University of Rijeka, University of Zadar,
University of Split, and J. J. Strossmayer University of Osijek). The survey
involved a total of 3261 students in their second year of study.
We learned from the survey that 40% of the students study only to
pass the exams at the end of the course, not to acquire knowledge, and that
more than 50% of them never engaged in class discussion (either because
they never had a chance to do so or because they did not feel comfortable
expressing their opinions in front of a large class). For one third of the
students (31%), the biggest problem is the low level of, or almost total lack of, practical
work. Many lecturers think that an academic institution is not a place
where students should get knowledge at the application level, but rather on
a more abstract, theoretical level. Therefore, the majority of them
emphasize lecturing and theory rather than application and discussion, and
it is not rare that a law student never visits a court or that a language
teacher never tries teaching a class before he or she earns a diploma.
One of the changes that the Bologna Process initiated was
interactive teaching and a greater focus on students’ skills, competences,
and the practical implications of course material. The most frequently
mentioned student expectations of the Bologna Process in Croatia, which
unfortunately have not been fulfilled to date, are: work in small groups,
teamwork, fieldwork and practical classes. In addition, the results of the
Gallup survey (European Commission: Eurobarometer, 2009) revealed
that 76% of Croatian students strongly agree that they need more
opportunities to acquire skills to meet the demands of today’s workplace -
communication skills, teamwork and learning to learn. Also, 66% of
higher education students in Croatia strongly agree that study programs
should focus on teaching specialized knowledge. Finally, between 70% and
78% of students in Croatia also said enhanced personal development was a
very important goal of higher education. The survey’s fieldwork was
carried out from 12 February to 20 February 2009. Almost 15,000
randomly-selected students in higher education institutions were
interviewed in the 27 Member States of the EU, Croatia, Iceland, Norway
and Turkey.
Regarding opportunities to find a job after getting a degree, most
of the students in Croatia do not have confidence in the current
educational system. The biggest and justified concern our students have is
linked to the weak connection between the labor market and HE. The fact
is that at the moment there are 315,438 unemployed people in Croatia (an
unemployment rate of 18.7%). This is one of the largest obstacles to the
country's development. Unemployment continues to grow annually, with a
strong increase among highly skilled workers, limiting the
competitiveness of the domestic economy and hindering economic recovery. These
are the most important problems facing higher education in Croatia today.
These issues should be targets for further research and development of the
Croatian Higher Education System.
Service-Learning: A Strategy for the Croatian Educational System
This paper proposes the service-learning teaching strategy as a
way to address these community and educational challenges. The
approach emphasizes the integration of service learning into the
curriculum in Southeast Europe and Croatia in particular.
Service-learning (SL), a teaching strategy that integrates
meaningful community service with academic learning, is a remarkable
solution for bringing community, labor market and HE institutions in
Croatia and the region more closely together to satisfy the goals of the
Bologna Process.
Through service-learning our students could learn not only how to
connect course theory and practice, but also how to help others, give of
themselves, and enter into caring relationships with others in their
community. The goal of service-learning is to assist students to see the
relevance of their new knowledge in the real world. That is what they are
missing at the moment.
Although well developed in North America, SL is for the most part
still absent in Europe. The Community Learning Program that has been
developing since 2001 in the Dublin Institute of Technology was, until
recently, the only European example of service learning.
Service-learning was introduced in the final year of graduate study
in the Faculty of Humanities and Social Sciences, the largest faculty of the
University of Zagreb (Croatia), in 2006-07 through a series of faculty
workshops and academic courses. The goals and requirements for this
teaching and learning method were based upon my U.S. experience gained
at the George Washington University, where I was a Junior Faculty
Development Program scholar.
Up to that point, students of information sciences learned the
theoretical concepts and applied them to imaginary or simulated
circumstances, but rarely managed to apply the acquired knowledge to the
real world. The service-learning projects provided them with structured
time to rethink and implement ideas that they had during their 5-year
study but had never had an opportunity to transform into “hands-on”
experiences and observe the results.
In the first two years, about 50 SL projects in the IT field were
completed and evaluated (Mikelic Preradovic, Kisicek & Boras, 2010).
After the successful project outcomes in the test phase, service-
learning was introduced into the final year of undergraduate study as well,
as a part of a new curriculum under the Bologna Process in the Academic
Year 2007-2008. Students were surveyed at the end of that academic year
and the results showed that such placement in real (vs. theoretical)
learning situations was very important in increasing the confidence and
self-esteem they felt they needed once they entered the labor market.
These projects will serve as an excellent reference and indication of their
creativity and ability to engage intellectually, emotionally and socially.
Regarding the unemployment issue, we believe that an innovative
framework for addressing high-skilled youth unemployment would be to
combine service-learning, community development, and career
development into SL projects that would increase students' levels of
personal and social development, core skills, and employability, and build
life-long connections between students and their communities.
Service-learning can assist our students with developing work/life
skills, knowledge, and career passions, which could improve their future
work prospects through increased community awareness. Furthermore, it
can increase their ability to develop and match specific and transferable
skills with the requirements of today's labor market. Finally, service-
learning can help (at least as a partial solution) to meet the educational
needs of long-term unemployed people and to develop learning
opportunities in response to identified needs.
Despite the obvious benefits SL brings to students and the whole
educational system, it is not yet popular among the faculty in Croatia and
the region, at least not as popular as e-learning. In the next part we offer
potential explanations and examine how we might overcome the barriers
to the adoption of service-learning in Croatia and Southeast Europe.
Service-Learning: Challenges to Integration in Croatia
With so many obvious benefits, one might think that faculty
members in Croatia would universally embrace service-learning with great
enthusiasm. Unfortunately, that is not the case.
Although the service-learning workshops in Croatia were
extremely popular and received strong support from the Dean of the
Faculty of Humanities and Social Sciences, the number of faculty who
have since implemented SL as a teaching strategy is very low.
Perhaps the reasons for this can be found in the workshop exit
surveys. When asked if they planned to incorporate SL into their teaching,
a number of attendees expressed concern that this teaching strategy
is more time-consuming and requires more commitment than traditional
seminar teaching. They also mentioned logistical difficulties in
implementing SL, since their classes are usually large and it is hard to organize
students into groups with similar levels of motivation that will work
productively at the same pace.
Regarding the logistics, teaching loads in Croatia are indeed heavy.
By way of comparison, a single bachelor's course at the Faculty of
Philosophy, University of Zagreb, has 60 students enrolled every year on
average, while a comparable course at Georgetown University in
Washington, DC, has up to 10 students enrolled per year.
Apart from these SL-specific concerns, another issue is that our faculty members
lack autonomy in curriculum design. The Croatian Ministry of Science,
Education and Sports designs the curriculum and dictates what shall be
taught and, unfortunately, the faculty members have a relatively minor
role in that process. Consequently, faculty members have less freedom to
innovate in their teaching, and also little tradition of or motivation for
innovation.
Although the difficulty of integrating service-learning into curricula affects
Eastern Europe on a larger scale, the problem is not
geographically limited. Although perceived as a successful and innovative
teaching strategy by many practitioners in the field (Strand, Marullo,
Cutforth, Stoecker & Donohue, 2003; Marullo & Edwards, 2000; Bringle,
Games, & Malloy, 1999), barriers against its integration in the U.S.
curriculum include scarce administrative support, low faculty participation,
and budgetary constraints (Bringle & Hatcher, 2000; Holland, 1997;
Ward, 1996).
We posit that the above-mentioned reasons are not strong enough
to explain why Croatian faculty avoid engagement in SL projects and the SL teaching
strategy on such a large scale, and that faculty only need a compelling
enticement to recognize service-learning as a research and teaching tool
worth the time and the effort.
Our five-year service-learning research and experience shows that
faculty who replace the seminar part of their course with SL projects (in
other words: those who replace imaginary problems and solutions with
real community experience) never give up this teaching method, no matter
how scarce the budget or administrative support may be. On the contrary,
they enthusiastically continue to motivate students to enroll and they
emphasize that, once they get past the initial logistics, every year it becomes less
time-consuming, more meaningful, and far more than just a setting for teaching
theoretical concepts in a hands-on manner.
The obvious benefits of SL for faculty members include taking on new
roles, seeing students excited and the classroom energized, building
personal connections with students, learning from our students, and seeing
greater student involvement in discussions and greater appreciation of the
relevance of the subject. These benefits outweigh the logistical problems.
Therefore, we believe that we need to discover the key motivators
to raise Croatian faculty’s interest in service-learning. We also believe that
combining service-learning with new and successful e-learning methods
that are not yet used in Croatia (such as the introduction of e-portfolios)
would make the faculty more willing to engage in SL.
E-portfolios are digitized collections of text-based, graphic, or
multimedia artifacts including demonstrations, resources, and
accomplishments that represent what a person has learned over time and on
which he or she has reflected. They are designed for presentation to one or
more audiences for a particular rhetorical purpose and can be used for
final assessment as well as for reflection, deep learning, knowledge
growth and social interaction.
Faculty could first start using e-portfolios as a tool for student
reflection in their e-courses; later, the e-portfolio can be employed as a tool for
connecting the service experience to the learning in their courses.
Therefore, we plan to explore the possibility of connecting SL course
design with e-course design, so that Croatian faculty members get the
opportunity to combine the elements of two educational innovations
(e-learning and service-learning) in a thoughtful way.
Service-Learning: Progress in Croatia
During the Academic Year 2006-07 the author conducted
workshops for Croatian faculty in different fields at different
Croatian universities, as well as for school teachers and NGOs, in order to promote SL
and share in-class experiences. Service-learning was also introduced in the
Faculty of Economics, University of Rijeka (described in detail in the paper
co-authored with Jelenc & Mujevic, 2008). As of 2008-09, a stand-alone
elective course, “Service-learning in Information Sciences,” has been
offered to all students at the University of Zagreb, and it has achieved
remarkable enrollment in a short time.
In 2009 SL was incorporated into the Croatian National
Youth Program 2009-2013, approved by the Government. Also in the
same year, the Croatian translation of the term “service-learning”
(“drustveno korisno ucenje”), coined by the author of this paper, became
accepted as a common term at the JFDP Regional Conference. Since
2008 all the service-learning projects in the Faculty of Humanities and
Social Sciences have been transformed step by step into service e-learning
projects, offering faculty and students the ability to apply the e-learning
technology they enjoy to service-learning pedagogy through projects that
are (at least partially) conducted online.
Because information technology provides an
opportunity for students of information sciences to help community
organizations, and because information literacy is becoming an important social
issue, our students truly have a broad field of activity in which they can pursue
different interests and apply specific knowledge and skills.
All of our students who took part in service e-learning projects
used email, discussion boards, a content management system (Moodle),
online journals, and word processor collaboration features for sharing,
collecting, and organizing their work, as well as their reflections.
Below we briefly describe some of the service e-learning projects
that were directly related to a community need. In most of these projects,
the students selected the project in consultation with the supervisor in the
chosen NGO, school, library or museum.
Project 1. Starting with the school year of 2009-2010, all pupils
who complete the fourth grade of grammar school in Croatia take the state
graduation exam (based on the Act on Primary and Secondary
Education). The state graduation exam has two parts: mandatory exams
in general education subjects such as Croatian language and elective
exams in one or more optional subjects, such as Informatics. Our
information science students came up with the following project idea - an
online demonstration designed as a preparatory step for the state
graduation exam covering the complete information and computer science
curriculum of the state grammar schools. The students’ partner was the
National Centre for External Evaluation of Education, which creates paper
exams for all subjects in the state graduation exam and delivers exam
materials to schools.
Our information science students summarized their knowledge and
skills in the field of computer and information sciences and created the
application in the midst of turbulence caused by the introduction of the
state graduation exam. With this project they aimed to benefit themselves by
connecting the theory learned during their studies with new practical
experience, while at the same time helping the pupils achieve at a high
level in the state graduation exam.
The State Graduation Online Demo Exam in Informatics consists of
50 multiple-choice questions written in ActionScript 3 in
collaboration with the National Centre for External Evaluation of
Education. It was tested and evaluated by the third-grade pupils of Velika
Gorica grammar school, who will take the state graduation exam at the
end of the school year 2011-2012. The evaluation was performed as an
online survey that aimed to identify the impact of the student project on
grammar school pupils and to discover suggestions that could improve the
effectiveness of the exam.
The overall rating of the demo test was high. Regarding the
service-learning component, this project contributed to the pupils'
preparedness for the state graduation exam in an elective subject,
informatics, and offered them an insight into new technology and new
ways of acquiring and evaluating knowledge (such as e-learning and
online exams). The questions in the demo exam cover the entire content of
the subject of Informatics for the 4-year grammar schools, while the
simple but interactive online application enables pupils to take the exam
and test their knowledge anytime and anywhere, getting immediate
feedback.
Project 2. The aim of the project was to develop the educational
corner for the elementary school’s web page that would help the school
and its teachers to attract more pupils to visit the web page and learn
something new or possibly affirm their knowledge on a familiar topic
while browsing through the content and the enjoyable educational
activities. The students wanted the pupils to learn in a fun, interesting, and
different way by providing an e-environment where they felt comfortable
learning. Our information science students were given numerous handwritten
materials made by the school’s pupils during the school year, which
consisted of anagrams, mental maps, games of logic, quizzes on general
knowledge, Croatian language and literature, history, etc. Although it took
a lot of time and effort to convert these handwritten materials into useful
e-activities, the students used them as a means of attracting pupils. Their basic
hypothesis was that e-activities would look more friendly, interesting, and
engaging to pupils if the pupils’ own ideas and materials were implemented in
the e-environment.
The educational corner was also intended to make the
elementary school’s teachers realize the importance of communicating
with their pupils through a different medium, an online environment, and to
encourage them to put their educational materials online in order to
establish better communication and interaction with their pupils. Many
teachers embraced this application, while school pupils were excited to
find that their handwritten materials were used for something creative and
useful.
Project 3. Another group of information science students designed
a multimedia project for the NGO “Friends of Animals”. Although a
leader in their field in Croatia, the NGO was at the very beginning of IT
usage when we first established contact and offered help. They had
computers and a website, but did not possess the knowledge to use IT as a
driver for reaching their goals.
Therefore, they were excited about the students’ SL project, which
aimed to inform citizens about the vegetarian products available in our
stores, to encourage them towards a healthier lifestyle using vegetarian recipes,
and to help them learn about healthy food in an interesting way (via an interactive
database and multimedia applications on a CD-ROM). The NGO was
happy to promote these products by distributing the CD application in the
community for free at an event organized during World Vegetarian Day.
One of our students received a job offer from the NGO for the
position of information technology manager.
Project 4. Museology graduate students found that their colleagues
and friends rarely visited museums in Zagreb, and also that it is difficult to
find funding to promote museums at the University in the form of
posters and brochures. Therefore, they designed an appealing e-brochure,
freely accessible on the Faculty website, for all
students who want to discover the world of museums in Zagreb. Their
partners were the following institutions: Archaeological Museum,
Croatian History Museum, Croatian Natural History Museum, Croatian
School Museum, Ethnographic Museum, Museum of Arts and Crafts,
Technical Museum and Zagreb City Museum. The number of monthly
visits to the e-brochure is growing, especially at the beginning of the academic
year, when freshmen explore the Faculty website.
Project 5. Another project group consisted of Museology graduate
students and information technology students with a teacher-education
orientation, who designed a workbook for children to complete during a visit
to the Zagreb City Museum, together with art workshops to help them acquire
knowledge in a museum setting. Their client was the Zagreb City Museum,
where they tested and evaluated the workbook with a group of elementary
school pupils. Both the pupils and the museum staff rated the workbook as an
interesting and useful tool for children, one they can keep as a souvenir of
their visit to the museum.
Each of the above project groups met a real social need, applying
the theoretical knowledge gained during their studies and acquiring new
skills required for the activities they selected according to their interests.
Another 45 groups also successfully completed excellent service-learning
projects.
In order to perform a student satisfaction analysis, we conducted a
survey in 2010, described in detail in Mikelic Preradovic, Kisicek &
Boras (2010). The evaluation was performed as an online survey
that aimed to identify the impact of SL on our students and to collect
suggestions that could improve the effectiveness of the course. The survey
consisted of 20 questions that encouraged students to reflect critically
on their SL experience, but also to reflect on the community
partners and the course itself.
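For readers who want to see how agreement rates of this kind are computed, the short sketch below tallies the share of respondents choosing "strongly agree" or "agree" on a single Likert-style item. The listed responses are invented placeholder data, not the survey's raw answers.

# Minimal sketch: tallying the share of "strongly agree" or "agree"
# responses for one survey item. The responses listed here are invented
# placeholder data, not the actual survey results.
from collections import Counter

responses = [
    "strongly agree", "agree", "neutral", "strongly agree",
    "agree", "disagree", "strongly agree",
]

counts = Counter(responses)
agreeing = counts["strongly agree"] + counts["agree"]
share = 100 * agreeing / len(responses)
print(f"{share:.1f}% strongly agree or agree")  # -> 71.4% for this sample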
Female students were slightly overrepresented in our sample:
71.4% of the students taking the survey were female. Interestingly, 57.1% of
the students had done volunteer work before taking this course, but most of
them did not perceive it as worth mentioning in their CV. All students
(100%) would recommend that the next generation of students enroll in the
course. Also, all of them think that the SL project was a rewarding
experience and that they expanded their existing knowledge and skills.
The survey numbers show that the majority of our students are willing to
volunteer in the community after the completion of the project (92.9%).
The overall quality of their service-learning experience was rated
high, with 85.7% of respondents stating it was excellent or good.
Furthermore, 71.4% think their SL experience was more educational than
a traditional seminar at the university. With regard to the relationship with
the community partner, 92.9% of the students would recommend their
community partner to future students.
Regarding the influence of the SL project on students, 50% of them
strongly agree or agree that they better understand the needs and problems
in their society, 57.1% strongly agree or agree that they feel responsible
for progress in society, 85.7% of students strongly agree that they would
encourage other students to enroll in the SL course, and 78.6% strongly agree or
agree that the social aspect of the project demonstrated how they can
become involved in community activities. Furthermore, 78.5% strongly
agree that they learned the content of the course and their study programme
better through the application of knowledge to real community problems, while 57.1%
strongly agree that this was a chance to reflect on their future career and
educational objectives. They consider the most important aspects of
service-learning to be teamwork, interaction with the client, references for
their CV, communication skills, applying knowledge, and being able to
give of themselves.
Additionally, students had to identify the areas in which the project had
a positive impact. 78.6% of them agreed that it positively influenced their
attitude towards service-learning projects, towards the faculty where SL
projects are implemented, and towards their studies and their work after
graduation. Also, 85.7% of them agreed that the project improved the
application and enrichment of the knowledge gained in their studies, as well
as their ability to work in teams, and increased their feeling of personal
achievement. Moreover, 71.4% of them agreed that the project fostered the
desire to help others and a sense of social responsibility and involvement in
society. Finally, 78.6% of them agreed that it increased their self-confidence
and skills such as communication, problem solving and persistence, as well
as their insight into their personal weaknesses and abilities.
In addition, our community partners evaluated the impact of the
project at the end of the semester, and 78.6% of them strongly agreed or
agreed that the SL project was genuinely useful to society. All of them
expressed interest in future collaboration with our institution and our
students.
Based on the experience described above, it can be concluded that
service-learning offers students a unique opportunity to recognize the
complexity of the concepts of academic courses and research issues. In
addition to the adoption of theoretical knowledge, these projects enabled
the students to integrate that knowledge with experience. The projects also
enabled the community to solve some of its problems and to strengthen its
connection to the University. Finally, the commitment of students to the
idea of service-learning made it possible to satisfy the most frequently
mentioned student expectations of the Bologna Process: teamwork,
fieldwork and work on students’ skills, competences and the practical
application of the knowledge they gained.
Future Work
The communities in which businesses are located today are
international, diverse and sometimes virtual. At the same time, the
number of e-courses at universities is growing rapidly,
as are the challenges of today’s rapidly changing, technology-
mediated reality. Therefore, both our teachers and our students need to prepare
themselves for an increasingly challenging e-linked work environment
with diverse participants and modes of engagement, and to see themselves
as part of a larger social entity in that global work environment.
We assume that service-learning combined with e-learning tools
can be used to reduce the challenges faculty face in course development and
facilitation in this new environment and to enhance learning. Faculty will be
able to use already available courses and a flexible system, such as a
content management system, that will accommodate future growth and
technology enhancements.
We strongly believe that Croatian teaching faculty would show
more interest in SL if they had been able to take part in an SL project during
their own studies. Hence, our objective is to design service-learning e-
courses that will be offered by the Center for Teacher Education at the
Faculty of Humanities and Social Sciences and at the university level, so
that every future school or faculty teacher gets a chance to try this method
at an early stage of his or her teaching career.
Finally, we plan to design a workshop for faculty and school
teachers that will combine elements of two educational innovations,
e-learning and service-learning, and share pointers for the development
of new courses. We also intend to promote the service e-learning method
through the e-portal UPraVO (the portal deals with curriculum planning and
innovative teaching methods) and through the online teaching support
center SPONA.
References
1. Bringle, R.G., & Hatcher, J.A. (2000). Institutionalization of
service learning in higher education. Journal of Higher Education,
71(3), 273-290.
2. Bringle, R.G., Games, R., & Malloy, E.A. (1999). Colleges and
universities as citizens: Issues and perspectives. In (Eds.) R.G.
Bringle, R. Games, & E.A. Malloy, Colleges and Universities as
Citizens (pp. 1-16). Boston, MA: Allyn and Bacon.
3. Driscoll, A., Holland, B., Gelmon, S., & Kerrigan, S. (1996). An
assessment model for service-learning: Comprehensive case
studies of impact on faculty, students, community, and institution.
Michigan Journal of Community Service Learning, 3(1), 66-71.
4. European Commission. (2009). Eurobarometer: Students and
Higher Education Reform - Survey among students in higher
education institutions, in the EU Member States, Croatia, Iceland,
Norway and Turkey. Special Target Survey, Brussels, 125 pp.
(http://ec.europa.eu/education/higher-
education/doc/studies/barometersum_en.pdf)
5. Gray, M. J., Ondaatje, E. H., & Zakaras, L. (1999). Combining
service and learning in higher education: Summary report. Santa
Monica, CA: RAND.
6. Hammond, C. (1994). Integrating service and academic study:
Faculty motivation and satisfaction in Michigan higher education.
Michigan Journal of Community Service Learning, 1(1), 21-28.
7. Holland, B. (1997). Analyzing institutional commitment to service:
A model of key organizational factors. Michigan Journal of
Community Service Learning, 4(1), 30-41.
8. Jelenc, L., Mikelic Preradovic, N., & Mujevic, D. (2008).
Implementing Model of Service Learning in Teaching Strategic
Management Course. In Proceedings of the 4th International
Conference on Enterprise Odyssey: Tourism - Governance and
Entrepreneurship, Zagreb, pp. 381-393.
9. London Communique (2007),
http://www.ond.vlaanderen.be/hogeronderwijs/bologna/
10. Marullo, S., & Edwards, B. (2000). From charity to justice. The
American Behavioral Scientist, 43, pp. 895-912.
11. Mikelic Preradovic, N., Kisicek, S., & Boras, D. (2010).
Evaluation of Service learning in ICT curriculum. In Proceedings
of the 2nd Paris International Conference on Education, Economy
and Society, Vol. 3, pp. 55-66.
12. Mikelic Preradovic, N. (2009). Ucenjem do drustva znanja: teorija
i praksa drustveno korisnog ucenja (Learning for the Knowledge
Society: service learning theory and practice). Zagreb: Zavod za
informacijske studije Odsjeka za informacijske znanosti
Filozofskog fakulteta Sveucilista u Zagrebu. (coursebook).
13. Mikelic, N., & Boras, D. (2006). Service learning: can our students
learn how to become a successful student? In Proceedings of the
28th International Conference on Information Technology
Interfaces, Zagreb: SRCE, pp. 651-657.
14. Morton, K. & Troppe, M. (1996). From margin to the mainstream:
Campus Compact’s project on integrating service with academic
study. Journal of Business Ethics, 15, 21-32.
15. Stanton, T. K. (1994). The experience of faculty participants in an
instructional development seminar on service-learning. Michigan
Journal of Community Service Learning, 1(1), 7-20.
16. Strand, K., Marullo, S., Cutforth, N., Stoecker, R., & Donohue, P.
(2003a). Community-based research and higher education:
Principles and practices. San Francisco, CA: Jossey-Bass.
17. Strand, K., Marullo, S., Cutforth, N., Stoecker, R., & Donohue, P.
(2003b). Principles of best practices for community-based
research. Michigan Journal of Community Service Learning, 9(3),
5-15.
18. Ward, K. (1996). Service-learning and student volunteerism:
Reflections on institutional commitment. Paper delivered at the
American Educational Research Association. New York, N.Y.
19. Ward, K. (1998). Addressing academic culture: Service learning,
organizations, and faculty work. In R. A. Rhoads & J. P. F.
Howard (Eds.), Academic service learning: pedagogy of action and
reflection (pp. 72- 80). San Francisco: Jossey-Bass Publishers.
Notes

i. By region we mean the neighboring countries of Bosnia and Herzegovina, Montenegro, Serbia and Macedonia.
ii. http://www.unizg.hr/bopro/activities/ankete.htm#
iii. http://www.moj-posao.net/Vijest/70975/Nezaposlenih-u-prosincu-jos-vise/2/
iv. The term e-learning refers to the use of an e-learning platform: Moodle, Blackboard, WebCT, WebX.
v. The course is accompanied by a website that serves to promote and archive successful student projects: http://infoz.ffzg.hr/dku/index.htm, accessible through http://www.ffzg.unizg.hr/infoz/hr/
vi. http://www.propisi.hr/print.php?id=9392
vii. JFDP Regional Conference “Teaching Methods and Techniques at the Universities in South Eastern Europe”, Zagreb, March 2009.
viii. Official Gazette, 87/08.
ix. http://cal.ffzg.hr/Ispit_informatika_za_drzavnu_maturu/projekt.html accessible through university login
x. http://www.gimnazija-velika-gorica.skole.hr
xi. http://www.skola-retkovec.hr/edu-kutak/
xii. http://domus.srce.hr/iuoun/index.php?option=com_content&task=view&id=37&Itemid=61
xiii. http://www.unizg.hr/fileadmin/upravljanjekvalitetom/pdf/spona/spona.pdf
WASHINGTON ACADEMY OF SCIENCES
MEMBERSHIP DIRECTORY 201 1
M=Member; F=Fellow; LF=Life Fellow; LM=Life Member; EM=Emeritus
Member; EF=Emeritus Fellow
ABEL, DAVID (Dr.) 113 Hedgewood Drive, Greenbelt MD 20770-1610
(LM)
ANTMAN, STUART (Dr.) University of Maryland, 2309 Mathematics
Building, College Park MD 20742-4015 (F)
APPETITI, EMANUELA PO Box 25805, Washington DC 20027 (M)
APPLE, DAFNA DRAVNIEKS National Capital Society of American
Foresters, PO Box 9288, Arlington VA 22219 (M)
ARSEM, COLLINS (Mr.) 3144 Gracefield Rd Apt 117, Silver Spring MD
20904-5878 (EM)
ARVESON, PAUL T. (Mr.) 6902 Breezewood Terrace, Rockville MD
20852-4324 (F)
BAILEY, R. CLIFTON (Dr.) 6507 Divine Street, Mclean VA 22101-4620
(LF)
BARBOUR, LARRY L. (Mr.) Pequest Valley Farm, 585 Townsbury
Road, Great Meadows NJ 07838 (M)
BARWICK, W. ALLEN (Dr.) 13620 Maidstone Lane, Potomac MD
20854-1008 (F)
BEACH, LOUIS A. (Dr.) 1200 Waynewood Blvd., Alexandria VA 22308-
1842 (EF)
BEAM, WALTER R. (Dr.) 4804 Wellington Farms Drive, Chester VA
23831 (F)
BECKER, EDWIN D. (Dr.) Bldg. 5, Rm. 128, Nat. Institutes Of Health,
Bethesda MD 20892-0520 (EF)
BEKEY, IVAN (Mr.) 4624 Quarter Charge Drive, Annandale VA 22003
(F)
BEMENT, ARDEN (Dr.) National Science Foundation, 4201 Wilson
Boulevard, Arlington VA 22230 (F)
BERMAN, BARRY L. (Prof.) George Washington University,
Department of Physics, 725 21st St NW, Washington DC 20052-
0052 (F)
BERRY, JESSE F. (Mr.) 2601 Oakenshield Drive, Rockville MD 20854
(M)
BIGLARI, HAIK (Dr.) Sr. Director of Engineering, Fairchild controls,
540 Highland Street, Frederick MD 21701-7672 (M)
BIONDO, SAMUEL J. (Dr.) 10144 Nightingale St., Gaithersburg MD
20882 (EM)
BLACK, JACQUELYN G. (Dr.) School of Arts and Science, Marymount
University, 2807 N. Glebe Road, Arlington VA 22207 (F)
BLACKSTEN, HARRY RIC (Mr.) 4413 N. 18th St., Arlington VA 22207
(EM)
BODSON, DENNIS (Dr.) 233 N. Columbus Street, Arlington VA 22203
(F)
BOYER, WILLIAM (Mr.) 3725 Alton Pl., N.W., Washington DC 20016
(M)
BRIMMER, ANDREW F. (Dr.) Suite 302, 4400 MacArthur Blvd., NW,
Washington DC 20007 (F)
BRISKMAN, ROBERT D. (Mr.) 61 Valerian Court, North Bethesda MD
20852, (F)
BROWN, ELISE A.B. (Dr.) 6811 Nesbitt Place, Mclean VA 22101-2133
(LF)
BROWN, LEWIS R. (Dr.) US EPA, Mailcode: 7507P, 1200 Pennsylvania
Avenue, Washington DC 20704 (M)
CERF, VINTON G. (Dr.) 1435 Woodhurst Blvd., McLean VA 22102-
2234 (F)
CHRISTMAN, GERARD (Mr.) 6109 Berlee Drive, Alexandria VA 22312
(F)
CHUBIN, DARYL E. (Dr.) 1200 New York Ave, NW, Washington DC
20005 (F)
CHUCK, EMIL (Dr.) GMU, 4400 University Drive Stop 2C4, Fairfax
VA 22030-4444 (M)
CLINE, THOMAS LYTTON (Dr.) 13708 Sherwood Forest Drive, Silver
Spring MD 20904 (F)
COATES, VARY T. (Dr.) 5420 Connecticut Ave NW #517, Washington
DC 20015 - 2032 (LF)
COBLE, MICHAEL (Dr.) The Armed Forces DNA Identification
Laboratory, 1413 Research Blvd, Rockville MD 20850 (F)
COBLE, MICHAEL DEWITT (Dr.) The Armed Forces DNA
Identification Laboratory, 1413 Research Blvd, Rockville MD
20850 (F)
COFFEY, TIMOTHY P. (Dr.) 976 Spencer Rd., McLean VA 22102 (F)
COHEN, MICHAEL P. (Dr.) 1615 Q. St. NW T-1, Washington DC
20009-6310 (LF)
COLE, JAMES H. (Mr.) 9404 Fairpine Lane, Great Falls VA 22066 (M)
CORONA, ELIZABETH T 3003 Van Ness Street, NW #W316,
Washington DC 20008 (M)
CRISPIN, KATHERINE (Dr.) Geophysical Laboratory, Carnegie
Institution of Washington, 5251 Broad Branch Dr NW,
Washington DC 20015 (M)
CURRIE, S.J., C. L. (Rev.) Pres., Assn of Jesuit Colleges & Universities,
1 Dupont Circle NW #405, Washington DC 20036 (EF)
DANCKWERTH, DANIEL 419 Beach Drive, Annapolis Maryland
21403-3906 (M)
DAVIES, DICK (Mr.) Sale Lab, Inc., 1140 23rd St., NW #303,
Washington DC 20037 (M)
DAVIS, ROBERT E. (Dr.) 1793 Rochester Street, Crofton MD 21114 (F)
DEAN, DONNA (Dr.) 367 Mound Builder Loop, Hedgeville WV 25427-
7211 (F)
DEDRICK, ROBERT L. (Dr.) 21 Green Pond Rd, Saranac Lake NY
12983 (EF)
DIAGNE, NDEYE FAMA (Dr.) 8311 Marketree Cir, Montgomery
Village MD 20886-4919 (M)
DISBROW, JAMES (Mr.) 507 13th St SE, Washington DC 20003 (M)
DOCTOR, NORMAN (Mr.) 6 Tegner Court, Rockville MD 20850 (EF)
DONALDSON, EVA G. (Ms.) 3941 Ames St Ne, Washington DC 20019
(F)
DONALDSON, JOHANNA B. (Mrs.) 3020 North Edison Street,
Arlington VA 22207 (EF)
DUHE, BRIAN (Mr.) 6396 Hwy 10, Greensburg LA 70441 (M)
DUNCOMBE, RAYNOR L. (Dr.) 1804 Vance Circle, Austin TX 78701
(EF)
DURRANI, SAJJAD (Dr.) 17513 Lafayette Dr, OLNEY MD 20832 (EF)
EDINGER, STANLEY EVAN (Dr.) Apt #1016, 5801 Nicholson Lane,
North Bethesda MD 20852 (EM)
EGENREIDER, JAMES A. (Dr.) 1615 North Cleveland Street, Arlington
VA 22201 (F)
EL KHADEM, HASSAN (Dr.) Dept. of Chemistry, American University,
4400 Massachusetts Ave, Washington DC 20016-8014 (EF)
ENGLER, MD, RENATA J. M. (Col) 1900 Wallace Avenue, Wheaton
MD 20902-1302 (F)
ERICKSON, TERRELL A. (Ms.) 4806 Cherokee St., College Park MD
20740-1865 (M)
ETTER, PAUL C. (Mr.) 16609 Bethayres Road, Rockville MD 20855 (F)
EVANS, HEATHER (Dr.) Apt 419, 1727 Massachusetts Ave NW,
Washington DC 20036 (M)
FASANELLI, FLORENCE (Dr.) 4711 Davenport Street, Washington DC
20016 (F)
FAULKNER, JOSEPH A. (Mr.) 2 Bay Drive, Lewes DE 19958 (F)
FINKELSTEIN, ROBERT (Dr.) 11424 Palatine Drive, Potomac MD
20854-1451 (M)
FORZIATI, ALPHONSE F. (Dr.) 65 Heritage Dr, Unit 6, Cleveland GA
30528 (EF)
FRANKLIN, JUDE E. (Dr.) 7616 Carteret Road, Bethesda MD 20817-
2021 (F)
FREEMAN, ERNEST R. (Mr.) 5357 Strathmore Avenue, Kensington MD
20895-1160 (LEF)
FREEMAN, HARVEY 1503 Sherwood Way, Eagan MN 55122 (F)
FREHILL, LISA (Dr.) 1239 Vermont Ave NW #204, Washington DC
20005-3643 (M)
GAUNAURD, GUILLERMO C. (Dr.) 4807 Macon Road, Rockville MD
20852-2348 (EF)
GEBBIE, KATHARINE B. (Dr.) Physics Laboratory, National Institute of
Standards and Technology, 100 Bureau Drive, MS 8400,
Gaithersburg MD 20899-8400 (F)
GHAFFARI, ABOLGHASSEM (Dr.) 13129 Chandler Blvd, Sherman
Oaks CA 91401-6040 (LF)
GIBBON, JEROME (Mr.) 311 Pennsylvania Avenue, Falls Church VA
22046 (F)
GIBBONS, JOHN H. (Dr.) Resource Strategies, P.O. Box 379, The Plains
VA 20198 (EF)
GIFFORD, PROSSER (Dr.) 59 Penzance Rd, Woods Hole MA 02543-
1043 (F)
GLUCKMAN, ALBERT G. (Mr.) 18123 Homeland Drive, Olney MD
20832-1792 (EF)
GORDON, NANCY M Associate Director for Strategic Planning and
Innovation, US Census Bureau, HQ Rm: 8H128, Washington DC
20233 (F)
GRAY, JOHN E. (Mr.) PO Box 489, Dahlgren VA 22448-0489 (M)
GRAY, MARY (Professor) Department of Mathematics, Statistics, and
Computer Science, American University, 4400 Massachusetts
Avenue NW, Washington DC 20016-8050 (F)
GREENOUGH, M. L. (Mr.) Greenough Data Assoc., 616 Aster Blvd.,
Rockville MD 20850 (EF)
GRIFO, FRANCESCA (Dr.) Union of Concerned Scientists, 1825 K St
NW, Suite 800, Washington, DC 20006 (M)
GUTERMUTH, PAUL-GEORG (Dr.) Im Wingert 28, 53604 Bad Honnef
, Germany (EF)
HACK, HARVEY (Dr.) Northrop Grumman Corp., Ocean Systems MS
9105, PO Box 1488, Annapolis MD 21404-1488 (F)
HACSKAYLO, EDWARD (Dr.) 7949 N Sendero Uno, Tucson AZ
85704-2066 (EF)
HAIG, SJ, FRANK R. (Rev.) Loyola College, 4501 North Charles St,
Baltimore MD 21210-2699 (F)
HARR, JAMES W. (Mr.) 180 Strawberry Lane, Centreville MD 21617
(EF)
HAYNES, ELIZABETH D. (Mrs.) 7418 Spring Village Dr., Apt. CS 422,
Springfield VA 22150-4931 (M)
HAZAN, PAUL 14528 Chesterfield Rd, Rockville MD 20853 (F)
HEANEY, JAMES B. 6 Olive Ct, Greenbelt MD 20770 (M)
HERBST, ROBERT L. (Mr.) 4109 Wynnwood Drive, Annandale VA
22003 (EF)
HEYER, W. RONALD (Dr.) MRC 162, PO Box 37012, Smithsonian
Institution, Washington DC 20013-7012 (F)
HIBBS, EUTHYMIA D. (Dr.) 7302 Durbin Terrace, Bethesda MD 20817
(M)
HIETALA, RONALD (Dr.) 6351 Waterway Drive, Falls Church VA
22044-1322 (M)
HILL II, RICHARD E. (Mr.) 712 Mapleton Rd,
Rockville MD 20850 (M)
HOFFELD, J. TERRELL (Dr.) 11307 Ashley Drive, Rockville MD
20852-2403 (F)
HOLLAND, PH.D., MARK A. 201 Oakdale Rd., Salisbury MD 21801
(M)
HOLLINSHEAD, ARIEL (Dr.) 23465 Harbor View Rd. #622, Punta
Gorda FL 33980-2162 (EF)
HONIG, JOHN G. (Dr.) 7701 Glenmore Spring Way, Bethesda MD
20817 (EF)
HOROWITZ, EMANUEL (Dr.) Apt 618, 3100 N. Leisure World Blvd,
Silver Spring MD 20906 (EF)
HOU, WANQUI 244 E. Pearson St. Apt 1814, Chicago IL 60611 (M)
HOWARD, SETHANNE (Dr.) 5526 Green Dory Lane, Columbia MD
21044 (LF)
HOWARD-PEEBLES, PATRICIA (Dr.) 323 Wrangler Dr., Fairview TX
75069 (EF)
HULSE, RUSSELL (Dr.) 13 Harvest Dr., Plainsboro NJ 08536 (LF)
HURDLE, BURTON G. (Dr.) 3440 south Jefferson St, Falls Church VA
22041 (F)
IKOSSI, KIKI (Dr.) 6275 Gentle Ln, Alexandria VA 22310 (F)
INGRAM, C. DENISE (Dr.) 910 M St. NW #409, Washington DC 20001
(M)
JACOX, MARILYN E. (Dr.) 10203 Kindly Court, Montgomery Village
MD 20886-3946 (F)
JANUSZEWSKI, JOSEPH (Mr.) MSC 5607, 12 South Dr, Bethesda MD
20892 (M)
JARRELL, H. JUDITH (Dr.) 9617 Alta Vista Ter., Bethesda MD 20814
(F)
JENSEN, ARTHUR S. (Dr.) Apt. 1104, 8820 Walther Blvd, Parkville MD
21234-9022 (LF)
JOHNSON, EDGAR M. (Dr.) 1384 Mission San Carlos Drive, Amelia
Island FL 32034 (LF)
JOHNSON, GEORGE P. (Dr.) 3614 34th Street, N.W., Washington DC
20008 (EF)
JOHNSON, JEAN M. (Dr.) 3614 34th Street, N.W., Washington DC
20008 (EF)
JOHNSON, PHYLLIS T. (Dr.) 833 Cape Drive, Friday Harbor WA 98250
(EF)
JONG, SHUNG-CHANG (Dr.) 8892 Whitechurch Ct, Bristow VA 20136
(LF)
JORDANA, ROMAN DE VICENTE (Dr.) Batalla De Garellano, 15,
Aravaca, 28023, Madrid , Spain (EF)
KADTKE, JAMES (Dr.) Apt. 824, 1701 16th St. NW, Washington DC
20009-3131 (M)
KAHN, ROBERT E. (Dr.) 909 Lynton Place, Mclean VA 22102 (F)
KAPETANAKOS, C.A. (Dr.) 4431 MacArthur Blvd, Washington DC
20007 (EF)
KARAM, LISA (Dr.) 8105 Plum Creek Drive, Gaithersburg MD 20882-
4446 (F)
KATZ, ROBERT (Dr.) 16770 Sioux Lane, Gaithersburg MD 20878-2045
(F)
KAY, PEG (Ms.) 6111 Wooten Drive, Falls Church VA 22044 (LF)
KEEFER, LARRY (Dr.) 7016 River Road, Bethesda MD 20817 (F)
KEISER, BERNHARD E. (Dr.) 2046 Carrhill Road, Vienna VA 22181-
2917 (LF)
KENNEDY, WILLIAM G. (Dr.) 9812 Ceralene Drive, Fairfax VA 22032-
1734 (M)
KLINGSBERG, CYRUS (Dr.) 1318 Deerfield Drive, State College PA
16803 (EF)
KLOPFENSTEIN, REX C. (Mr.) 4224 Worcester Dr., Fairfax VA 22032-
1140 (LF)
KRUGER, JEROME (Dr.) 1801 E. Jefferson St. Apt 241, Rockville MD
20852 (EF)
LACAMPAGNE, CAROLE (Dr.) 4530 Connecticut Ave, Washington DC
20008 (F)
LANHAM, CLIFFORD E. (Mr.) P.O. Box 2303, Kensington MD 20891
(F)
LAWSON, ROGER H. (Dr.) 10613 Steamboat Landing, Columbia MD
21044 (EF)
LEIBOWITZ, LAWRENCE M. (Dr.) 3903 Laro Court, Fairfax VA 22031
(LF)
LEMKIN, PETER (Dr.) 148 Keeneland Circle, North Potomac MD 20878
(EM)
LESHUK, RICHARD (Mr.) 9004 Paddock Lane, Potomac MD 20854
(M)
LEWIS, DAVID C. (Dr.) 27 Bolling Circle, Palmyra VA 22963 (F)
LEWIS, E. NEIL (Dr.) Malvern Instruments, Suite 300, 7221 Lee
Deforest Dr, Columbia MD 21046 (F)
LIANG, CHUNLEI (Dr.) MAE, 801 22nd Street NW, Washington DC
20052 , USA (M)
LIBELO, LOUIS F. (Dr.) 9413 Bulls Run Parkway, Bethesda MD 20817
(LF)
LINDGREN, CARL EDWIN (Dr.) IAPSR, 10431 HWY 51, Courtland
MS 38620 (M)
LING, LEE (Mr.) 1608 Belvoir Drive, Los Altos CA 94024 (EF)
LONDON, MARILYN (Ms.) 3520 Nimitz Rd, Kensington MD 20895 (F)
LOOMIS, TOM H. W. (Mr.) 11502 Allview Dr., Beltsville MD 20705
(EM)
LUTZ, ROBERT J. (Dr.) 17620 Shamrock Drive, Olney MD 20832 (EF)
LYON, HARRY B. (Mr.) 7722 Northdown Road, Alexandria VA 22308-
1329 (M)
LYONS, JOHN W. (Dr.) 7430 Woodville Road. Mt. Airy MD 21771
(EF)
MAFFUCCI, JACQUELINE (Dr.) 1619 Hancock Ave, Alexandria VA
22301 (M)
MALCOM, SHIRLEY M. (Dr.) 12901 Wexford Park, Clarksville MD
21029-1401 (F)
MANDERSCHEID, RONALD W. (Dr.) 10837 Admirals Way, Potomac
MD 20854-1232 (LF)
MARRETT, CORA (Dr.) Directorate for Education and Human
Resources, National Science Foundation, 4201 Wilson Boulevard,
Arlington VA 22230 (F)
MARTIN, WILLIAM F 9949 Elm Street, Lanham MD 20706-4711 (F)
MARVEL, KEVIN B. (Dr.) American Astronomical Society, Suite 400,
2000 Florida Ave NW, Washington DC 20009 (F)
MASON, JEFFREY (Dr.) Building #102, Room 2151, 1413 Research
Boulevard, Rockville MD 20850 (F)
MAZZUCHI, THOMAS A. (Dr.) Operations Research Dept., 4794
Catteric Ct, Fairfax VA 22032 (F)
MCNEELY, CONNIE L. (Dr.) School of Public Policy, George Mason
University, 3351 Fairfax Dr. Stop 3B1, Arlington VA 22201 (M)
MENZER, ROBERT E. (Dr.) 90 Highpoint Dr., Gulf Breeze FL 32561-
4014 (EF)
MESS, WALTER (Mr.) 1301 Seaton Ln, Falls Church VA 22046 (LM)
MESSINA, CARLA G. (Mrs.) 9800 Marquette Drive, Bethesda MD
20817 (F)
METAILIE, GEORGES C. (DR.) 18 Rue Liancourt, 75014 Paris ,
FRANCE (F)
MEYLAN, THOMAS (Dr.) 3550 Childress Terrace, Burtonsville MD
20866 (F)
MIELCZAREK, EUGENIE A. (Dr.) 3181 Readsborough Ct, Fairfax VA
22031-2625 (F)
MILLER, KENT L. (Dr.) 4721 Rodman Str. NW, Washington DC 20016-
3234 (M)
MILLER II, ROBERT D. (Dr.) The Catholic University of America,
10918 Dresden Drive, Beltsville MD 20705 (M)
MILLSTEIN, LARRY (Dr.) 4053 North 41st Street, McLean VA 22101-
5806 (M)
MIRIEL, VICTOR (Dr.) Salisbury University, Dept. of Biological
Sciences, 1101 Camden Ave, Salisbury MD 21801 (M)
MITTLEMAN, DON (Dr.) Apartment 909, 5200 Brittny Dr. S, St.
Petersburg FL 33715-1538 (EF)
MORGOUNOV, ALEXEY (Dr.) CIMMYT, P.K. 39, Emek, Ankara
06511 , Turkey (M)
MORRIS, JOSEPH (Mr) Mail Stop G940, The Mitre Corporation, 7515
Colshire Dr., McLean VA 22102 (M)
MORRIS, P.E., ALAN (Dr.) 4550 N. Park Ave. #104, Chevy Chase MD
20815 (EF)
MOSKOWITZ, YOKI (Ms.) 223 N. Oakland St., Arlington VA 22203-
3512 (M)
MOUNTAIN, RAYMOND D. (Dr.) 5 Monument Court, Rockville MD
20850 (F)
MOXLEY, FREDERICK (Dr.) 64 Millhaven Court, Edgewater MD
21037 (M)
MUMMA, MICHAEL J. (Dr.) 210 Glen Oban Drive, Arnold MD 21012
(F)
MURDOCH, WALLACE P. (Dr.) 65 Magaw Avenue, Carlisle PA 17015
(EF)
NORRIS, KARL H. (Mr.) 11204 Montgomery Road, Beltsville MD
20705 (EF)
O’HARE, JOHN J. (Dr.) 108 Rutland Blvd, West Palm Beach FL 33405-
5057 (EF)
OHRINGER, LEE (Mr.) 5014 Rodman Road, Bethesda MD 20816 (EF)
OLSEN, KATHIE L. (Dr.) 1504 N. 22 Street, Arlington VA 22209 (M)
ORDWAY, FRED (Dr.) 5205 Elsmere Avenue, Bethesda MD 20814-
5732 (EF)
OSBORNE, CAROLYN (Dr.) 900 N. Stafford St., Arlington VA 22203
(M)
O'SHEA, PATRICK (Dr.) A. James Clark School of Engineering, 2405
A.V. Williams Bldg., University of Maryland, College Park MD
20742 (M)
PARASCANDOLA, JOHN (Dr.) 11503 Patapsco Dr, Rockville MD
20852 (M)
PARR, ALBERT C. (Dr.) 2656 SW Eastwood Avenue, Gresham OR
97080-9477 (F)
PATEL, D. G. (Dr.) 11403 Crownwood Lane, Rockville MD 20850 (F)
PAZ, ELVIRA L. (Dr.) 172 Cook Hill Road, Wallingford CT 06492
(LEF)
PICKHOLTZ, RAYMOND L. (Dr.) 3613 Glenbrook Road, Fairfax VA
22031-3210 (EF)
POLAVARAPU, MURTY 10416 Hunter Ridge Dr., Oakton VA 22124
(LF)
PRIBRAM, KARL (Dr.) PO Box 679, Warrenton VA 20188 (EM)
PROCTOR, JOHN H. (Dr.) 102 Moray Firth, Ford's Colony,
Williamsburg VA 23188 (LF)
PRZYTYCKI, JOZEF M. (Prof) 10005 Broad St, Bethesda MD 20814
(F)
PYKE, JR, THOMAS N. (Mr.) 4887 N. 35th Road, Arlington VA 22207
(F)
RADER, CHARLES A. (Mr.) 1101 Paca Drive, Edgewater MD 21037
(EF)
RAMAKER, DAVID E. (Dr.) 6943 Essex Avenue, Springfield VA 22150
(F)
RAUSCH, ROBERT L. (Dr.) 737 Ferncliff Ave NE, Bainbridge Island
WA 98110 (F)
RAVITSKY, CHARLES (Mr.) 37129 Village 37, Camarillo CA 93012
(EF)
READER, JOSEPH (Dr.) National Institute of Standards and Technology,
100 Bureau Drive, MS 8422, Gaithersburg MD 20899-8422 (F)
REDISH, EDWARD F. (Prof) 6820 Winterberry Lane, Bethesda MD
20817 (F)
REINER, ALVIN (Mr.) 11243 Bybee Street, Silver Spring MD 20902
(EF)
REISCHAUER, ROBERT D. (Dr.) 5509 Mohican Rd., Bethesda MD
20816 (F)
RENAUD, PHILIP (Capt.) Living Oceans Foundation, 8181 Professional
Place Suite 215, Landover MD 20785 (M)
RHYNE, JAMES J. (Dr.) 1830 Corona Ave., Los Alamos NM 87544-
5767 (F)
RICKER, RICHARD (Dr.) 12809 Talley Ln, Darnestown MD 20878-
6108 (F)
RIDGELL, MARY P.O. Box 133, 48073 Mattapany Road, St. Mary’s City
MD 20686-0133 (EM)
ROBERTS, SUSAN (Dr.) Ocean Studies Board, Keck 752, National
Research Council, 500 Fifth Street, NW, Washington DC 20001
(F)
ROSE, WILLIAM K. (Dr.) 10916 Picasso Lane, Potomac MD 20854 (F)
ROSENBLATT, JOAN R. (Dr.) 701 King Farm Blvd, Apt 630, Rockville
MD 20850 (EF)
SAENZ, ALBERT W. (Dr.) 6338 Olde Towne Court, Alexandria VA
22307-12227 (F)
SANDERS, JAY (Dr.) 7850 Westmont Lane, McLean VA 22102 (F)
SAUBERMAN, P.E., HARRY R (Mr.) 8810 Sandy Ridge Ct., Fairfax VA
22031 ,USA (M)
SAVILLE, JR, THORNDIKE (Mr.) 5601 Albia Road, Bethesda MD
20816-3304 (LF)
SCHINDLER, ALBERT I. (Dr.) 6615 Sulky Lane, Rockville MD 20852
(EF)
SCHLOSSBERG, PETER 3114 Worthington Cir, Falls Church VA
22044-2631 (M)
SCHMEIDLER, NEAL F. (Mr.) Omni Engr & Technology, Inc, STE 900,
8200 Greensboro Dr, McLean VA 22102 (F)
SCHNEPFE, MARIAN M. (Dr.) Potomac Towers, Apt. 640, 2001 N.
Adams Street, Arlington VA 22201 (EF)
SCHROFFEL, STEPHEN A. 1860 Stratford Park PI #403, Reston VA
20190-3368 (F)
SEBRECHTS, MARC M. (Dr.) 7014 Exeter Road, Bethesda MD 20814
(F)
SEVERINSKY, ALEX J. (Dr.) 4707 Foxhall Cres NW, Washington DC
20007-1064 (EM)
SHAFRIN, ELAINE G. (Mrs.) 4850 Connecticut Ave NW Apt 818,
Washington DC 20008 (EF)
SHAW, JINESH (Mr.) 1111 Arlington Blvd, Arlington VA 22209 (M)
SHETLER, STANWYN G. (Dr.) 142 E Meadowland Ln, Sterling VA
20164-1144 (EF)
SHRIER, STEFAN (Dr.) PO Box 320070, Alexandria VA 22320-4070
(EF)
SHROPSHIRE, JR, W. (Dr.) Apt. 426, 300 Westminster Canterbury Dr.,
Winchester VA 22603 (LF)
SHUGART, ERIKA (Dr.) Marian Koshland Science Museum, 500 5th
Street NW, Washington DC 20001 (M)
SILVER, DAVID M. (Dr.) Applied Physics Laboratory, 11100 Johns
Hopkins Road, Laurel MD 20723-6099 (M)
SMITH, REGINALD C. (Mr.) 7731 Tauxemont Road, Alexandria VA
22308 (M)
SMITH, THOMAS E. (Dr.) 3121 Brooklawn Terrace, Chevy Chase MD
20815-3937 (LF)
SODERBERG, DAVID L. (Mr.) 403 West Side Dr. Apt. 102,
Gaithersburg MD 20878 (M)
SOLAND, RICHARD M. (Dr.) 3426 Mansfield Road, Falls Church VA
22041-1427 (LF)
SPANO, MARK (Dr.) 9105 E. Hackamore Dr., Scottsdale AZ 85255 (F)
SPARGO, WILLIAM J. (Dr.) 9610 Cedar Lane, Bethesda MD 20814 (F)
SPILHAUS, JR, A.F. (Dr.) 10900 Picasso Lane, Potomac MD 20854
(EM)
STARAI, THOMAS (Mr.) 11803 Breton Ct. 21, Reston VA 20191-3203
(M)
STERN, KURT H. (Dr.) 103 Grant Avenue, Takoma Park MD 20912-
4328 (EF)
STIFF, LOUIS J. (Dr.) 332 N St., SW., Washington DC 20024-2904 (EF)
STOMBLER, ROBIN (Ms.) Auburn Health Strategies, Suite D, 4622 S.
28th Rd., Arlington VA 22206-1130 (M)
STONE, JAMES L 405 Tearose Pl. SW, Leesburg VA 20175 (M)
STRAUSS, SIMON W. (Dr.) 4506 Cedell Place, Temple Hills MD 20748
(LF)
SUBRAMANIAN, ANAND (Dr.) 2571 Sutters Mill Dr., Herndon VA
20171 (M)
SUCHER, JOSEPH (Dr.) Apt. 421, 3116 Gracefield Rd., Silver Spring
MD 20904 (F)
SYKES, ALAN O. (Dr.) 304 Mashie Drive, Vienna VA 22180 (EM)
SZTEIN, ESTER (Dr.) 8509 Cottage St., Vienna VA 22180 (M)
TABOR, HERBERT (Dr.) NIDDK, LBP, Bldg 8, Rm 223, National
Institutes of Health, Bethesda MD 20892-0830 (M)
TADMOR, EITAN (Dr.) 3202 Farmington Dr., Chevy Chase MD 20815
(F)
TEICH, ALBERT H. (Dr.) PO Box 309, Garrett Park MD 20896 (EF)
THOMPSON, F. CHRISTIAN (Dr.) 6611 Green Glen Ct, Alexandria VA
22315-5518 (LF)
TIDMAN, DEREK A. (Dr.) 6801 Benjamin St., McLean VA 22101-1576
(M)
TIMASHEV, SVIATOSLAV A. (Mr.) 3306 Potterton Dr., Falls Church
VA 22044-1603 (F)
TOUWAIDE, ALAIN Department of Botany - MRC 166, National
Museum of Natural History, PO Box 37012, Washington DC
20013-7012 (LF)
TOWNSEND, LEWIS R. (Dr.) 8906 Liberty Lane, Potomac MD 20854
(M)
TOWNSEND, MARJORIE R. (Mrs.) 3529 Tilden Street, NW,
Washington DC 20008-3194 (LF)
TRAN, NICK (Dr.) Suite 300, 6363 Walker Lane, Alexandria VA 22310
(M)
TROXLER, G.W. (Dr.) PO Box 1144, Chincoteague VA 23336-9144 (F)
TURGEON, DONNA (Dr.) 8701 Running Fox Ct., Fairfax Station VA
22039 (M)
TYLER, PAUL E. (Dr.) 1023 Rocky Point Ct. N.E., Albuquerque NM
87123-1944 (EF)
UBELAKER, DOUGLAS H. (Dr.) Dept. of Anthropology, National
Museum of Natural History, Smithsonian Institution, Washington
DC 20560-0112 (F)
UHLANER, J.E. (Dr.) 5 Maritime Drive, Corona Del Mar CA 92625
(EF)
UMPLEBY, STUART (Professor) The George Washington University,
2033 K St NW, S. 230, Washington DC 20052 (F)
VAISHNAV, MARIANNE P. (Ms.) P.O. Box 2129, Gaithersburg MD
20879 (LF)
VAN TUYL, ANDREW (Dr.) 3618 Littledale Road, Apt. 203, Kensington
MD 20895-3434 (EF)
VANE III, RUSSELL RICHARDSON (Dr.) 2102 Capstone Cir, Herndon
VA 20170 (M)
VARADI, PETER F. (Dr.) Apartment 1606W, 4620 North Park Avenue,
Chevy Chase MD 20815-7507 (EF)
VAVRICK, DANIEL J. (Dr.) 10314 Kupperton Court, Fredericksburg VA
22408 (F)
VIZAS, CHRISTOPHER (Dr.) 504 East Capitol Street, NE, Washington
DC 20003 (M)
WALDMANN, THOMAS A. (Dr.) 3910 Rickover Road, Silver Spring
MD 20902 (F)
WALLER, JOHN D. (Dr.) 5943 Kelley Court, Alexandria VA 22312-
3032 (M)
WAYNANT, RONALD W. (Dr.) 6525 Limerick Court, Clarksville MD
21029 (F)
WEBB, RALPH E. (Dr.) 21-P Ridge Road, Greenbelt MD 20770 (F)
WEGMAN, EDWARD J. (Dr.) GMU Center Computer Statistics, 368
Research Bldg. Stop 6A2, 4400 University Drive, Fairfax VA
22030-4444 (LF)
WEIL, TIMOTHY (Mr.) SECURITYFEEDS, PO Box 18385, Denver CO
80218 (M)
WEISS, ARMAND B. (Dr.) 6516 Truman Lane, Falls Church VA 22043
(LF)
WERGIN, WILLIAM P. (Dr.) 1 Arch Place #322, Gaithersburg MD
20878 (EF)
WIESE, WOLFGANG L. (Dr.) 8229 Stone Trail Drive, Bethesda MD
20817 (EF)
WILLIAMS, CARL (Dr.) 2272 Dunster Lane, Potomac MD 20854 (F)
WILLIAMS, E. EUGENE (Dr.) Dept. of Biological Sciences, Salisbury
University, 1101 Camden Ave, Salisbury MD 21801 (M)
WITH, CATHERINE PO Box 6481, Silver Spring MD 20916 (M)
WITHERSPOON, F. DOUGLAS ASTI, 11316 Smoke Rise Ct., Fairfax
Station VA 22039 (M)
WOOD, H. JOHN (Dr.) 15806 Pinecroft Lane, Bowie MD 20716 (M)
WOOTEN, RUSSELL (Mr.) 42508 DeSoto Terrace, Brambleton VA
20148, USA (M)
WULF, WILLIAM A. (Dr.) Quill Spring, 3897 Free Union Road,
Charlottesville VA 22901 (F)
ZELKOWITZ, MARVIN (Dr.) 10058 Cotton Mill Lane, Columbia MD
21046 (M)
AFFILIATED INSTITUTIONS
The National Institute For Standards and Technology
Meadowlark Botanical Gardens
The John W. Kluge Center of the Library of Congress
Potomac Overlook Regional Park
Koshland Science Museum
American Registry of Pathology
Living Oceans Foundation
DELEGATES TO THE WASHINGTON ACADEMY OF SCIENCES
REPRESENTING AFFILIATED SCIENTIFIC SOCIETIES