STEPHEN E. SCHWARTZ
This page was last updated 2022-05-15.
Background image: High resolution sky and cloud photograph looking vertically upward, at Department of Energy Atmospheric Radiation Measurement site in north central Oklahoma, July 31, 2015, 1635 UTC (local sun time = UTC – 6.5 h; 16:35 = 10:05 sun time). See High-Resolution Photography of Clouds from the Surface: Retrieval of Optical Depth of Thin Clouds down to Centimeter Scales.
Publications and Presentations
Recent Popular Lectures
Research Highlights on the Web
In the News
Education and Honors
Atmospheric Sciences Division
Brookhaven National Laboratory
Upton, NY 11973
Skype: stepheneschwartz (by pre-arrangement)
(Like any web page, this page is a work in progress. As time permits I am updating the page to keep up with developments in the field and my evolving research interests. Please stay tuned.)
For much of my career my research has focused on the chemistry of Atmospheric Energy-Related pollutants (AER pollutants). The principal substances of concern have been sulfur and nitrogen oxides emitted into the troposphere as byproducts of fossil fuel combustion, and their oxidation products, i.e., sulfuric acid and nitric acid and the salts of these species. These substances are of concern from the perspective of human health, acid deposition, visibility reduction, and climate change. More recently my research interest has turned increasingly to understanding the response of Earth’s climate system to the increase of carbon dioxide (CO2) and other so-called forcing agents, substances which change the radiation budget of the planet, specifically including also the influences of atmospheric aerosols (submicroscopic particles suspended in air).
As these materials are introduced into the atmosphere largely in association with energy-related activities, the environmental consequences of these emissions are of immediate concern to the United States Department of Energy, and much of the support for my research comes from the Climate and Environmental Sciences Division (CESD) within the Office of Biological and Environmental Research within that Department. Within CESD our research has been supported mainly by the Atmospheric Science Program (ASP) and the Atmospheric Radiation Measurement Program (ARM); in 2009 these two programs were merged into the Atmospheric System Research (ASR) Program.
From 2004 to 2009 I served as chief scientist of the Department of Energy’s Atmospheric Science Program. Much of the research of that program was (and in ASR continues to be) focused on the influences of aerosols on atmospheric radiation, clouds, and climate. An overview of the issues pertaining to the climate influences of atmospheric aerosols and approaches to understanding the processes governing the life cycle of these aerosols and representing them in models is presented in a paper published in The Bulletin of the American Meteorological Society in 2007. Research in the ARM Program focused on (and in ASR continues to focus on) understanding atmospheric radiation and the influences on this radiation, especially the influences of clouds. A description of the ARM program and its measurement approaches is given in a paper published in The Bulletin of the American Meteorological Society in 1994.
The following narrative provides a rather general introduction to our recent research dealing with energy related emissions and their environmental consequences, focusing mainly on influences on climate and on climate system response, and places this work in the context of the larger climate change issue. Our research is represented in our publications. The links below are to the citations of the papers on my publications page.
I welcome inquiries of interest from any and all. Much of our work is conducted in collaboration with others at their institutions or as visiting scientists at Brookhaven National Laboratory. I particularly encourage inquiries from students; you are our future.
Earth’s Energy Budget and Perturbations on this Budget
At present CO2 is introduced into the atmosphere largely as a byproduct of fossil fuel combustion, although deforestation, which had initially been the dominant source, continues to make a substantial contribution. As shown in Figure 1, the amount of carbon dioxide in Earth’s atmosphere has increased from its value in the preindustrial era (prior to 1750) of about 278 parts per million (ppm) to its present value of about 400 ppm. Carbon dioxide is long lived in the atmosphere (average residence time thought to be about a century). So it is fair to say that we get the benefit of burning fossil fuels for energy today — to heat, cool, and light our homes, schools, and workplaces; to produce the food we eat and the goods we use; to move goods and people from here to there — but we leave the CO2 in the atmosphere for our grandchildren and their grandchildren. To my thinking, this situation makes it imperative to understand the consequences of this excess CO2, to provide input to informed policymaking.
Figure 1. Mixing ratios and radiative forcings of major heat-trapping gases (“greenhouse gases”) over the past 10,000 years. Mixing ratios, by mole, relative to dry air, are in parts per million or billion, as indicated. The unit of forcing, and of energy fluxes in geophysics generally, is the watt per square meter. Colors denote different ice cores; red lines denote instrumental atmospheric measurements. The figure is modified from the 2007 report of the Intergovernmental Panel on Climate Change, Working Group I.
Of major concern is enhancement of the so-called “greenhouse effect” by the additional CO2 present in the atmosphere. Carbon dioxide, water vapor, other polyatomic molecules in the atmosphere, and also clouds, absorb thermal infrared radiation emitted from the Earth surface and re-radiate some of this energy back toward the surface. This phenomenon is responsible for the temperate climate of the planet; without it the average temperature of the planet would be about -19 °C (-2 °F) instead of the actual average temperature, about 15 °C (59 °F). This infrared heating of the planet, which is additive to the heating by absorbed solar radiation, is commonly referred to as the “greenhouse effect,” so called (but incorrectly so) because of the supposed mechanism of warming by agricultural greenhouses. This situation is illustrated in Figure 2; it should be stressed that the numbers given are annual and global averages of quantities that vary substantially in space and time. A key strength of this figure is that the numbers representing the several fluxes are based on measurements, from space (by satellite) and at the Earth surface. The various flux quantities are quite well established, but uncertainties of a few watts per square meter remain. Improving knowledge of these fluxes, their spatial and temporal variability, and the influences on them is the objective of much current climate research.
Figure 2. Earth’s radiation budget. Energy flows (flux densities, W m-2) comprising the energy budget of the planet are shown in orange for the shortwave (SW, solar) region of the spectrum and red for the longwave (LW, thermal infrared). Also shown (green) are transfer of energy from the surface to the atmosphere by sensible heat and latent heat (transfer of water vapor from the surface to the atmosphere, followed by condensation in the atmosphere). The quantity α denotes the planetary albedo, the fraction of shortwave radiation incident on the planet that is reflected back to space (i.e., not absorbed). Cloud radiative effect (CRE) denotes difference, clouds minus cloud-free. Also shown are absolute temperatures (K) corresponding to thermal infrared fluxes by the Stefan-Boltzmann radiation law for emissivity taken as unity. Quantities in boxes denote anthropogenic perturbations. Quantities in italics are derived directly from measurements, from space, by satellite-borne instruments, or, for heat uptake rate, from the increase in ocean heat content with time. Uncertainties are given as 1 sigma estimates. Modified from a 2012 paper written with Bjorn Stevens of the Max Planck Institute, Hamburg.
The energy fluxes that are represented in the figure are briefly described as follows. Of the solar radiation incident on the planet (about 340 W m-2, global annual average) about 30% is reflected; the balance (about 70% or 238 W m-2) is absorbed, some in the atmosphere, some at the surface. The absorption of this radiant energy heats the planet. The surface of the planet emits radiant energy in the infrared region of the spectrum; for an average global temperature of 288 K (temperature above absolute zero) corresponding to 15 °C or 59 °F, the emitted infrared flux (according to the Stefan-Boltzmann black-body radiation law of physics) is about 390 W m-2. This energy, in addition to energy introduced into the atmosphere by absorption of solar radiation, and from latent heat (water vapor that condenses in the atmosphere) and sensible heat at the surface, serves to warm the atmosphere. Much of the heat energy that is introduced into the atmosphere is absorbed and re-emitted back to the surface by polyatomic molecules in the atmosphere (296 W m-2) and by clouds (31 W m-2). This additional radiant energy incident on the surface of the planet further warms the planet to its observed average temperature, constituting the greenhouse effect. The balance of energy introduced into the atmosphere is emitted to space, with a flux that is nearly identical to the absorbed solar energy flux; the slight difference, which is not well known and is the object of much current research, is a consequence of delayed response of the climate system to perturbations in fluxes due mainly to changes in atmospheric composition. It is this flux that corresponds to a black body having a temperature of -19 °C or -2 °F. This is the temperature that the planet would exhibit in the absence of the greenhouse effect. So the greenhouse effect is responsible for the temperate climate of the planet.
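The black-body numbers quoted above can be checked directly from the Stefan-Boltzmann law. The short sketch below (Python; the inputs are just the global annual averages quoted in the text, and the calculation is an illustrative consistency check, not a climate model) reproduces the ~390 W m-2 surface emission at 288 K and the roughly -19 °C effective emission temperature corresponding to the ~238 W m-2 of absorbed solar radiation.

```python
# Consistency check of the black-body fluxes quoted in the text (global annual means).
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m-2 K-4

def emitted_flux(T):
    """Black-body flux (W m-2) emitted at absolute temperature T (K)."""
    return SIGMA * T**4

def emission_temperature(F):
    """Black-body temperature (K) corresponding to emitted flux F (W m-2)."""
    return (F / SIGMA) ** 0.25

incident = 340.0                     # global-mean incident solar flux, W m-2
albedo = 0.30                        # planetary albedo (fraction reflected)
absorbed = (1 - albedo) * incident   # about 238 W m-2 absorbed

print(emitted_flux(288.0))                        # about 390 W m-2: surface emission at 15 C
print(emission_temperature(absorbed) - 273.15)    # about -19 C: temperature without the greenhouse effect
```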
This situation serves as the context in which to understand the consequences of changes in atmospheric composition.
Also indicated in the figure is the increase in the amount of thermal infrared radiation, 3 W m-2, that is absorbed in the atmosphere and re-radiated to the surface as a consequence of the increase in the amounts of CO2, methane, nitrous oxide and chlorofluorocarbons in the atmosphere, the so-called longwave radiative forcing by these long-lived greenhouse, or heat-trapping, gases, LLGHGs. It is this slight perturbation in the radiation budget of the planet that is the cause of all the concern over global warming.
Some years ago I had the opportunity to brief a senior official at the Department of Energy who had recently assumed responsibility for the research programs that encompassed the climate research within the Department, and I used an earlier version of this figure to illustrate my briefing. When I pointed out the forcing (at that time) of 2.8 W m-2 and compared it with the natural greenhouse effect, which is about 300 W m-2, he observed, with some astonishment in his voice, “Why, that’s less than 1 percent!” What I then expected him to say was “Well, that can hardly be important,” but that’s not what he said. Rather what he said, and which reflects remarkable insight on his part, was “That’s a really tough scientific problem.”
That observation was, and remains, remarkably astute. The challenge facing the climate change research community is to determine the consequences of this less than 1% change to an accuracy of something like 25%. Another observer has aptly commented that it is this challenge that makes climate change science the “hard science.”
Radiative Forcing by Aerosols and its Implications
In fact, the situation is even tougher. Let’s suppose that forcing by the incremental greenhouse gases were the only influence on climate change over the past century or so. Then, assuming we knew the forcing pretty well, which we think we do, it would be possible to look for changes in the climate system that have occurred so far, for example the increase in global mean temperature, and empirically determine the sensitivity of the climate system, that is, the increase in global temperature normalized to the forcing. It is not quite that simple; there are time lags of the system that must be accounted for, but there are some relatively straightforward approaches to doing that. Once we had determined the sensitivity we would be able to state with some confidence the consequences of a given prospective future increment of atmospheric greenhouse gases, key information that is needed to inform policy on energy and climate.
Unfortunately quantitative description of climate change over the industrial era is made much more difficult because of the influences of atmospheric aerosols, small suspended particles in the atmosphere. Atmospheric aerosols affect Earth’s radiation budget directly, by scattering incoming shortwave (solar) radiation, and indirectly, by modifying the microphysical properties and reflectivity of clouds. Both of these phenomena reduce the amount of solar radiation that is absorbed by the climate system, and, as concentrations of aerosols have increased over the industrial era, have exerted a cooling influence on Earth’s climate system that is confidently believed to have offset some of the warming influence of the incremental greenhouse gases. To put a name to this aerosol influence on climate and to contrast it to the greenhouse effect, I called it the “whitehouse effect”, the concept being that aerosols are making our house (Earth) whiter, but the term hasn’t gained as much currency as “greenhouse effect.”
In order to quantitatively understand the response of the climate system to changes in atmospheric composition and other influences it is essential to quantify all of these influences and their time dependence over the industrial era. Much of the recent research in our group has focused on the radiative influences of anthropogenic aerosols on climate.
Atmospheric aerosols are present in the natural atmosphere but their concentrations have substantially increased because of human activities, mainly emissions of sulfur and nitrogen oxides and carbonaceous materials associated with fossil fuel combustion. We called attention to aerosol forcing in a 1992 paper led by the late Robert Charlson and published in Science that has been enormously influential, having received so far more than 4000 citations in the scientific literature. Since that time we and many others have presented a body of work that indicates that anthropogenic aerosols are exerting an influence on climate change that is comparable in magnitude to the anthropogenic greenhouse effect (but of opposite sign, that is, a cooling influence). However the magnitude of these aerosol influences remains quite uncertain in comparison to that of the longwave forcing by incremental greenhouse gases.
It should be emphasized that one should not take any comfort in the fact that the aerosols may be negating much of the greenhouse gas forcing; in fact, just the opposite. Because the atmospheric residence time of tropospheric aerosols is short (about a week) compared to the decades-to-centuries lifetimes of the greenhouse gases, to whatever extent greenhouse gas forcing is being offset by aerosol forcing, it is last week’s aerosols that are offsetting forcing by decades’ worth of greenhouse gases. Because the greenhouse gases are long-lived in the atmosphere, their atmospheric loadings tend to approximate the integral of emissions. Because the aerosols are short-lived, their loadings tend to be proportional to the emissions themselves. There is only one function that is proportional to its own integral, the exponential function. So only if society is to make a commitment to continued exponential growth of emissions can such an offset be maintained indefinitely. And of course exponential growth cannot be maintained forever. So if the cooling influence of aerosols is in fact offsetting much of the warming influence of anthropogenic greenhouse gases, then when society is unable to maintain this exponential growth, the climate could be in for a real and long-lasting shock.
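The only-the-exponential argument above can be illustrated numerically. The toy calculation below (a sketch with made-up emission trajectories and unit proportionality constants, chosen only to exhibit the behavior) compares the ratio of instantaneous emissions, standing in for aerosol loading, to cumulative emissions, standing in for greenhouse-gas loading; the ratio, and hence the fractional aerosol offset, levels off only when emissions grow exponentially.

```python
# Toy model: aerosol loading tracks current emissions E(t); greenhouse-gas
# loading tracks cumulative emissions (the running integral of E).
# Their ratio, and hence the fractional aerosol offset, is steady
# only when emissions grow exponentially.
import math

def offset_ratio(emissions):
    """Ratio of current to cumulative emissions at each annual step."""
    ratios, total = [], 0.0
    for e in emissions:
        total += e          # cumulative emissions (time step = 1 year)
        ratios.append(e / total)
    return ratios

years = range(1, 201)
r_exp = offset_ratio([math.exp(0.05 * t) for t in years])  # 5%/yr exponential growth
r_lin = offset_ratio([1.0 + 0.05 * t for t in years])      # linear growth

# Exponential case: the ratio levels off near 1 - exp(-0.05) (about 0.049)
# and stays there, so a constant fractional offset could be maintained.
# Linear case: the ratio keeps falling, so the offset steadily erodes.
print(r_exp[-51], r_exp[-1])   # nearly equal
print(r_lin[-51], r_lin[-1])   # still declining
```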
Estimates from the IPCC (Intergovernmental Panel on Climate Change) Report Climate Change 2007 — The Physical Science Basis of the several contributions to radiative forcing over the industrial period are shown in Figure 3. Also shown (as “I-beams”) are the IPCC’s estimates of the uncertainties associated with each of these quantities. It is seen that the uncertainties of the several aerosol forcings substantially exceed those associated with the greenhouse gases and other forcings; with the 2007 report the IPCC for the first time provided an estimate for the first indirect forcing (cloud albedo enhancement), rather than giving only an indication of the range of possible values as in the 2001 report. Added to the figure (light blue bar) is an estimate of the total (direct plus first indirect) forcing, -1.2 W m-2, and the associated uncertainty range: -0.6 to -2.4 W m-2.
Figure 3. Radiative forcing of climate change over the industrial period. Global average radiative forcing (RF) estimates and uncertainty ranges (5-95% confidence interval) in 2005, relative to the preindustrial climate, for anthropogenic carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), halocarbons (mainly chlorofluorocarbons CFCs) and aerosols and for other important identified agents and mechanisms, together with the typical geographical extent (spatial scale) of the forcing and the assessed level of scientific understanding (LOSU). Forcings are expressed in units of watts per square meter, W m-2. The total anthropogenic radiative forcing and its associated uncertainty are also shown. The figure is modified from IPCC WG I, AR4, by addition of a bar for total aerosol forcing (light blue) representing the sum of aerosol direct and first indirect forcings, and associated uncertainty.
Importantly, also for the first time in the 2007 report the IPCC Working Group provided an estimate of the total forcing over the industrial period and of the associated uncertainty, based on the assumption, which is inherent in the forcing-response hypothesis, of additivity of forcings, as we had previously advocated in a 1996 paper in Science. It is seen that the uncertainty range in the estimated total forcing is quite large. This uncertainty range is due almost entirely to the uncertainty in aerosol forcing; if the aerosol forcing is small (negative) the total forcing is large, 2.4 W m-2, whereas if the aerosol forcing is large (negative) this negative forcing is offsetting a major fraction of the positive forcing (mainly greenhouse gas forcing) and the total forcing is at the small end of the uncertainty range, 0.6 W m-2. The central 90% confidence limits of the estimated forcing differ by a factor of 4.
Response of the Climate System to Perturbations
Why is it essential to reduce uncertainty in radiative forcing over the industrial period? To my thinking the primary reason is to improve understanding of climate change over this period, with the implicit assumption that improved understanding of past climate change will lend confidence to projections of future climate change that would result from a given prospective set of future forcings. Here attention focuses largely on the amount by which global mean surface temperature, GMST, would change in response to a given forcing, under the assumption that changes in other climatic variables would scale with changes in GMST. This assumption is largely borne out in climate model calculations. Another major assumption is that GMST would scale linearly with the applied forcing. This assumption is also borne out to a great extent in climate model studies. This linearity assumption leads to the concept of climate sensitivity, the proportionality between change in GMST and forcing. Knowledge of the climate forcing over the industrial period is thus essential to empirical determination of Earth’s climate sensitivity, from the observed increase in GMST over the instrumental record, together with forcing over the same period, or alternatively, as input to calculations with global climate models, GCMs, necessary to evaluate the performance of these models over the period of instrumental record. A 2004 paper assessed the accuracy in aerosol forcing required to empirically determine the climate sensitivity to a given desired uncertainty.
A consequence of uncertainty in forcing over the instrumental record as input to climate model calculations is seen in the comparison of modeled and observed temperature change over the twentieth century, as reported in the 2007 IPCC Assessment Report. Figure 4, modified from a figure that originally appeared in the report, compares observed changes in global mean surface temperature with results simulated by 14 different climate models using natural and anthropogenic forcings. The modeled change in global mean surface temperature exhibits a spread of less than a factor of 2, well less than the factor of 4 spread in the estimates of radiative forcing over the industrial period discussed above. In a 2007 paper in Nature Reports on Climate Change we asked how it could be that the spread in modeled temperature change was well less than that in the forcing that was necessary as input to the model, speculating that one possible reason might be anticorrelation between the sensitivities of the several models used in the study and the forcings employed in the individual studies. Later that year in an article published in Geophysical Research Letters Jeffrey Kiehl showed this to be the case, at least for a subset of the models. This situation suggests that much less confidence can be placed in the accuracy with which climate models are able to represent climate change than might be inferred from the agreement between modeled and observed temperature change over the twentieth century presented in the IPCC report.
Figure 4. Observed and modeled change in global mean surface temperature over the twentieth century. Decadal averages of observations are shown for the period 1906-2005 (black) plotted against the center of the decade and relative to the corresponding average for 1901-1950. The blue band shows the 5-95% range for 19 simulations from five climate models using only the natural forcings due to solar activity and volcanoes. The rose-colored band shows the 5-95% range for 58 simulations from 14 climate models using both natural and anthropogenic forcings. Added to the figure are I-beams denoting uncertainties. The range of the modeled increase in global mean surface temperature over the twentieth century (red) – ~0.5 to 1.0 K, or a factor of 2 – is well less than that of the IPCC estimate for the global mean forcing, a factor of 4 (green). Modified from the IPCC 2007 Assessment report; from our 2007 paper in Nature Reports on Climate Change.
Earth’s Climate Sensitivity and its Implications
I would argue that knowledge of Earth’s climate sensitivity is of enormous importance to the peoples of the world and to planning the means by which to meet collective future energy requirements. At present over 85% of primary energy derives from combustion of fossil fuels, which results in emission of CO2 into the atmosphere, where it accumulates, with a lifetime of the excess CO2 of something like 100 years. This excess CO2 will thus continue to exert a radiative forcing over this time period, and as further CO2 is emitted in the future, the mixing ratio of atmospheric CO2 and the resultant radiative forcing will continue to increase. The key question therefore is what will be the resultant changes in Earth’s climate, and the single most important index of such change is the increase in global mean surface temperature. Hence the need to determine Earth’s climate sensitivity.
The sensitivity of Earth’s climate to perturbation in radiative flux has been of interest to climate scientists for quite some time. Historically this sensitivity has been expressed as the equilibrium increase in global mean surface temperature GMST that would result from a doubling of the mixing ratio of CO2 in the atmosphere, denoted ΔT2×. (Like many terms used in the climate change business, the term “equilibrium” as used here is a misnomer; “equilibrium” refers to a situation where all fluxes in a system are balanced by equal and opposite fluxes, the so-called “detailed balance” condition. A better term would be “steady state”.) This measure of climate sensitivity has been commonly used as a benchmark for examining sensitivity in climate models. However, absent a major decrease in emissions of CO2, mainly from fossil fuel combustion, a doubling of atmospheric CO2 will occur well before the end of the present century. For this reason Earth’s climate sensitivity takes on a significance to the peoples of the world that goes well beyond a measure of the sensitivity of climate models.
A history of estimates of Earth’s “equilibrium” climate sensitivity is shown in Figure 5. Here climate sensitivity is expressed on the right axis as the change in GMST that would result from a change in radiative flux of 1 W m-2, and also, on the left axis, as ΔT2×; the conversion from K/(W m-2) to ΔT2× assumes a forcing of doubled CO2 F2× equal to 3.7 W m-2. To my thinking the unit K/(W m-2) for climate sensitivity is preferable, as it does not rely on any particular value of F2×, which quantity differs substantially even among current climate models. However the measure of climate sensitivity as the response to doubled CO2, with unit K, sees widespread use and is unlikely soon to be eradicated.
Figure 5. Estimates of Earth’s equilibrium climate sensitivity. Estimates are expressed as the increase in global mean surface temperature GMST that would result from a doubling of the amount of CO2 in the atmosphere ΔT2×, left axis, or as the change in GMST that would result from a change in radiative flux of 1 W m-2, right axis; the conversion from ΔT2× to K/(W m-2) assumes a forcing of doubled CO2 F2× equal to 3.7 W m-2. The point denoted “Stefan-Boltzmann” is the sensitivity that would apply to a black body radiator at Earth’s mean surface temperature, 288 K. The point denoted “Arrhenius” was based on a calculation that accounted for water vapor and snow-ice feedback as a function of latitude and season. The National Research Council “Charney Report” (1979) gave a best estimate ΔT2× of 3 K with uncertainty range ± 1.5 K. The remaining estimates are from successive assessment reports of the Intergovernmental Panel on Climate Change (IPCC), the organization which, together with Al Gore, was awarded the 2007 Nobel Peace Prize; the first three reports gave only an estimated range; the 2007 report provided a best estimate ΔT2× of 3 K with a slightly decreased uncertainty range, 2 – 4.5 K; the 2013 report reverted to the earlier uncertainty range and removed the best estimate. The 2021 report reinserted the best estimate and substantially decreased the uncertainty range. The notations “1 sigma”, “Likely”, and “> 66%” denote the likelihood that the actual climate sensitivity lies within the indicated uncertainty range. Figure is updated from one published in a 2008 paper in Energy and Environmental Science, a publication of the Royal Society of Chemistry in the United Kingdom.
The estimate of climate sensitivity denoted “Stefan-Boltzmann,” which is obtained by application of Stefan’s law for the temperature dependence of the radiative flux of a black body, gives the sensitivity for a black body radiator at Earth’s GMST of 288 K (15 °C; 59 °F). To my knowledge Stefan did not actually calculate Earth’s climate sensitivity, although he might have used his radiation law to do so; he did use his formula to obtain a very accurate determination of the temperature of the Sun. The next estimate shown is that of Arrhenius (the same Arrhenius who gave us the theory of ionic solutions and the activation energy of chemical kinetics), who made what we would now call a “spreadsheet” calculation (except that he did not have a personal computer with which to evaluate the entries) that took into account the feedback from water vapor and snow-ice albedo as a function of latitude and season. Any increase in sensitivity over that of a black body is due to positive feedbacks inherent in the model of the climate system (or for that matter in Earth’s actual climate system). Arrhenius thought that global warming from the increase in atmospheric CO2 due to fossil fuel combustion would be a benefit to the cold climate of Sweden.
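The black-body sensitivity follows from differentiating the Stefan-Boltzmann law F = σT⁴, giving dT/dF = 1/(4σT³). A minimal sketch (Python, evaluating at the 288 K surface temperature used in the figure, and converting to ΔT2× with the conventional F2× of 3.7 W m-2; the exact numbers plotted in the figure may differ slightly):

```python
# No-feedback ("Stefan-Boltzmann") climate sensitivity from F = sigma * T^4.
# Differentiating gives dT/dF = 1 / (4 * sigma * T^3).
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m-2 K-4
F2X = 3.7          # forcing of doubled CO2, W m-2 (conventional value)

def bb_sensitivity(T):
    """Sensitivity dT/dF, in K per (W m-2), for a black body at temperature T (K)."""
    return 1.0 / (4.0 * SIGMA * T**3)

lam = bb_sensitivity(288.0)   # at Earth's mean surface temperature
print(lam)          # about 0.18 K per (W m-2)
print(lam * F2X)    # about 0.7 K per CO2 doubling, far below the assessed ~3 K;
                    # the difference is due to positive feedbacks
```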
The several remaining estimates of climate sensitivity derive from major national or international assessments: a 1979 report by a panel of the National Academy of Sciences headed by Jule Charney, and the successive assessment reports of the Intergovernmental Panel on Climate Change, IPCC. The Charney report gave a best estimate of the climate sensitivity as ΔT2× = 3 ± 1.5 K. The first three IPCC reports declined to give a best estimate of Earth’s climate sensitivity but presented only a likely range for this quantity. The 2007 report again presented a best estimate, which coincided with that of the 1979 Charney report, and slightly narrowed the uncertainty range, to 2 – 4.5 K. The 2013 report reverted to the earlier uncertainty range and removed the best estimate.
In view of the increase in radiative forcing that may be expected over the present century due to increases in CO2 and other forcing agents, the magnitude of Earth’s climate sensitivity indicated in the above figure assumes a significance to human society that transcends mere scientific interest. The increase in GMST of 3 K, corresponding to the best current estimate of sensitivity, might be compared to the increase in GMST, thought to be about 4 K, between the last glacial maximum, which was characterized by a kilometer-thick ice sheet over much of central North America, and the present temperate era. (Long Island, where I live and work, is the terminal moraine of that ice sheet.) Not much of a stretch of the imagination is required to envision that an increase in GMST of 3 K (or 4.5 K if the sensitivity is at the high end of the uncertainty range) could result in the melting of the last remnant of the North American ice sheet, that is, the Greenland ice sheet. Such a melting would increase global sea level by 7 meters, with resultant loss of much inhabited land and property of enormous economic and cultural value. The social and political consequences of such a rise in sea level would be enormous.
The above considerations led us to write a paper published in the American Meteorological Society Journal of Climate with the title “Why Hasn’t Earth Warmed as Much as Expected?” The premise of the question is as follows: If climate sensitivity is at the IPCC best estimate value of 3 K for a doubling of CO2, for which the forcing F2× is 3.7 W m-2, and if the forcing by long-lived greenhouse gases (LLGHGs) in 2005 (the year for which we made the calculations) was 2.6 W m-2, then the incremental LLGHGs alone would result in an increase in GMST of (2.6/3.7) × 3 K = 2.1 K, well greater than the observed increase in GMST over the instrumental record, about 0.8 K. We posed this question to identify the possible reasons for this “warming discrepancy”. We recognized that the climate sensitivity as usually stated is a so-called “equilibrium sensitivity”, meaning that it refers to the temperature increase that would ultimately be attained once the system fully responded to the perturbation in forcing. From consideration of the planetary energy imbalance, which we inferred from the ocean heating rate, we were able to show that the lack of attainment of equilibrium accounted for only about 20% of the warming discrepancy. We were able to show that the warming discrepancy is due to some combination of offset by aerosol forcing and climate sensitivity being lower than the present best estimate. However the uncertainties in aerosol forcing are so large that even the situation of climate sensitivity being greater than the best estimate, together with large, negative aerosol forcing, cannot be ruled out.
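The arithmetic behind the “expected” warming in that question is simple enough to spell out. The sketch below just reproduces the numbers quoted above (2005 LLGHG forcing of 2.6 W m-2, F2× of 3.7 W m-2, best-estimate sensitivity ΔT2× of 3 K, and observed warming of about 0.8 K):

```python
# Expected equilibrium warming from 2005 LLGHG forcing vs. observed warming.
F_LLGHG = 2.6      # longwave forcing by long-lived greenhouse gases in 2005, W m-2
F2X = 3.7          # forcing of doubled CO2, W m-2
DT2X = 3.0         # IPCC best-estimate equilibrium sensitivity, K per doubling
OBSERVED = 0.8     # observed increase in GMST over the instrumental record, K

expected = (F_LLGHG / F2X) * DT2X   # about 2.1 K
discrepancy = expected - OBSERVED   # about 1.3 K to be explained by aerosol offset,
                                    # lower sensitivity, and/or lagged (disequilibrium) response
print(expected, discrepancy)
```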
In that paper we also examined some of the implications of this situation. We asked how much more CO2 can be introduced into the atmosphere without committing the planet to an increase in GMST above preindustrial greater than a given amount. We examined this question for several values of this maximum allowable temperature increase, but we focused on 2 K above preindustrial, a value that has been widely proposed as a threshold for dangerous anthropogenic interference in the climate system and that was identified in the 2009 Copenhagen Accord as a target maximum increase in GMST. In our examination of the allowable future emissions of equivalent CO2 (including the forcing by other LLGHGs expressed as an equivalent amount of CO2) we did not allow for any offset by anthropogenic aerosols, our reason being that if emissions of CO2 were greatly reduced, emissions of aerosols and aerosol precursor gases from fossil fuel combustion would be similarly reduced. We found that if the climate sensitivity ΔT2× is 3 K, emissions of CO2 would have to cease essentially immediately in order not to commit the planet to an increase in GMST greater than 2 K above preindustrial. If the climate sensitivity is at the low end of the IPCC “likely” range for this quantity, the allowable amount of additional CO2 emissions corresponds to 36 years at the present emissions rate; in contrast, if the climate sensitivity is at the high end of that range, the planet is already overcommitted to a 2 K temperature increase above preindustrial by about 33 years of emissions at the present rate. The long and short of it is that we (the scientific community and the larger public) do not know where on this spectrum of possibilities we sit.
In May, 2013 I presented a lab-wide lecture at Brookhaven National Laboratory with the title “Why has Earth NOT warmed as much as expected? And why is this so important?” A link to a video of the presentation is here and a link to the viewgraphs is here (Large File Warning: 56 Mbyte).
Alternative Empirical Approach to Determining Climate Sensitivity
Because of the large and thus far recalcitrant uncertainty in aerosol forcing, I have sought alternative approaches to determining climate sensitivity. Several years ago I introduced, in an article published in the Journal of Geophysical Research, an alternative empirical approach to determining Earth’s climate sensitivity by means of a single-compartment energy balance model. The basis of the approach is the recognition, within such a model, that the climate sensitivity S is related to the time constant τ with which Earth’s climate system responds to a perturbation and the effective heat capacity C of the climate system as S = τ/C. When I present this work in a lecture I observe that this is one equation in three unknowns!
This study was quite controversial and drew three published Comments as well as much discussion on the Web. It also drew considerable media attention. I believe much of the media attention resulted from the quite low climate sensitivity obtained in the study.
What I proceeded to do in that study was first to determine the effective heat capacity of the climate system as the ratio of the slopes with time t, over the instrumental record, of global heat content H (which is dominated by ocean heat content) and global mean surface temperature T: C = (dH/dt)/(dT/dt). I then determined the time constant of the climate system from the decorrelation time of fluctuations of global mean surface temperature, a relation that goes back to Einstein’s fluctuation-dissipation theorem. In my initial paper (Schwartz, 2007) I obtained a time constant of 5 ± 1 yr, which proved somewhat too low because, as Nicola Scafetta pointed out in a published Comment on my paper, an even more rapid initial decrease in autocorrelation confounds the analysis. My revised analysis, which obtains the time constant from the slope of a plot of the logarithm of the autocorrelation coefficient versus lag time, yields a time constant of 8.5 ± 2.5 yr. This time constant results in a climate sensitivity of 0.51 ± 0.26 K/(W m-2), corresponding to an equilibrium temperature increase for doubled CO2 of 1.9 ± 1.0 K, somewhat lower than the central estimate of the sensitivity given in the 2007 assessment report of the Intergovernmental Panel on Climate Change but consistent with it within the uncertainties of both estimates. The relatively short time constant of the climate system means that the departure of the current increase in GMST from that which would be expected if the system were at steady state is quite small.
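The recipe above, fitting the slope of the logarithm of the autocorrelation versus lag, can be sketched numerically. The script below applies it to a synthetic first-order (AR(1)) temperature series whose true time constant is set to 8.5 yr; the synthetic series, its length, and the heat capacity value are illustrative assumptions, not the data used in the papers.

```python
# Synthetic illustration of the time-constant estimate: fit the slope of
# ln(autocorrelation) versus lag for an AR(1) "temperature" series whose
# true time constant is assumed (for illustration) to be 8.5 yr.
import numpy as np

rng = np.random.default_rng(0)
tau_true = 8.5                    # yr, assumed time constant
n_years = 2000                    # length of synthetic record (illustrative)
r = np.exp(-1.0 / tau_true)      # lag-1 autocorrelation of the AR(1) process

T = np.zeros(n_years)
for i in range(1, n_years):
    T[i] = r * T[i - 1] + rng.standard_normal()

# Sample autocorrelation at lags 1..10 yr
T0 = T - T.mean()
lags = np.arange(1, 11)
acf = np.array([np.corrcoef(T0[:-k], T0[k:])[0, 1] for k in lags])

# Time constant from the slope of ln(acf) versus lag
slope = np.polyfit(lags, np.log(acf), 1)[0]
tau_est = -1.0 / slope            # yr; should land near 8.5

# Sensitivity S = tau / C, with an effective heat capacity of the
# magnitude obtained in the study (about 17 W yr m-2 K-1)
C = 17.0
S = tau_est / C                   # K / (W m-2)
print(f"tau = {tau_est:.1f} yr, S = {S:.2f} K/(W m-2)")
```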
A second line of criticism of the paper was that the single-compartment model was not a good representation of the actual climate system, which exhibits multiple time constants, indicative of multiple compartments. In my Reply I suggested that the system might be more accurately characterized as a two-compartment system, and I proposed as an analogy an electrical circuit with two resistors and two capacitors (2 RC circuit), Figure 6. I proposed that the time constant determined from the decorrelation time is the shorter of the system’s two time constants, the one pertinent to climate change on the multidecadal scale; the longer time constant, associated with the deep ocean, governs the much greater time required to reach steady state following a perturbation.
Figure 6. Isomorphism between resistance-capacitance circuit and two-compartment energy balance climate model. Differential equations on right can be solved to give time dependence for arbitrary applied time-dependent forcing (current). Dashed boxes enclose corresponding one-compartment systems. The figure is modified from the Reply to Comments on my 2007 paper and a 2012 paper that interprets the observed increase in GMST over the latter part of the twentieth century in terms of the two-compartment model.
I subsequently learned that such a two-compartment model had been proposed by Steve Schneider in 1981, again by Jonathan Gregory in 2000, and more recently by Isaac Held in 2010. (Mathematically this two-compartment climate model is isomorphic with the 2 RC electrical circuit.) In a paper in Surveys in Geophysics I further developed this model and evaluated climate sensitivity using the observed temperature trend over the twentieth century for several published time series of forcing. Not surprisingly, the sensitivities determined in this way depend strongly on the forcing data set employed. However, the several forcing data sets all yield two distinct time constants, one about 7 years (range 4 – 9 years) and one about 500 years. So this would seem to be a robust feature of our climate system, though of course a great oversimplification.
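For concreteness, a minimal numerical sketch of such a two-compartment energy balance model follows. The parameter values are my own illustrative choices, selected only to produce one short (years) and one long (centuries) time constant; they are not taken from any of the papers cited.

```python
# Illustrative two-compartment ("2 RC") energy balance model: an upper
# (mixed-layer) box exchanging heat with a deep-ocean box. All parameter
# values are assumptions chosen only to yield one short and one long
# time constant; they are not from the papers discussed in the text.
import numpy as np

C1, C2 = 10.0, 150.0   # W yr m-2 K-1: upper- and deep-ocean heat capacities
lam = 2.0              # W m-2 K-1: climate feedback parameter (1/sensitivity)
kappa = 0.7            # W m-2 K-1: coupling between the two boxes

def step_response(F=3.7, years=300, dt=0.01):
    """Surface temperature response to a constant (step) forcing F."""
    n = int(years / dt)
    T1 = T2 = 0.0
    out = np.empty(n)
    for i in range(n):
        dT1 = (F - lam * T1 - kappa * (T1 - T2)) / C1   # upper box
        dT2 = kappa * (T1 - T2) / C2                    # deep box
        T1 += dT1 * dt
        T2 += dT2 * dt
        out[i] = T1
    return out

T = step_response()
# The fast mode (roughly C1/(lam+kappa), a few years) brings T most of the
# way up; the deep ocean then drags T toward equilibrium (F/lam) over centuries.
print(f"T(20 yr)  = {T[int(20 / 0.01) - 1]:.2f} K")
print(f"T(300 yr) = {T[-1]:.2f} K; equilibrium = {3.7 / 2.0:.2f} K")
```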
A key finding of my 2012 paper was that the rate of increase of ocean heat content is to good accuracy proportional to the increase in GMST from about 1965 to the present time, the period for which accurate ocean heat content data are available. This finding leads to the expectation that the increase in GMST would be proportional to the forcing, an expectation that is borne out for most of the forcing data sets examined. I argue that the slope of such a relation, which I designate the “transient climate sensitivity,” is the quantity that is pertinent to climate change on the multidecadal time scale that is of interest for policy purposes. For the several forcing estimates that are available, the transient sensitivity that I determined ranges from 0.23 to 0.51 K/(W m-2), corresponding to a transient ΔT2× of 0.9 to 1.9 K.
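The idea that the transient sensitivity is the slope of a linear fit of GMST change against forcing can be illustrated with synthetic data. The forcing ramp, the noise level, and the assumed sensitivity below are invented for illustration; no published data set is used.

```python
# Synthetic illustration of "transient climate sensitivity" as the slope
# of a linear fit of GMST change against forcing. All inputs are invented.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2011)
F = 0.02 * (years - 1900)        # W m-2, idealized linear ramp in forcing
S_tr = 0.4                       # K/(W m-2), assumed transient sensitivity
dT = S_tr * F + 0.05 * rng.standard_normal(F.size)   # K, plus "weather" noise

# Least-squares fit recovers the assumed slope from the noisy series
slope, intercept = np.polyfit(F, dT, 1)
print(f"recovered transient sensitivity: {slope:.2f} K/(W m-2)")
print(f"implied transient dT2x:          {slope * 3.7:.1f} K")
```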
Research on Atmospheric Aerosols
Because uncertainties associated with aerosol forcing are the major source of uncertainty in climate forcing over the industrial period, it is essential in my opinion to focus on the aerosol forcing to advance quantitative understanding of anthropogenically induced climate change. Consequently much of our research is directed to developing such improved understanding.
Reducing the uncertainty in aerosol forcing will require a major effort both in characterizing the present distribution and properties of aerosols and in developing the understanding required to represent the processes controlling loading and properties of tropospheric aerosols in numerical models. Model-based descriptions of aerosol forcing need to be incorporated into climate models in order to represent this forcing not just for the present climate but also retrospectively over the industrial period and prospectively for various scenarios of future emissions. Much of our research is directed to developing and evaluating numerical models for representing the geographical distribution of loading of atmospheric aerosols. Our approach has been to use observationally derived meteorological data to drive our models, because the temporal and spatial variation in aerosol loading is governed to a great extent by meteorological variability; meaningful evaluation of the model by comparison with observations thus requires this approach. My colleague Carmen Benkovitz and I developed and extensively tested such a model for SO2 and sulfate aerosol for the Northern Hemisphere that rather successfully reproduces time series of sulfate concentrations measured at surface stations.
One of our continuing objectives is to represent the size dependence of atmospheric aerosols, not simply the mass concentration, as the optical and cloud nucleating properties of these aerosols depend strongly on particle size. This is a very challenging problem and is attracting much attention globally. One promising approach to representing the size distribution in a numerically tractable way is through the moments of the particle size distribution as pioneered by my colleague Robert McGraw, and we have published a number of papers examining this approach.
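As a sketch of the moment approach, the snippet below tracks a size distribution through its radial moments M_k = ∫ r^k n(r) dr, the quantities a moment-based model would carry. The lognormal form and its parameter values are illustrative assumptions, and this analytic closure is far simpler than the quadrature methods developed by McGraw.

```python
# Sketch of representing an aerosol size distribution by its radial
# moments M_k rather than by the full distribution. A lognormal form with
# invented parameters is assumed; real moment methods use more general
# closures (e.g. quadrature method of moments).
import math

N_tot = 1000.0     # cm-3, total number concentration (illustrative)
r_g = 0.05         # um, geometric mean radius (illustrative)
sigma_g = 1.8      # geometric standard deviation (illustrative)
ln_sg = math.log(sigma_g)

def moment(k):
    """k-th radial moment of the lognormal distribution (analytic form)."""
    return N_tot * r_g ** k * math.exp(0.5 * k ** 2 * ln_sg ** 2)

# Any three moments determine the three lognormal parameters; e.g. the
# geometric mean radius can be recovered from M0, M1, and M2:
M0, M1, M2 = moment(0), moment(1), moment(2)
r_g_recovered = (M1 / M0) ** 2 / math.sqrt(M2 / M0)
print(f"r_g recovered from moments: {r_g_recovered:.3f} um")   # 0.050
```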
Early on we recognized that it would be necessary to represent sea salt aerosol in our models, as this is a major component of the natural background aerosol in the absence of the anthropogenic perturbation. Hence we recognized the need to represent the size-dependent production of sea salt aerosol in models, which in turn led to a need to characterize this production as a function of wind speed and other controlling variables. This resulted in an extensive review, spearheaded by my physicist and oceanographer colleague Ernie Lewis, of the experimental literature on sea salt aerosol production and of the pertinent physics and physical chemistry that influence the size and other properties of sea salt aerosol particles. Sea salt aerosol is produced mainly through the bursting of bubbles of air entrained by breaking waves into the upper meter or so of the ocean, which injects small drops of sea water into the air. Our review resulted in a 400-page monograph published by the American Geophysical Union that, if nothing else, speaks to the complexity of the production process and to the large variability in production rate that is not yet well understood or characterized. So in our opinion there is much work to be done before even this relatively simple natural aerosol can be confidently represented in aerosol models and climate models. One consequence is uncertainty in the enhancement, by anthropogenic aerosols, of the number concentrations of cloud drops in marine stratus and stratocumulus, which in turn affects estimates of the aerosol indirect effect.
Tutorial papers on the greenhouse effect, Earth’s climate, and climate change
In 2018 I published a pair of papers in the American Journal of Physics on the greenhouse effect and climate change. The intent of the papers is to provide an introduction to the subject for physics students and for physicists not working in the area of climate. These papers have received an enormous number of downloads from the journal website (two of the top 20 downloads in 2019) and at ESSOAR, the AGU preprint server. Such interest in this work is very gratifying, and I would certainly welcome any feedback on it. The topics covered in the papers are as follows:
Paper 1: The Greenhouse Effect and Climate Change: Earth’s Natural Greenhouse Effect
— Overview of Earth’s radiation budget
—- Top-of-atmosphere budget
—- Global energy balance
—- Earth’s energy budget
—- Spatial and temporal variability
—- Spectral dependence
— Overview of Earth’s climate
— The greenhouse effect
—- Processes comprising the greenhouse effect
—- Greenhouse gases in Earth’s atmosphere
— Supplemental notes
—- The consequences of a global energy imbalance
—- Why are water vapor and carbon dioxide strongly infrared-active whereas nitrogen and oxygen are not? The roles of quantum mechanics
—- Correlations between climate properties and greenhouse gases over the glacial ice ages
Paper 2: The Greenhouse Effect and Climate Change: The Intensified Greenhouse Effect
— Radiative forcing of climate change
— The intensified greenhouse effect
—- Anthropogenic increments in greenhouse gases
—- Radiative forcing by incremental greenhouse gases
—- Adjustment times of forcing agents
— Climate system response to forcing; climate sensitivity
— Inferences and implications
— Supplemental notes
—- Calculation of radiative forcing by an incremental greenhouse gas
—- Can forcings from increases in GHGs be measured?
—- The budget and adjustment time of incremental atmospheric CO2
—- Treating global temperature response as linear in the perturbation
And much more!
The preceding overview has focused mainly on our research that deals with the climate change issue. Inevitably much has been left out. In my early years at Brookhaven I focused on the acid deposition issue, and we conducted laboratory studies, field measurements, modeling, and theoretical work on cloudwater composition and on coupled mass transport and gas-aqueous reactions of SO2 and NO2 in clouds. This work was also highly influential, both in the research community and, by means of an overview article in Science published in 1989, in informing public policy leading to the acid deposition provisions of the 1990 Clean Air Act Amendments. In 2017, on the fortieth anniversary of the founding of the Department of Energy, that paper was designated by the Department as one of 40 “Research Milestones” in the history of the Department.
An additional major component of our research has been directed to developing improved understanding and model-based representation of aerosol optical properties and radiative forcing. An important component of our work has been the use of surface- and satellite-based measurements to quantify influences of atmospheric aerosols on radiative fluxes and on cloud properties.
In all this research it has been my privilege to work with a number of extremely capable and talented colleagues at Brookhaven National Laboratory and elsewhere, and I apologize to them for giving short shrift to a goodly number of their contributions. I invite the interested reader to browse my publications page, even just by title, to gain a better appreciation of this research. And thank you for visiting my home page.
The DOE Office of Science 1999 Strategic Plan included a section on the climate influence of aerosols, highlighting our research. To download just this section (600 K pdf file) click here.
Modeling atmospheric sulfate on subhemispheric to hemispheric scale. Our group has developed (and continues to develop) a chemical transport model for atmospheric sulfate and precursor species. The output of this model is mixing ratios of sulfate as a function of location and time. The model is driven by observationally derived meteorological data (archived numerical weather prediction forecast model results) so the modeled sulfate may be compared with observations at specific locations and times.
Animation of the model results brings out features of the calculations and their temporal evolution that are not readily discernible from static images. Our 2001 paper pioneered publication of such animations in a peer-reviewed electronic journal.
Dynamical influences on the distribution and loading of SO2 and sulfate over North America, the North Atlantic and Europe in April 1987. Benkovitz C. M., Miller M. A., Schwartz S. E. and Kwon O-U. Geochem. Geophys. Geosyst. 2, Paper no. 2000GC000129 (2001). https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2000GC000129.
Modeling atmospheric sulfate on subhemispheric scale. This link connects to Quicktime (R) movies generated with output from our model showing the evolution of column burden of atmospheric sulfate (vertical integral of concentration) over several one-month periods.
Modeling sulfate on hemispheric scale. This link connects to movies generated with output from our model showing the evolution of column burden of sulfate from Asian anthropogenic sources and of total sulfate as a function of location in the Northern Hemisphere, June-July 1997.
Modeling volcanic sulfate on hemispheric scale. This link connects to a movie generated with output from our model showing the temporal and spatial evolution of column burden of sulfate from volcanic SO2 emissions in the Northern Hemisphere, June-July 1997.
Grains of Salts. An account of our work presented in a plenary lecture at the 1997 annual meeting of the American Association of Aerosol Research was featured in the February 1998 issue of ER News, Newsletter of the Department of Energy’s Office of Energy Research.
High-Resolution Model for Tropospheric Sulfate. Our model for tropospheric sulfate was highlighted in a DOE Research Summary (November 1994).
Atmospheric Heating and Cooling from Fossil-Fuel Combustion. Our examination of the greenhouse heating influence of fossil fuel combustion versus the aerosol cooling influence was highlighted in the Fall 1994 Newsletter of the DOE Carbon Dioxide Information and Analysis Center CDIAC Communications.
IN THE NEWS
Brookhaven Scientist Stephen Schwartz Wins 2022 Haagen-Smit Clean Air Award, BNL press release, May 24, 2022.
Could aerosols be a good thing against climate change?, YouTube Video explaining aerosol effects on climate and implications for future climate change, March 3, 2022.
Aerosol pollution: Destabilizing Earth’s climate and a threat to health, A nice overview by Conrad Fox, Mongabay, March 3, 2022. See also The Aerosol Double Whammy.
Stephen Schwartz Receives International Aerosol Fellow Award, BNL press release, November 4, 2020.
For Cloud Research Just Point and Click, Nowcast, Bulletin of the American Meteorological Society, July 1, 2017.
Looking Up: Taking Photos May Improve Climate Models, EOS Science News by AGU, April 4, 2017.
Video: Climate Change and Energy Use in Today’s World; interview with Danna O’Connor of KUNR in Reno, NV, April 4, 2011.
Climate: To find warming’s speed, scientists must see through clouds, Greenwire, November 26, 2012.
Climate: Ocean clouds obscure warming’s fate, create ‘fundamental problem’ for models, Greenwire, November 26, 2012.
Air pollution may have bright side — Aerosols may have cooling effect, Southampton Press, May 9, 2002.
The batting average paradox. Able has a higher batting average than Baker in the first half of the season and also in the second half. You might think that this means Able has a higher average for the season, but you would be wrong. Click here to see why averaging ratios can be misleading.
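A minimal worked instance of the paradox, an example of what statisticians call Simpson's paradox; the at-bat numbers are invented for illustration.

```python
# Invented at-bat numbers exhibiting the batting-average (Simpson's)
# paradox: Able leads in each half of the season yet trails for the season.
able_1st, baker_1st = (4, 10), (35, 100)    # .400 vs .350: Able ahead
able_2nd, baker_2nd = (25, 100), (2, 10)    # .250 vs .200: Able ahead again

def avg(record):
    hits, at_bats = record
    return hits / at_bats

assert avg(able_1st) > avg(baker_1st)
assert avg(able_2nd) > avg(baker_2nd)

# Season averages weight the two halves by at-bats, not equally:
able_season = (4 + 25) / (10 + 100)     # 29/110, about .264
baker_season = (35 + 2) / (100 + 10)    # 37/110, about .336
assert baker_season > able_season       # Baker wins the season
print(f"Able {able_season:.3f} vs Baker {baker_season:.3f}")
```

The paradox arises because each player's season average is an at-bat-weighted mean of his half-season averages, and the weights differ sharply between the two players.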