3 HadCM3B
Armstrong
The Hadley Centre Coupled Model 3 Bristol (HadCM3B) is a coupled climate model consisting of a 3-D dynamical atmospheric component with a resolution of 2.5° × 3.75°, 19 vertical levels, and a 30-minute timestep [89], and an ocean model with a resolution of 1.25° × 1.25°, 20 vertical levels, and a 1-hour timestep. Levels are more finely spaced towards the Earth's surface. The model is a variant of HadCM3 developed at the University of Bristol. Despite its age, the model has been shown to simulate the climate system accurately and remains competitive with more modern climate models. A key advantage of the model is its computational efficiency, which permits long simulations and large ensemble studies. We also utilise the atmosphere-only version of the model, HadAM3B [45], which incorporates the same atmospheric component as HadCM3B but with prescribed sea surface temperatures (SSTs).
The model incorporates the Met Office Surface Exchange Scheme (MOSES) version 2.1 [91], which simulates water and energy fluxes and physiological processes such as photosynthesis, transpiration and respiration; these are modulated by stomatal conductance and hence by CO2 concentration. MOSES 2.1 represents the fractional coverage of nine surface types, which is simulated by the dynamic global vegetation model (DGVM) TRIFFID. Of the nine surface types, five are plant functional types (PFTs): broadleaf and needleleaf trees, C3 and C4 grasses, and shrubs, with the residual assigned to bare soil. Vegetation evolves throughout a model simulation depending on temperature, moisture, CO2 and competition with other PFTs. HadCM3B does not include an interactive carbon/methane cycle or an ice-sheet model, so these boundary conditions are imposed.
The model was tuned using a Bayesian statistical methodology targeted at seven observational targets. The performance of the model was quantified in a probabilistic sense that accounts for structural error in the model; that is, the conditioning was specifically designed not to assume a perfect model, since doing so could easily lead to the right answer for the wrong reasons. Five of the tuning targets are for the present day: annual mean temperature, tropopause water vapour, annual mean precipitation over central and over North Africa, and tropical vegetation cover. The remaining two targets are derived from the mid-Holocene: pollen-inferred North African 6 kyr BP precipitation and vegetation cover. The paleo-conditioned model was then validated [43] using a leaf-wax precipitation record [14] and other records.
Together these updates allow a dynamic simulation of the Holocene greening of the Sahara that shows many similarities with the reconstructions of this time period.
Armstrong (2023) North African humid periods over the past 800,000 years
3.1 Greening of the Sahara
Armstrong Abstract
The Sahara region has experienced periodic wet periods over the Quaternary and beyond. These North African Humid Periods (NAHPs) are astronomically paced by precession which controls the intensity of the African monsoon system. However, most climate models cannot reconcile the magnitude of these events and so the driving mechanisms remain poorly constrained. Here, we utilise a recently developed version of the HadCM3B coupled climate model that simulates 20 NAHPs over the past 800 kyr which have good agreement with NAHPs identified in proxy data. Our results show that precession determines NAHP pacing, but we identify that their amplitude is strongly linked to eccentricity via its control over ice sheet extent. During glacial periods, enhanced ice-albedo driven cooling suppresses NAHP amplitude at precession minima, when humid conditions would otherwise be expected. This highlights the importance of both precession and eccentricity, and the role of high latitude processes in determining the timing and amplitude of the NAHPs. This may have implications for the out of Africa dispersal of plants and animals throughout the Quaternary.
Armstrong (2023) North African humid periods over the past 800,000 years
Larrasoaña Abstract
Astronomically forced insolation changes have driven monsoon dynamics and recurrent humid episodes in North Africa, resulting in green Sahara Periods (GSPs) with savannah expansion throughout most of the desert. Despite their potential for expanding the area of prime hominin habitats and favouring out-of-Africa dispersals, GSPs have not been incorporated into the narrative of hominin evolution due to poor knowledge of their timing, dynamics and landscape composition at evolutionary timescales. We present a compilation of continental and marine paleoenvironmental records from within and around North Africa, which enables identification of over 230 GSPs within the last 8 million years. By combining the main climatological determinants of woody cover in tropical Africa with paleoenvironmental and paleoclimatic data for representative (Holocene and Eemian) GSPs, we estimate precipitation regimes and habitat distributions during GSPs. Their chronology is consistent with the ages of Saharan archeological and fossil hominin sites. Each GSP took 2–3 kyr to develop, peaked over 4–8 kyr, biogeographically connected the African tropics to African and Eurasian mid latitudes, and ended within 2–3 kyr, which resulted in rapid habitat fragmentation. We argue that the well-dated succession of GSPs presented here may have played an important role in migration and evolution of hominins.
Larrasoaña (2013) Dynamics of Green Sahara Periods and Their Role in Hominin Evolution
3.2 Mimi Framework
Mimi: An Integrated Assessment Modeling Framework
Mimi is a Julia package for integrated assessment models developed in connection with Resources for the Future’s Social Cost of Carbon Initiative.
Several models already use the Mimi framework, including those linked below. A majority of these models are part of the Mimi registry, as detailed in the Mimi Registry subsection of this website. Note also that even models not registered in the Mimi registry may be constructed to operate as packages. These practices are explained further in the documentation section “Explanations: Models as Packages”.
MimiBRICK.jl
MimiCIAM.jl
MimiDICE2010.jl
MimiDICE2013.jl
MimiDICE2016.jl (version R not R2)
MimiDICE2016R2.jl
MimiFAIR.jl
MimiFAIR13.jl
MimiFAIRv1_6_2.jl
MimiFAIRv2.jl
MimiFUND.jl
MimiGIVE.jl
MimiHECTOR.jl
MimiIWG.jl
MimiMAGICC.jl
MimiMooreEtAlAgricultureImpacts.jl
Mimi_NAS_pH.jl
mimi_NICE
MimiPAGE2009.jl
MimiPAGE2020.jl
MimiRFFSPs.jl
MimiRICE2010.jl
Mimi-SNEASY.jl
MimiSSPs.jl
AWASH
PAGE-ICE
RICE+AIR
3.3 MODTRAN
Benestad
Kininmonth used MODTRAN, but he must show how MODTRAN was used to arrive at figures that differ from other calculations, which also use MODTRAN. It is an important principle in science that others can repeat the same calculations and arrive at the same answer. You can play with MODTRAN on its website, but it is still important to explain how you arrive at your answers.
Benestad (2022) New misguided interpretations of the greenhouse effect from William Kininmonth
3.5 PAGE
Kikstra Abstract
A key statistic describing climate change impacts is the ‘social cost of carbon dioxide’ (SCCO2), the projected cost to society of releasing an additional tonne of CO2. Cost-benefit integrated assessment models that estimate the SCCO2 lack robust representations of climate feedbacks, economy feedbacks, and climate extremes. We compare the PAGE-ICE model with the decade older PAGE09 and find that PAGE-ICE yields SCCO2 values about two times higher, because of its climate and economic updates. Climate feedbacks only account for a relatively minor increase compared to other updates. Extending PAGE-ICE with economy feedbacks demonstrates a manifold increase in the SCCO2 resulting from an empirically derived estimate of partially persistent economic damages. Both the economy feedbacks and other increases since PAGE09 are almost entirely due to higher damages in the Global South. Including an estimate of interannual temperature variability increases the width of the SCCO2 distribution, with particularly strong effects in the tails and a slight increase in the mean SCCO2. Our results highlight the large impacts of climate change if future adaptation does not exceed historical trends. Robust quantification of climate-economy feedbacks and climate extremes are demonstrated to be essential for estimating the SCCO2 and its uncertainty.
Kikstra Memo
How temperature rises affect long-run economic output is an important open question (Piontek et al 2021). Climate impacts could either trigger additional GDP growth due to increased agricultural productivity and rebuilding activities (Stern 2007, Hallegatte and Dumas 2009, Hsiang 2010, National Academies of Sciences Engineering and Medicine 2017) or inhibit growth due to damaged capital stocks (Pindyck 2013), lower savings (Fankhauser and Tol 2005) and inefficient factor reallocation (Piontek et al 2019). Existing studies have identified substantial impacts of economic growth feedbacks (Moyer et al 2014, Dietz and Stern 2015, Estrada et al 2015, Moore and Diaz 2015), but have not yet quantified the uncertainties involved based on empirical distributions. One particular example is Kalkuhl and Wenz (2020), who incorporate short-term economic persistence into a recent version of DICE (Nordhaus 2017), approximately tripling the resulting SCCO2 ($37–$132). For fairly comparable economic assumptions, the effect of long-term persistence is shown to increase the outcome even more ($220–$417) (Moore and Diaz 2015, Ricke et al 2018). We further expand on this work by deriving an empirical distribution of the persistence of climate impacts on economic growth based on recent developments (Burke et al 2015, Bastien-Olvera and Moore 2021) which we use to moderate GDP growth through persistent market damages. This partial persistence model builds upon recent empirical insights that not all contemporary economic damages due to climate change might be recovered in the long run (Dell et al 2012, Burke et al 2015, Kahn et al 2019, Bastien-Olvera and Moore 2021). Investigating how the SCCO2 varies as a function of the extent of persistence reveals a sensitivity that is on par with the heavily discussed role of discounting (Anthoff et al 2009b).
Climatic extremes are another particularly important driver of climate change-induced damages (Field et al 2012, Kotz et al 2021). The impact of interannual climate variability on the SCCO2 has, however, not been analyzed previously, despite its clear economic implications (Burke et al 2015, Kahn et al 2019, Kumar and Khanna 2019) and an apparent relation to weather extremes such as daily minima and maxima (Seneviratne et al 2012), extreme rainfall (Jones et al 2013), and floods (Marsh et al 2016). Omission of such features in climate-economy models risks underestimation of the SCCO2 because of convex regional temperature damage functions (Burke et al 2015) and an expected earlier crossing of potential climate and social thresholds in the climate-economy system (Tol 2019, Glanemann et al 2020). Here, we include climate variability by coupling the empirical temperature-damage function with variable, autoregressive interannual temperatures. Increasing the amount of uncertainty by adding variable elements naturally leads to a less constrained estimate for climate-driven impacts. However, it is important to explore the range of possible futures, including the consideration of extremes in the climate-economy system (Otto et al 2020). In summary, we extend the PAGE-ICE CB-IAM (Yumashev et al 2019) to quantify the effect on the SCCO2 of including possible long-term temperature growth feedback on economic trajectories, mean annual temperature anomalies, and the already modeled permafrost carbon and surface albedo feedbacks. Together, these provide an indication of the magnitude and uncertainties of the contribution of climate and economy feedbacks and interannual variability to the SCCO2.
Figure: Illustrative sketch of changes and extensions to PAGE-ICE presented in this paper. (a) Changes in the climate representation. PAGE-ICE includes a more detailed representation of CO2 and CH4 sinks, permafrost carbon feedback, the effect of sea ice and land snow decline on surface albedo, and a fat-tailed distribution of sea level rise. Here we also include interannual temperature variability with a temperature feedback through annual auto-correlation. (b) Changes in the damage module. The PAGE-ICE discontinuity damage component was reduced to correspond with updates to climate tipping points and sea-level rise risk, and market damages were recalibrated to an empirical estimate based on temperatures. Thus, while the discontinuity and non-economic damages continue to be calculated based on the separation between tolerable and excess temperature, the market damages are now calculated based on absolute temperature. Here we also extend PAGE-ICE with the possibility of persistent climate-induced damages, which in turn affects GDP pathways and scales emissions accordingly (feedback loop in the figure).
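The interannual-variability extension sketched in panel (a) amounts to evaluating the damage function on a temperature series with an autocorrelated annual anomaly rather than on the smooth mean pathway. A minimal sketch of that idea follows; the autocorrelation and noise scale are placeholders, not the values estimated in the paper.

```python
import numpy as np

def temperature_with_variability(mean_path, phi=0.5, sigma=0.15, seed=0):
    """Add an AR(1) interannual anomaly to a mean temperature pathway.

    mean_path : sequence of annual mean temperature anomalies (deg C)
    phi       : year-to-year autocorrelation of the anomaly (placeholder)
    sigma     : standard deviation of the annual shock (placeholder)
    """
    rng = np.random.default_rng(seed)
    anomaly = 0.0
    out = []
    for t_mean in mean_path:
        anomaly = phi * anomaly + rng.normal(0.0, sigma)
        out.append(t_mean + anomaly)
    return np.array(out)

# Damages are then evaluated on the variable series rather than on the
# smooth mean pathway (illustrative path only).
variable_path = temperature_with_variability(np.linspace(1.0, 3.0, 86))
```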
The original PAGE-ICE does not simulate damage persistence. Thus, the economy always returns to the exogenous economic growth path, no matter how high the contemporary damages.
Our setup recognizes that deterministic assessments of the SCCO2 carry only very limited information. PAGE-ICE uses Monte Carlo sampling of over 150 parameter distributions (Yumashev et al 2019) to provide distributions of the results. All results presented use 50 000 Monte Carlo draws (and 100 000 for PAGE09, using @RISK within Excel), with draws taken from the same superset to be able to compare SCCO2 distributions across models. The PAGE-ICE model has been translated into the Mimi modeling framework, using the same validation process as for Mimi-PAGE (Moore et al 2018). Model code and documentation are available from the GitHub repository, here.
To estimate the marginal damage of an additional tonne of CO2, PAGE-ICE is run twice, with one run following the exogenously specified emission pathway and the second run adding a CO2 pulse. The SCCO2 is then calculated as the difference in global equity-weighted damages between those two runs divided by the pulse size, discounted to the base year (2015). Equity weighting of damages follows the approach by Anthoff et al (2009a) using a mean (minimum, maximum) elasticity of marginal utility of consumption of 1.17 (0.1–2.0), and equity-weighted damages are discounted using a pure time preference rate of 1.03% (0.5%, 2.0%). For all our results, we rely on a 75 Gt pulse size in the first time period of PAGE-ICE (mid-2017–2025), representing an annual pulse size of 10 Gt CO2. In this setup, we found that the choice of pulse size can have an effect on the SCCO2 estimates.
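The two-run recipe above can be written down compactly. The sketch below is a minimal illustration under simplifying assumptions (equity weighting reduced to CRRA-style weights relative to world-average per-capita consumption, and a single constant pure time preference rate); it is not the PAGE-ICE or Mimi implementation, and all input arrays are hypothetical placeholders.

```python
import numpy as np

def scc_from_two_runs(damages_base, damages_pulse, consumption_pc, pop,
                      years, pulse_gt_co2, base_year=2015,
                      eta=1.17, prtp=0.0103):
    """Toy SCC calculation: discounted, equity-weighted damage difference
    between a pulse run and a baseline run, divided by the pulse size.

    damages_*      : (years, regions) damages in $ per region per year
    consumption_pc : (years, regions) per-capita consumption in $
    pop            : (years, regions) population
    years          : array of calendar years
    pulse_gt_co2   : size of the emission pulse in Gt CO2
    All inputs are hypothetical placeholders, not PAGE-ICE outputs.
    """
    # Equity weights: marginal utility of consumption relative to the
    # world-average per-capita level (CRRA utility with elasticity eta).
    c_ref = (consumption_pc * pop).sum(axis=1) / pop.sum(axis=1)
    weights = (c_ref[:, None] / consumption_pc) ** eta

    # Pure-time-preference discount factors back to the base year.
    discount = (1.0 + prtp) ** -(years - base_year)

    extra = ((damages_pulse - damages_base) * weights).sum(axis=1)
    npv_extra = (extra * discount).sum()

    # Express per tonne of CO2 (pulse given in Gt CO2).
    return npv_extra / (pulse_gt_co2 * 1e9)
```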
We implement the persistence parameter following Estrada et al (2015) into the growth system of Burke et al (2015) such that \[GDP_{r,t} = GDP_{r,t-1} \cdot (1 + g_{r,t} - \rho \cdot \gamma_{r,t-1})\], where g is the growth rate, γ represents the contemporary economic damages in % of GDP returned by the market damage function and ρ specifies the share of economic damages that persist and thus alter the growth trajectory in the long run. Note that this approach nests the extreme assumption of zero persistence usually made in CB-IAMs. We also rescale greenhouse gas emissions proportionally to the change in GDP, such that emission intensities of economic output remain unchanged.
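Read literally, the recursion can be iterated in a few lines. The sketch below illustrates the equation for a single region with made-up growth and damage numbers; setting rho = 0 recovers the usual CB-IAM assumption that the economy returns to the exogenous growth path.

```python
def gdp_path(gdp0, growth, damages, rho):
    """Iterate GDP_t = GDP_{t-1} * (1 + g_t - rho * gamma_{t-1}).

    growth  : list of exogenous growth rates g_t (fractions)
    damages : list of contemporary damages gamma_t (fraction of GDP)
    rho     : share of damages that persist into the growth rate
    """
    gdp = [gdp0]
    for t in range(1, len(growth)):
        gdp.append(gdp[-1] * (1.0 + growth[t] - rho * damages[t - 1]))
    return gdp

# Purely illustrative numbers: 2% exogenous growth, 1% annual damages.
path_no_persistence = gdp_path(100.0, [0.02] * 10, [0.01] * 10, rho=0.0)
path_partial        = gdp_path(100.0, [0.02] * 10, [0.01] * 10, rho=0.5)
```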
Kikstra Conclusions
Our results show that determining the level of persistence of economic damages is one of the most important factors in calculating the SCCO2, and our empirical estimate illustrates the urgency of increasing adaptive capacity, while suggesting that the mean estimate for the SCCO2 may have been strongly underestimated. It further indicates that considering annual temperature anomalies leads to large increases in uncertainty about the risks of climate change. Differences between PAGE09 and PAGE-ICE show that the previous SCCO2 results have also decidedly underestimated damages in the Global South. The implemented climate feedbacks and annual mean temperature variability do not have large effects on the mean SCCO2. The inclusion of permafrost thawing and surface albedo feedbacks is shown to lead to a relatively small increase in the SCCO2 for SSP2-4.5, with modest distributional effects. Consideration of temperature anomalies shows that internal variability in the climate system can lead to increases in SCCO2 estimates, and is key to understanding uncertainties in the climate-economy system, stressing the need for a better representation of variability and extremes in CB-IAMs. Including an empirical estimate of damage persistence demonstrates that even minor departures from the assumption that climate shocks do not affect GDP growth have major economic implications and eclipse most other modeling decisions. It suggests the need for a strong increase in adaptation to persistent damages if the long-term social cost of emissions is to be limited. Our findings corroborate that economic uncertainty is larger than climate science uncertainty in climate-economy system analysis (Van Vuuren et al 2020), and provide a strong argument that the assumption of zero persistence in CB-IAMs should be subject to increased scrutiny in order to avoid considerable bias in SCCO2 estimates.
Kikstra (2021) The social cost of carbon dioxide under climate-economy feedbacks and temperature variability (pdf)
Mimi Modelling Framework
Mimi is a Julia package for integrated assessment models developed in connection with Resources for the Future’s Social Cost of Carbon Initiative. The source code for this package is located on Github here, and for detailed information on the installation and use of this package, as well as several tutorials, please see the Documentation. For specific requests for new functionality, or bug reports, please add an Issue to the repository.
Kikstra Review
3.6 Bern Simple Climate Model (BernSCM)
Bern SCM Github README.md
The Bern Simple Climate Model (BernSCM) is a free open source reimplementation of a reduced form carbon cycle-climate model which has been used widely in previous scientific work and IPCC assessments. BernSCM represents the carbon cycle and climate system with a small set of equations for the heat and carbon budget, the parametrization of major nonlinearities, and the substitution of complex component systems with impulse response functions (IRF). The IRF approach allows cost-efficient yet accurate substitution of detailed parent models of climate system components with near linear behaviour. Illustrative simulations of scenarios from previous multi-model studies show that BernSCM is broadly representative of the range of the climate-carbon cycle response simulated by more complex and detailed models. Model code (in Fortran) was written from scratch with transparency and extensibility in mind, and is provided as open source. BernSCM makes scientifically sound carbon cycle-climate modeling available for many applications. Supporting up to decadal timesteps with high accuracy, it is suitable for studies with high computational load, and for coupling with, e.g., Integrated Assessment Models (IAM). Further applications include climate risk assessment in a business, public, or educational context, and the estimation of CO2 and climate benefits of emission mitigation options.
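Schematically, the IRF substitution expresses the perturbation of a slow component (for example ocean heat or mixed-layer carbon) as a convolution of the forcing history with the parent model's pulse response. This is a generic statement of the approach rather than the exact BernSCM equation set, which adds parametrizations of the major nonlinearities on top:

\[ \Delta m(t) = \int_{0}^{t} f(t')\, r_{\mathrm{IRF}}(t - t')\, \mathrm{d}t' \]

where f(t') is the input flux at time t' and r_IRF(t) is the fraction of a unit pulse still present after a delay t.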
See the file BernSCM_manual(.pdf) for instructions on the use of the program.
Strassmann (2017) The Bern Simple Climate Model (BernSCM) (pdf); BernSCM GitHub code
Parameters for tuning Bern
Critics of Bern Model
Y’know, it’s hard to figure out what the Bern model says about anything. This is because, as far as I can see, the Bern model proposes an impossibility. It says that the CO2 in the air is somehow partitioned, and that the different partitions are sequestered at different rates.
For example, in the IPCC Second Assessment Report (SAR), the atmospheric CO2 was divided into six partitions, containing respectively 14%, 13%, 19%, 25%, 21%, and 8% of the atmospheric CO2.
Each of these partitions is said to decay at a different rate given by a characteristic time constant “tau” in years (see Appendix for definitions). The first partition is treated as never being sequestered at all: it stays in the atmosphere indefinitely. For the SAR, the “tau” time constant values for the five other partitions were taken to be 371.6 years, 55.7 years, 17.01 years, 4.16 years, and 1.33 years respectively.
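To make the partition arithmetic concrete, here is a minimal sketch (not code from any IPCC report) that evaluates the airborne fraction of a 1-unit pulse using the SAR fractions and time constants quoted above, treating the first partition as permanent (an effectively infinite tau).

```python
import numpy as np

# SAR Bern parameters as quoted above: fractions of an emitted pulse and
# the time constants (years) of the five decaying partitions.  The 14%
# partition is treated as permanent (no finite tau).
fractions = np.array([0.14, 0.13, 0.19, 0.25, 0.21, 0.08])
taus      = np.array([np.inf, 371.6, 55.7, 17.01, 4.16, 1.33])

def airborne_fraction(t_years):
    """Fraction of a 1-unit CO2 pulse still airborne after t years."""
    return float(np.sum(fractions * np.exp(-t_years / taus)))

for t in (1, 10, 50, 100, 500):
    print(f"{t:>4} yr: {airborne_fraction(t):.2f}")
```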
Now let me stop here to discuss, not the numbers, but the underlying concept. The part of the Bern model that I’ve never understood is, what is the physical mechanism that is partitioning the CO2 so that some of it is sequestered quickly, and some is sequestered slowly?
I don’t get how that is supposed to work. The reference given above says:
CO2 concentration approximation
The CO2 concentration is approximated by a sum of exponentially decaying functions, one for each fraction of the additional concentrations, which should reflect the time scales of different sinks.
So theoretically, the different time constants (ranging from 371.6 years down to 1.33 years) are supposed to represent the different sinks. Here’s a graphic showing those sinks, along with approximations of the storage in each of the sinks as well as the fluxes in and out of the sinks:
(Carbon Cycle Picture)
Now, I understand that some of those sinks will operate quite quickly, and some will operate much more slowly.
But the Bern model reminds me of the old joke about the thermos bottle (Dewar flask), that poses this question:
The thermos bottle keeps cold things cold, and hot things hot … but how does it know the difference?
So my question is, how do the sinks know the difference?
Why don’t the fast-acting sinks just soak up the excess CO2, leaving nothing for the long-term, slow-acting sinks? I mean, if some 13% of the CO2 excess is supposed to hang around in the atmosphere for 371.6 years … how do the fast-acting sinks know to not just absorb it before the slow sinks get to it?
Anyhow, that’s my problem with the Bern model—I can’t figure out how it is supposed to work physically.
Finally, note that there is no experimental evidence that will allow us to distinguish between plain old exponential decay (which is what I would expect) and the complexities of the Bern model. We simply don’t have enough years of accurate data to distinguish between the two.
Nor do we have any kind of evidence to distinguish between the various sets of parameters used in the Bern Model. As I mentioned above, in the IPCC SAR they used five time constants ranging from 1.33 years to 371.6 years (gotta love the accuracy, to six-tenths of a year).
But in the IPCC Third Assessment Report (TAR), they used only three constants, and those ranged from 2.57 years to 171 years.
However, there is nothing that I know of that allows us to establish any of those numbers. Once again, it seems to me that the authors are just picking parameters.
So … does anyone understand how 13% of the atmospheric CO2 is supposed to hang around for 371.6 years without being sequestered by the faster sinks?
All ideas welcome, I have no answers at all for this one. I’ll return to the observational evidence regarding the question of whether the global CO2 sinks are “rapidly diminishing”, and how I calculate the e-folding time of CO2 in a future post.
Best to all,
APPENDIX: Many people confuse two ideas, the residence time of CO2, and the “e-folding time” of a pulse of CO2 emitted to the atmosphere.
The residence time is how long a typical CO2 molecule stays in the atmosphere. We can get an approximate answer from Figure 2. If the atmosphere contains 750 gigatonnes of carbon (GtC), and about 220 GtC are added each year (and removed each year), then the average residence time of a molecule of carbon is something on the order of four years. Of course those numbers are only approximations, but that’s the order of magnitude.
The “e-folding time” of a pulse, on the other hand, which they call “tau” or the time constant, is how long it would take for the atmospheric CO2 levels to drop to 1/e (37%) of the atmospheric CO2 level after the addition of a pulse of CO2. It’s like the “half-life”, the time it takes for something radioactive to decay to half its original value. The e-folding time is what the Bern Model is supposed to calculate. The IPCC, using the Bern Model, says that the e-folding time ranges from 50 to 200 years.
On the other hand, assuming normal exponential decay, I calculate the e-folding time to be about 35 years or so based on the evolution of the atmospheric concentration given the known rates of emission of CO2. Again, this is perforce an approximation because few of the numbers involved in the calculation are known to high accuracy. However, my calculations are generally confirmed by those of Mark Jacobson as published here in the Journal of Geophysical Research.
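As a back-of-the-envelope illustration of the distinction between the two quantities, here is a minimal sketch using the round numbers above (which are only order-of-magnitude estimates) and the single-exponential reading argued for in the post.

```python
import math

# Residence time: atmospheric stock divided by gross annual exchange flux.
stock_gtc = 750.0       # atmospheric carbon, GtC (approximate, as above)
gross_flux_gtc = 220.0  # carbon exchanged with land/ocean per year, GtC/yr
residence_time = stock_gtc / gross_flux_gtc   # roughly 3-4 years

# e-folding time of a pulse under plain exponential decay: the time for the
# *excess* CO2 from a pulse to fall to 1/e of its initial value.
tau = 35.0  # years, the single-exponential estimate argued for above
remaining_after = lambda t: math.exp(-t / tau)

print(residence_time, remaining_after(35.0))  # ~3.4 years, ~0.37
```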
CO2 Lifetime
The overall lifetime of CO2 is updated to range from 30 to 95 years.
Any emission reduction of fossil-fuel particulate BC [Black Carbon] plus associated OM [Organic Matter] may slow global warming more than may any emission reduction of CO2 or CH4 for a specific period.
Jacobson Abstract
This document describes two updates and a correction that affect two figures (Figures 1 and 14) in “Control of fossil-fuel particulate black carbon and organic matter, possibly the most effective method of slowing global warming” by Mark Z. Jacobson (Journal of Geophysical Research, 107(D19), 4410, doi:10.1029/2001JD001376, 2002). The modifications have no effect on the numerical simulations in the paper, only on the postsimulation analysis. The changes include the following: (1) The overall lifetime of CO2 is updated to range from 30 to 95 years instead of 50 to 200 years, (2) the assumption that the anthropogenic emission rate of CO2 is in equilibrium with its atmospheric mixing ratio is corrected, and (3) data for high-mileage vehicles available in the U.S. are used to update the range of mileage differences (15–30% better for diesel) in comparison with one difference previously (30% better mileage for diesel). The modifications do not change the main conclusions in J2002, namely, (1) “any emission reduction of fossil-fuel particulate BC plus associated OM may slow global warming more than may any emission reduction of CO2 or CH4 for a specific period,” and (2) “diesel cars emitting continuously under the most recent U.S. and E.U. particulate standards (0.08 g/mi; 0.05 g/km) may warm climate per distance driven over the next 100+ years more than equivalent gasoline cars. Toughening vehicle particulate emission standards by a factor of 8 (0.01 g/mi; 0.006 g/km) does not change this conclusion, although it shortens the period over which diesel cars warm to 13–54 years,” except as follows: for conclusion 1, the period in Figure 1 of J2002 during which eliminating all fossil-fuel black carbon plus organic matter (f.f. BC + OM) has an advantage over all anthropogenic CO2 decreases from 25–100 years to about 11–13 years, and for conclusion 2 the period in Figure 14 of J2002 during which gasoline vehicles may have an advantage broadens from 13 to 54 years to 10 to >100 years. On the basis of the revised analysis, the ratio of the 100-year climate response per unit mass emission of f.f. BC + OM relative to that of CO2-C is estimated to be about 90–190.
What’s Up with the Bern Model
Mearns
In modelling the growth of CO2 in the atmosphere from emissions data it is standard practice to model what remains in the atmosphere since, after all, it is the residual CO2 that is of concern in climate studies. In this post I turn that approach on its head and look at what is sequestered. This gives a very different picture, showing that the Bern T1.2 and T18.5 time constants account for virtually all of the sequestration of CO2 from the atmosphere on human timescales (see chart below). The much longer T173 and T∞ processes are doing virtually nothing. Their principal action is to remove CO2 from the fast sinks, not from the atmosphere, in a two-stage process that should not be modelled as a single stage. Given time, the slow sinks will eventually sequester 100% of human emissions and not 48% as the Bern model implies.
Figure: The chart shows the amount of annual emissions removed by the various components of the Bern model. Unsurprisingly the T∞ component, with a decline rate of 0%, removes zero emissions, and the T173 slow sink is not much better. Arguably, these components should not be in the model at all. The fast T1.2 and T18.5 sinks are doing all the work. The model does not handle the pre-1965 emissions decline perfectly, shown as underlying, but these too will be removed by the fast sinks and should also be coloured yellow and blue. Note that year on year the amount of CO2 removed has risen as the partial pressure of CO2 has gone up. The gap between the coloured slices and the black line is that portion of emissions that remained in the atmosphere.
The Bern Model for sequestration of CO2 from Earth’s atmosphere imagines the participation of a multitude of processes that are summarised by four time constants: 1.2, 18.5 and 173 years, plus one that is effectively infinite (Figure 1). I described it at length in this earlier post, The Half Life of CO2 in Earth’s Atmosphere.
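One way to make this “what is sequestered” accounting concrete is sketched below. It is my reading of the argument rather than the post's own code: each partition receives its share of every year's emissions and decays with its own time constant, and the per-partition removals are tallied year by year. For self-containment it reuses the SAR partition set quoted earlier; the post itself uses the four-term T1.2/T18.5/T173/T∞ set, but the accounting is the same, and the emission inputs are placeholders.

```python
import numpy as np

# SAR-style partitions (fraction of each year's emissions, time constant in
# years); the infinite-tau partition never decays.
partitions = [(0.14, np.inf), (0.13, 371.6), (0.19, 55.7),
              (0.25, 17.01), (0.21, 4.16), (0.08, 1.33)]

def annual_removals(emissions):
    """CO2 removed each year, attributed to each Bern-style partition.

    emissions : annual emissions (e.g. GtC per year), placeholder values
    Returns an array of shape (n_years, n_partitions); on human timescales
    almost all of the removal comes from the short-tau partitions.
    """
    removed = np.zeros((len(emissions), len(partitions)))
    for j, (frac, tau) in enumerate(partitions):
        stock = 0.0
        for t, e in enumerate(emissions):
            stock += frac * e    # this year's share of emissions
            decay = 0.0 if np.isinf(tau) else stock * (1.0 - np.exp(-1.0 / tau))
            removed[t, j] = decay
            stock -= decay
    return removed

print(annual_removals([10.0] * 50).sum(axis=0))  # totals per partition
```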
3.7 NorESM
Norwegian Earth System Model
About
A climate model solves mathematically formulated natural laws on a three-dimensional grid. The climate model divides the Earth system into components (atmosphere, ocean, sea ice, land with vegetation, etc.) that interact through exchanges of energy, momentum and moisture. When the climate model also includes advanced interactive atmospheric chemistry and biogeochemical cycles (such as the carbon cycle), it is called an earth system model.
The Norwegian Earth System Model (NorESM) has been developed since 2007 and has been an important tool for Norwegian climate researchers in the study of past, present and future climate. NorESM has also contributed climate simulations that were used in research assessed in the IPCC’s Fifth Assessment Report.
INES
The project Infrastructure for Norwegian Earth System Modeling (INES) will support the further development of NorESM and help Norwegian scientists retain access to a cutting-edge earth system model in the years to come. Technical support will be provided for the use of a more refined grid, the ability to predict climate variations up to 10 years in advance, the inclusion of new processes at high latitudes, and long-term projection of sea level. Climate simulations with NorESM are made on some of the most powerful supercomputers in Norway, and INES will help ensure that these machines are exploited in the best possible way and that the large data sets produced are efficiently stored and used. The project will ensure that researchers can efficiently use the model tools, analyze results and make the results available.
3.7.1 CCSM4
UCAR NCAR
The University Corporation for Atmospheric Research (UCAR) is a US nonprofit consortium of more than 100 colleges and universities providing research and training in the atmospheric and related sciences. UCAR manages the National Center for Atmospheric Research (NCAR) and provides additional services to strengthen and support research and education through its community programs. Its headquarters, in Boulder, Colorado, include NCAR’s Mesa Laboratory. (Wikipedia)
CCSM
The Community Climate System Model (CCSM) is a coupled climate model for simulating Earth’s climate system. CCSM consists of five geophysical models: atmosphere (atm), sea-ice (ice), land (lnd), ocean (ocn), and land-ice (glc), plus a coupler (cpl) that coordinates the models and passes information between them. Each model may have an “active,” “data,” “dead,” or “stub” component version, allowing for a variety of “plug and play” combinations.
During the course of a CCSM run, the model components integrate forward in time, periodically stopping to exchange information with the coupler. The coupler receives fields from the component models; computes, maps, and merges this information; then sends the fields back to the component models. The coupler brokers this sequence of communication interchanges and manages the overall time progression of the coupled system. A CCSM component set comprises six components: one from each model (atm, lnd, ocn, ice, and glc) plus the coupler. Model components are written primarily in Fortran 90.
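The control flow described above can be sketched as a toy hub-and-spoke loop. This is only an illustration of the exchange pattern; the class names, field names, and coupling interval are invented and bear no relation to CCSM's actual Fortran interfaces.

```python
class ToyComponent:
    """Stand-in for one model component (atm, ocn, lnd, ice, glc)."""
    def __init__(self, name):
        self.name = name
        self.state = {}

    def advance(self, forcing, dt):
        # Integrate this component forward by dt using fields from the coupler.
        self.state.update(forcing)

    def export_fields(self):
        # Fields this component hands back to the coupler (placeholder content).
        return {f"{self.name}_field": 0.0}


def run_coupled(components, n_intervals, dt):
    """Hub-and-spoke coupling loop: advance, gather, merge, redistribute."""
    forcing = {}
    for _ in range(n_intervals):
        # Each component integrates forward, then stops at the coupling point.
        for comp in components:
            comp.advance(forcing, dt)
        # The coupler gathers fields, maps/merges them, and sends them back.
        gathered = {}
        for comp in components:
            gathered.update(comp.export_fields())
        forcing = gathered  # merged fields become the next interval's forcing


run_coupled([ToyComponent(n) for n in ("atm", "lnd", "ocn", "ice", "glc")],
            n_intervals=4, dt=1800.0)
```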
CESM
The Community Earth System Model (CESM) is a fully-coupled, global climate model that provides state-of-the-art computer simulations of the Earth’s past, present, and future climate states.
CESM2 is the most current release and contains support for CMIP6 experiment configurations.
Simpler Models
As part of CESM2.0, several dynamical core and aquaplanet configurations have been made available.
3.7.2 NorESM Features
Despite the nationally coordinated effort, Norway has insufficient expertise and manpower to develop, test, verify and maintain a complete earth system model. For this reason, NorESM is based on the Community Climate System Model version 4, CCSM4, operated at the National Center for Atmospheric Research on behalf of the Community Climate System Model (CCSM)/Community Earth System Model (CESM) project of the University Corporation for Atmospheric Research.
NorESM is, however, more than a model “dialect” of CCSM4. Notably, NorESM differs from CCSM4 in the following aspects: NorESM utilises an isopycnic coordinate ocean general circulation model developed in Bergen during the last decade, originating from the Miami Isopycnic Coordinate Ocean Model (MICOM). The atmospheric module is modified with chemistry–aerosol–cloud–radiation interaction schemes developed for the Oslo version of the Community Atmosphere Model (CAM4-Oslo). Finally, the HAMburg Ocean Carbon Cycle (HAMOCC) model developed at the Max Planck Institute for Meteorology, Hamburg, adapted to an isopycnic ocean model framework, constitutes the core of the biogeochemical ocean module in NorESM. In this way NorESM adds to the much desired climate model diversity, and thus to the hierarchy of models participating in phase 5 of the Coupled Model Intercomparison Project (CMIP5). In this and in an accompanying paper (Iversen et al., 2013), NorESM without biogeochemical cycling is presented. The reader is referred to Assmann et al. (2010) and Tjiputra et al. (2013) for a description of the biogeochemical ocean component and carbon cycle version of NorESM, respectively.
There are several overarching objectives underlying the development of NorESM. Western Scandinavia and the surrounding seas are located in the midst of the largest surface temperature anomaly on earth governed by anomalously large oceanic and atmospheric heat transports. Small changes to these transports may result in large and abrupt changes in the local climate. To better understand the variability and stability of the climate system, detailed studies of the formation, propagation and decay of thermal and (oceanic) fresh water anomalies are required.
NorESM is, as mentioned above, largely based on CCSM4. The main differences are the isopycnic coordinate ocean module in NorESM and that CAM4-Oslo substitutes CAM4 as the atmosphere module. The sea ice and land models in NorESM are basically the same as in CCSM4 and the Community Earth System Model version 1 (CESM1), except that deposited soot and mineral dust aerosols on snow and sea ice are based on the aerosol calculations in CAM4-Oslo.
3.7.2.1 NorESM Aerosol Interactions
The aerosol module is extended from earlier versions that have been published, and includes life-cycling of sea salt, mineral dust, particulate sulphate, black carbon, and primary and secondary organics. The impacts of most of the numerous changes since previous versions are thoroughly explored by sensitivity experiments. The most important changes are: modified prognostic sea salt emissions; updated treatment of precipitation scavenging and gravitational settling; inclusion of biogenic primary organics and methane sulphonic acid (MSA) from oceans; almost doubled production of land-based biogenic secondary organic aerosols (SOA); and increased ratio of organic matter to organic carbon (OM/OC) for biomass burning aerosols from 1.4 to 2.6. Compared with in situ measurements and remotely sensed data, the new treatments of sea salt and dust aerosols give smaller biases in near-surface mass concentrations and aerosol optical depth than in the earlier model version. The model biases for mass concentrations are approximately unchanged for sulphate and BC. The enhanced levels of modelled OM yield improved overall statistics, even though OM is still underestimated in Europe and overestimated in North America. The global anthropogenic aerosol direct radiative forcing (DRF) at the top of the atmosphere has changed from a small positive value to −0.08 W m−2 in CAM4-Oslo. The sensitivity tests suggest that this change can be attributed to the new treatment of biomass burning aerosols and gravitational settling. Although it has not been a goal in this study, the new DRF estimate is closer both to the median model estimate from the AeroCom intercomparison and the best estimate in IPCC AR4. Estimated DRF at the ground surface has increased by ca. 60%, to −1.89 W m−2.
The increased abundance of natural OM and the introduction of a cloud droplet spectral dispersion formulation are the most important contributions to a considerably decreased estimate of the indirect radiative forcing (IndRF). The IndRF is also found to be sensitive to assumptions about the coating of insoluble aerosols by sulphate and OM. The IndRF of −1.2 W m−2, which is closer to the IPCC AR4 estimates than the previous estimate of −1.9 W m−2, has thus been obtained without imposing unrealistic artificial lower bounds on cloud droplet number concentrations.
Bentsen (2013) NorESM - Part 1 (pdf)
Iversen (2013) NorESM - Part 2 (pdf)
Assmann (2010) Biogeochemical Ocean Component - Isopycnic (pdf)
Tjiputra (2010) Carbon Cycle Component (pdf)