How to Research and Write Effective Case Studies in Meteorology
David M. Schultz
Division of Atmospheric Sciences and Geophysics, Department of Physics, University of Helsinki, Helsinki, Finland; Finnish Meteorological Institute, Helsinki, Finland; Centre for Atmospheric Science, School of Earth, Atmospheric and Environmental Sciences, University of Manchester, Manchester, United Kingdom
(Submitted 24 January 2010; in final form 26 March 2010)
ABSTRACT
Case studies are a staple of meteorological work, yet few meteorologists write them well. This article presents 16 principles that can help authors improve their case studies, at both the research and writing stages. These principles involve study design, organization and approaches to the research, writing the manuscript, and figure design. Other topics covered include citations and reference lists, terminology, and formatting.
Corresponding author address: David M. Schultz, Finnish Meteorological Institute, P.O. Box 503, Erik Palménin Aukio 1, FI-00101, Helsinki, Finland, E-mail: david.schultz@fmi.fi
1. Introduction
Case studies are indispensable to meteorology. By describing how the weather evolved during a specific event, case studies can be integral to improving weather forecasts and can reveal mesoscale phenomena that were previously undiscovered. Each of us can recall classic case studies that have inspired us by their interesting choice of topic, novel approach, easy-to-read text, clear figures, and concrete conclusions.
No group of scientists is better positioned to produce excellent case studies than operational meteorologists. Every day they go to work and do battle with the weather, trying to understand and predict it. When we think of the authors of the best meteorological writing, those whom we admire for their ability to address interesting problems and make sense of them, many forecasters from different generations come to mind: Joseph Galway (Lewis 1996), Leonard Snellman (MacDonald 2000; www.nwas.org/members/snellman.php), Robert Johns (Lewis 2007), Bradley Colman, and Matthew Bunkers.
With role models such as these, writing a case study should be relatively straightforward. After all, how difficult can it be to describe to other meteorologists what happened and how it happened? Instead, my experience with reading others’ case studies as an editor, a reviewer, or an interested reader indicates that few authors do it well. Facts seem to be thrown at the reader for no purpose. Statements are made, seemingly without evidence. Readers are shown dozens of unlabeled figure panels and expected to make sense of them. These are just a few of the problems with many case studies, whether written by forecasters, students, researchers, or even tenured professors.
The purpose of this article is to help raise the quality of case studies by providing some guidance for those who research and write them. I offer 16 principles or rules that represent the standard approaches, best practices, and simple guidelines to help authors communicate their research. My word can hardly be considered definitive on these matters, yet I have seen what works, what survives the peer-review process, and what is effective science. Indeed, the best scientific articles do adhere to many of these principles. Despite this evidence of success for those who follow these 16 principles, the principles are not intended to straitjacket authors into a rigid framework in which individuality and creativity are stifled. Instead, to quote Strunk and White’s (2000, 66) The Elements of Style:
…these cautionary remarks, these subtly dangerous hints, are presented in the form of rules, but they are, in essence, mere gentle reminders; they state what most of us know and at times forget.
I would like to hear your opinions. I hope you will submit your comments on this article, your experiences, and your suggestions on conducting and writing excellent case studies to the EJSSM Scientific Discussions forum.
To motivate this article, I begin by considering the purpose of a case study in section 2 and why an author should write effective case studies in section 3. Then, section 4 narrows the huge potential scope of this article by discussing what will not be covered. The 16 principles for producing a high-quality case study follow, organized into sections by study design (section 5), organization and approaches (section 6), writing the manuscript (section 7), numerical modeling (section 8), and figure design and production (section 9). Section 10 discusses references, and section 11 provides miscellaneous other advice. Finally, section 12 concludes this article.
2. Why write a case study?
Often, case studies describe extreme events that may happen only once in a lifetime. The value of studying a single extreme event is potentially limited by how representative that event is. What is the value of documenting an event that may never be seen again? This is a legitimate question, one that is discussed further in section 5.
By contrast, case studies can have a greater purpose than just documentation. For example, National Weather Service Science and Operations Officer Jon Zeitler has a list of three criteria he looks for in case studies: (i) a unique or rare occurrence of a weather event, (ii) a demonstration of how new or unusual observations can be used to identify, analyze, or forecast an event, and (iii) a demonstration of how theory can be applied, especially for unusual cases. A meaningful case study meets at least two of these three criteria.
The first criterion might also be stated in the words of retired National Weather Service forecaster Jim Johnson, “What the hell was that?” Some phenomena in the atmosphere may not yet be understood, and a high-quality case study brings them to the attention of other meteorologists who might help. Such an observational case study also might lay the groundwork for a field program to collect more detailed observations of the phenomenon or for a model simulation in a future article. Examples include Bosart (1983), McNulty (1991), and Schultz and Knox (2007).
Articles do not have to be limited to a single case. Indeed, comparing and contrasting two or more cases allows the author to show what may regulate the observed differences. Because describing two or more events risks producing a lengthy account, such manuscripts need to be focused to avoid unnecessary details. Examples of manuscripts that each compare two different events include Pagnotti and Bosart (1984), Rogers and Bosart (1991), Schultz (2004b), and Doswell and Haugland (2007).
Finally, published case studies can be helpful to authors performing meta-analyses on a given topic by allowing the collection of all cases in the literature that meet certain criteria. Such an analysis can be a powerful means to challenge orthodoxy. Examples include the meta-analysis of occluded fronts by Schultz and Mass (1993, their Appendix) that demonstrated that cold-type occlusions, if they existed, were rare, and the meta-analysis by Bryan and Fritsch (2000, their Table 2) that showed the existence of moist absolutely unstable layers.
3. Why do it well?
Despite their importance to meteorology, case studies have acquired a bad reputation, partly, I believe, because many studies violate many of the 16 principles presented later in this article. Why should an author care about the quality of case studies, or scientific articles in general?
- A succinct, well-written article is more likely to be read and understood by forecasters, with a greater chance that its lessons will lead to improved forecasts of similar future scenarios.
- The process of writing often forces clarity of thought. Arguments that seem sound inside your head are often improved and refined when written.
- If submitted to a peer-reviewed journal, a poorly written case study may suffer a long, painful review process or be rejected, potentially a tremendous waste of time for the authors, reviewers, and editor.
- If published, a flawed or poorly written case study contributes to a culture of mediocrity that less-experienced readers will believe is normal and acceptable, degrading the quality of our science (Batchelor 1981, 16).
- Poor-quality case studies, if published, can lead others down the wrong path. For example, in the late 1980s and early 1990s, an increasing number of case studies claimed to show the apparent utility of conditional symmetric instability (CSI) to forecasting precipitation bands (Fig. 1 in Schultz and Schumacher 1999; Schultz 2004a). At this time, CSI was the de rigueur explanation for banded or heavy precipitation, regardless of the presence of conditional or inertial instabilities, which some of these studies erroneously failed to diagnose. In fact, CSI does not even have to be present to form snowbands, as argued by Schultz and Schumacher (1999) and as shown for some northeast United States cyclones (Novak et al. 2010). Had these initial case studies been more thorough, many subsequent authors might not have been misled inadvertently.
- Forecasters especially have a responsibility to communicate their observations and research. As noted by Jim Johnson in “Why Forecasters Should Publish” (Schultz 2009, 217): “Forecasters are researchers. Their job requires constant researching of the available data for familiar features. In doing so, forecasters often see unfamiliar features that later turn out to be significant in the evolution of the atmosphere! A few ideas jotted down at the end of the forecast shift can lead eventually to better understanding of these unfamiliar features and their impact upon the current atmospheric problem. In this way, easy documentation is available, making eventual publication of a possibly significant atmospheric phenomenon fairly simple.”
For these reasons, we all have a responsibility to improve the quality of case studies. Effective communication through properly written case studies can be a means to improve our forecasts, educate our colleagues, and advance our science.
4. Three assumptions
It would be an overstatement to say that a single article could contain all the information that an author would need to know to write an effective case study. As such, I want to make it clear what I will not focus on in this article.
a. How to do science
I assume that you are interested in scientific exploration and understand the scientific method. I assume that you know how to construct a hypothesis, evaluate its validity, eliminate competing hypotheses, and support your arguments. If not, I recommend reading Valiela (2001), Booth et al. (2003), and Weston (2009).
b. The structure of a scientific paper
I assume that you know the basic structure of a scientific paper: abstract, introduction, data and methods, results, discussion, and conclusions. If not, then read chapters 9–13 of Day and Gastel (2006), chapter 4 of Schultz (2009), and A Guide to Science Writing by the Journal of Young Investigators (www.jyi.org).
c. Scientific writing
Finally, I assume that you can produce coherent, clear, and precise scientific writing. Writing should flow logically from one idea to the next, rather than being a disconnected series of sentences. Most people can improve their writing with a little guidance, regardless of their ability. To improve scientific writing, if not writing in general, I recommend Gopen and Swan (1990), Strunk and White (2000), and Williams (2006).
5. Designing the study
Sixteen principles for excellent scientific case studies follow.
#1: Have a well-defined purpose.
Before considering writing—indeed, at the start of the research—ask the question, “What is my motivation in studying this case?” Many times research projects start with a busted forecast, damaging storm, or an interesting weather observation. Alternatively, the case may be typical of a weather phenomenon that has plagued forecasters for a long time, but never has been documented. Whatever the reason, a research project with a strong motivation has a greater chance of success.
Beyond your motivation is the question of what results you hope to share with others. The best case studies go beyond a mere description of the event and feature something new: an evolution of a weather phenomenon that has never been observed before, an analysis revealing the physical process responsible for an unusual structure, or a case study embedded within a climatology to establish the frequency of occurrence of a particular weather phenomenon.
Avoid giving a weather briefing, which is just a description of what happened. Mere documentation of a case generally leaves the reader without a take-away message. What was the reader supposed to learn from this event? How will the reader employ the lessons learned from this case? Instead of a weather briefing, deliver a map discussion: a focused, stimulating investigation of some aspect of the case that critically and scientifically evaluates questions aimed at a deeper understanding.
Without a purpose to focus the case study, the author runs the risk of aimless blathering: more weather briefing than map discussion. An explicit purpose statement near the end of the introduction to the paper is powerful. A well-stated purpose also is more likely to interest the audience and gives them a standard by which to evaluate the success of the paper on its own terms.
Three examples of clear and concise purpose statements follow.
- “The purpose of this paper is to determine the physical processes involved in the formation, maintenance, and dissipation of an intense snowband in the 15 December 1987 cyclone over the central United States.”
- “The goal of this paper is to understand why the tornadic outbreak of 20 May 2004 occurred over such a large area.”
- “The purpose of this paper is to determine why the dryline does not strengthen during the day, with the help of a two-season climatology of such days.”
Avoid multipart papers (titled “Part 1” and “Part 2”), which rarely are received favorably by reviewers (Schultz 2010). Reviewers often find that such multipart papers have only one article’s worth of material in them anyway. If you have sufficient material for two or more papers on a similar topic, the best strategy is to publish independent manuscripts, each of which can stand on its own. Section 3.3 in Schultz (2009) discusses further why I discourage authors from writing multipart manuscripts.
#2: Write a clear, concise, informative, and accurate title.
Being the first thing the audience reads of your paper, the title can either attract readers or repel them. A good title consists of “the fewest possible words that adequately describe the contents of the paper” (Day and Gastel 2006, 39). The title should also be clearly worded, concise, informative, and accurate (Lipton 1998). An attention-commanding title doesn’t hurt either.
The title should reflect the purpose of the manuscript. A title such as “An Investigation of the North Carolina Cold-Air Damming Episode of 4 April 2003” is not sufficiently descriptive. “Enhancement of Cold-Air Damming in North Carolina by the Evaporation of Rain on 4 April 2003” is more descriptive and conveys information about the principal findings, although it is a bit long. If the date or location of the event is not significant to the audience, then it could be deleted. Avoid obvious and unnecessary words like “study” or “investigation” in the title. Chapter 3 of Schultz (2009) discusses writing titles in more detail, and an excerpt appears at www.eloquentscience.com/2009/08/excerpt-chapter-3-writing-an-effective-title.
#3: Discuss the frequency of occurrence of the event.
After having read the purpose of the study, the reader next will want to know the likelihood of a similar event happening again. Is that a once-in-a-lifetime flash flood or a yearly occurrence? If the event is unprecedented, say which records were broken and by how much. If the event occurs more frequently, a small climatology, composite analysis, search for analogs, or list of other similar events easily can give some indication of the representativeness of the event. Examples of such a combined case study–climatology article include Colman and Dierking (1992), Colle and Mass (1995), Dean and Bosart (1996), Novak et al. (2004), and Schultz et al. (2004). Examples of a case study–analog article include Gyakum and Roebber (2001) and McTaggart-Cowan et al. (2006). For many events, the author may not know the event frequency; a simple statement to that effect can alert researchers that more work is needed.
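For authors who keep digital station records, even a rough frequency estimate takes only a few lines of code. The following Python sketch counts how often a daily precipitation total as large as the case’s total appears in a long station record; the file name, column names, and threshold are hypothetical placeholders rather than a prescription for any particular archive.

```python
import pandas as pd

# Hypothetical daily station record with columns "date" and "precip_mm";
# the file and column names are illustrative assumptions only.
obs = pd.read_csv("station_daily_precip.csv", parse_dates=["date"])

event_total_mm = 120.0  # the 24-h total observed in the case being studied
exceedances = obs[obs["precip_mm"] >= event_total_mm]
years_of_record = obs["date"].dt.year.nunique()

print(f"{len(exceedances)} days with >= {event_total_mm} mm "
      f"in {years_of_record} years of record "
      f"(about {len(exceedances) / years_of_record:.2f} events per year)")
```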
6. Organization and approaches
#4: Use appropriate datasets and methods.
Most authors would benefit from more planning before conducting a research project. When brainstorming about the manuscript and outlining its body, consider the datasets, methods, analysis techniques, and graphics to use. Have you employed an appropriate dataset that resolves the features to discuss? Are there other available datasets that could help to justify some conclusions or shed light on unconsidered aspects of the case? Are your methods appropriate for the questions asked? Below are two examples.
Example 1. Many automated frontal analyses and some manual frontal analyses use equivalent potential temperature θe or wet-bulb potential temperature θw. Strictly speaking, fronts should be defined using air temperature (if the surface is relatively flat), potential temperature θ, or virtual potential temperature θv (e.g., Sanders and Doswell 1995; Sanders 1999).
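A schematic reminder of why this matters, using standard approximate forms (exact definitions vary slightly among textbooks):

```latex
\theta_v \approx \theta\,(1 + 0.61\,q_v), \qquad
\theta_e \approx \theta\,\exp\!\left(\frac{L_v\,q_v}{c_p\,T_{\mathrm{LCL}}}\right),
```

where q_v is the water vapor mixing ratio. Moisture enters θv only through the small virtual correction, so θv closely tracks the temperature (density) field that defines a front; moisture enters θe with the large weight L_v/(c_p T), so a humidity gradient alone (e.g., a dryline) can produce a strong θe gradient where no density contrast exists.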
Example 2. Occasionally an author will use the quasigeostrophic omega equation (e.g., section 6.4 in Holton 2004) to diagnose surface cyclone development. The omega equation is the wrong tool. The quasigeostrophic height-tendency equation (e.g., section 6.3.1 in Holton 2004) is more appropriate in that context.
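Schematically, with the forcing terms abbreviated (their full forms appear in Holton 2004, sections 6.4 and 6.3.1), the two equations diagnose different quantities:

```latex
\left(\nabla^2 + \frac{f_0^2}{\sigma}\frac{\partial^2}{\partial p^2}\right)\omega = F_\omega ,
\qquad
\left[\nabla^2 + \frac{\partial}{\partial p}\!\left(\frac{f_0^2}{\sigma}\frac{\partial}{\partial p}\right)\right]\chi = F_\chi ,
\qquad \chi \equiv \frac{\partial \Phi}{\partial t} .
```

The first yields the vertical motion ω, whereas the second yields the local geopotential (height) tendency χ, the quantity directly tied to the deepening or filling of the cyclone.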
#5: Where possible, present your results using an ingredients-based approach.
There is a nearly infinite number of maps, charts, and graphics that could be shown to illustrate a case. Some authors try to fit them all into one manuscript! Knowing what subset of maps to present, however, may not be obvious. How do you know what to include?
Many authors focus their research presentations using an ingredients-based approach. Ingredients are those items necessary and sufficient for an event to occur. For example, McNulty (1978) and Johns and Doswell (1992) have articulated the ingredients for deep, moist convection: lift, instability, and moisture. Doswell et al. (1996) have described the ingredients for flash flooding, and Schultz et al. (2002) have discussed the ingredients for winter precipitation.
Authors not employing an ingredients-based approach often do not possess a framework to organize their research and present their data. Without organization, the manuscript often lacks focus, with the authors presenting maps just for the sake of presenting them, rather than for being the best choice to illustrate their case. An ingredients-based approach limits the discussion and the number of figures, keeping the presentation more focused and compact.
#6: Structure your presentation by following the forecast funnel from largest to smallest scales.
Figure 1: The forecast funnel that authors generally should follow when presenting a case study. [Adapted from Snellman (1982).]
Once the ingredients are known, the case study is usually best organized in a manner consistent with how forecasters think. Snellman (1982) describes the forecast funnel process for approaching a weather forecast, starting with developing an understanding of the largest scales first, then understanding progressively smaller-scale phenomena (Fig. 1).
Such an approach also can be useful for writing a case study. For example, placing a convective storm within its large-scale context first allows the reader to better appreciate the storm’s environment and the large-scale processes controlling that environment.
#7: Limit the number of figures to the most essential.
A benefit of using an ingredients-based approach and following the forecast funnel is that the figures remain focused on the purpose of the case study. Limit the number of figures to the bare minimum to tell the story and to convince the audience of your argument. Poorly written case studies often contain too many figures that are unnecessary or too tangential. Although the addition of one or two interesting tidbits about the case that do not contribute to the primary purpose of the study can add some color to the manuscript, too many such tidbits distract and tire readers. Readers may even start to forget the purpose of the case study, lost in a forest of too many irrelevant facts and figures.
To keep readers focused, the author needs focus. One approach to maintaining focus happens before writing starts. Find a large conference table and lay out all the proposed figures in the order they likely will appear in the manuscript, or arrange them on your computer screen. Do these figures tell a story in the proposed order? Will rearranging them improve the story? Are any figures superfluous or missing? Employ ingredients-based and forecast-funnel approaches for the best story.
I am hesitant about offering guidelines on the number of figures in a manuscript because each study will require a different total. Nevertheless, I think a reasonable guideline is to aim for no more than 20 figures containing 50 panels. More than 25 figures or 100 panels will test even the most patient reader.
Finally, authors have a selfish reason for limiting the number of figures in a manuscript. Well-designed figures can be quite time consuming to produce, and poorly constructed figures are common irritations for reviewers. Reducing these potential targets will save time and potentially make your manuscript more appealing to reviewers.
7. Writing the manuscript
#8: Provide evidence for all claims.
Forecasters—faced with often insufficient data, conflicting numerical models, a chaotic atmosphere, and forecast deadlines—necessarily must rely upon intuition in their daily jobs (e.g., Doswell 2004; section 2 in Steenburgh et al. 2010). The intuition that serves them well in forecasting, however, can be anathema to a research article, which requires a compelling argument based on evidence presented in the manuscript.
Similarly, observational case studies, especially those using the operational data stream, often lack all the data needed to draw definitive conclusions. This lack of necessary information should not stop good science from being done, but the author needs to make clear to the readers which conclusions can be drawn safely and which are reasoned speculation.
Consider the following example. If you believe that the vertical circulations associated with horizontal convective rolls in the boundary layer were responsible for organizing deep convection on a given day, then you must present observational evidence: of the rolls (e.g., radar imagery showing the mature circulations), that the environment supported the rolls (i.e., the conditions for horizontal convective rolls were met beforehand), that the circulations preceded the convection, and that the rolls produced ascent where the convection eventually occurred. Without evidence for all of these steps, your argument correspondingly weakens.
If the evidence is inadequate, then it may be appropriate to speculate on the causes, if prefaced with “I/We speculate that….” The primary point of your manuscript, however, cannot rely on speculation. Use speculation sparingly, so as to ensure the legitimacy of your arguments in the manuscript.
Oftentimes authors say that the goal of their case study is to improve forecasts. How a single case study can do so may not be obvious to the reader. For example, a signature in the satellite or radar imagery preceding a severe weather event may be associated with the event, but may not necessarily cause the event or precede all severe weather events. Furthermore, a single case is not sufficient to determine whether the signature is characteristic of all cases that do produce severe weather and absent in all cases that do not. Only a more comprehensive study such as a climatology can provide such evidence. Therefore, be careful when claiming a silver bullet for forecasting a particular phenomenon based on a single case study.
#9: Avoid map-room jargon, imprecise wording, and incorrect scientific concepts.
Map-room jargon likely arose as meteorologists developed shorthand, colorful, or humorous terminology to describe more complicated aspects of the science. Although some jargon is essential for scientific communication, other jargon is simply inappropriate for legitimate scientific discourse. Below are five different types of jargon to omit from your writing and your speaking.
- Avoid colorful or discipline-specific terms known only to insiders; or, if using the term is absolutely necessary, define the jargon upon first usage. Examples include tail-end Charlie, bomb, and ring of fire.
- Jargon with empty or imprecise meaning should not be used. Examples include “thunderstorm activity” (be more specific about the quantity being measured: “cloud-to-ground flash rate”), “strong dynamics” (be more precise: “strong 500-hPa absolute vorticity maximum”), and “the mountain plays an important role in the lee-side convergence” (“important role” is vague; be more specific about how the mountain is producing the convergence).
- Avoid jargon with poorly defined scientific meaning. Rather than using terms that obscure physical insight, describe the physical processes instead. Examples include overrunning, triggering, low-topped convection, and moisture pooling.
- Some phrases, when taken literally, are inappropriate or meaningless. Examples include shallow moisture, dry punch, base of the trough, digging short-wave trough, and lifting northeastward.
- Finally, some jargon includes phrases that are scientifically incorrect. Such statements include the following examples.
Figure 2: A figure used to define the geographical locations described in a manuscript’s text body. Elevation above mean sea level (m) is shaded according to the scale. Plus signs indicate locations of available MesoWest stations. Bold roman text labels the special observing systems (e.g., mobile laboratories, radars). Regular roman text labels observing stations, ski resorts, and cities. Bold italic text labels geographic and political areas. Thin solid lines represent county and lake boundaries. The gray circle around DOW2 represents the 30-km range ring. The dashed box represents the location of an inset figure not shown here. [Figure and caption adapted from Fig. 1a in Schultz and Trapp (2003).]
- “A region of upper-level divergence moved over a region of low-level convergence, producing strong ascent.” Because ascent is directly linked to the vertical distribution of mass divergence through the continuity equation, this sentence implies cause and effect between mass divergence and ascent, which misrepresents what really is a single process (see the schematic relation after this list).
- “The resolution of the NAM is 8 km.” The true resolution (the smallest feature the model represents faithfully) is actually several times coarser than the grid spacing (e.g., Pielke 1991, 2001; Laprise 1992; Grasso 2000a,b). Thus, authors should use grid spacing or grid interval instead.
- “The cold front propagated from South Dakota to Oklahoma over the next 36 hours.” Propagate is usually used as a fancy-sounding word for move. Strictly speaking, the movement of a feature is composed of an advective component (translation) and a nonadvective component (propagation). Because cold fronts, Rossby waves, and squall lines usually have both advective and nonadvective components of motion, reserve the word propagate for the nonadvective component.
- “The convection was preceded by a region of surface moisture flux convergence.” Moisture flux convergence is not an appropriate diagnostic tool for convection initiation (Banacos and Schultz 2005).
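On the first of these points, the kinematic link referred to above follows directly from the continuity equation in pressure coordinates (a schematic form, assuming ω ≈ 0 at the surface pressure p_sfc):

```latex
\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial \omega}{\partial p} = 0
\quad\Longrightarrow\quad
\omega(p) \;\approx\; \int_{p}^{p_{\mathrm{sfc}}}
\left(\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y}\right) dp' .
```

Low-level convergence, upper-level divergence, and the ascent between them are thus different faces of the same column of mass continuity, not a chain of cause and effect.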
Appendix B, “Commonly Misused Scientific Words and Expressions” from Schultz (2009), has further examples of how to make language more precise and is available from www.eloquentscience.com/category/excerpts.
#10: Clearly define the geography.
Because science (and the weather) is global, research results from the United States may be relevant to forecasters and researchers around the world, and vice versa. Just as most American scientists would not necessarily be able to identify the provinces in China, neither should Americans expect Chinese readers to know the U.S. states and their locations. Some region-specific terminology (e.g., Capital District, Golden Triangle) or local geographical locations (e.g., rivers, lesser mountain ranges) may not be known to readers, even within your own country.
You can do several things to help readers understand the geographical setting of an article. The first is to minimize the use of geographical terms that may not be widely known, or to define the terms when first used. The second is to include a figure whose purpose is to locate the geographical places described in the text (Fig. 2). Even better is to label the geography directly on each figure (Fig. 3), if such annotations do not cause clutter. Every map also should have a horizontal length scale and, if the orientation otherwise would be ambiguous, an indication of which direction is north.
8. Special guidelines for numerical modeling studies
#11: Model simulations should do more than just attempt to replicate the observations.
With the advent of cheap computing power and freely available mesoscale numerical models such as WRF (Weather Research and Forecasting model), MM5 (Penn State–National Center for Atmospheric Research Mesoscale Model, Version 5), and the workstation Eta, modeling a weather event is now within the reach of nearly anyone. Simulating an event to see if the model can reproduce it, however, is not sufficient cause for a published case study.
A model simulation that successfully reproduces the event in question is a powerful tool for understanding the relevant physical processes involved. That simulation, however, is only the first part of producing a scientific publication—proper diagnosis is the second component. The model output is a four-dimensional, dynamically consistent dataset, and this output can often be a substitute for the lack of observational data (assuming, of course, that a realistic simulation is achieved).
Figure 3: Six-hour precipitation (mm, color) ending 0000 UTC 17 February 2007 from a) Stage IV product (merged rain gauge and radar-derived precipitation) and b) WRF model output. [Adapted from Schumacher et al. (2010). Figure panels courtesy of Russ Schumacher.]
For example, computing diagnostic quantities from the model output—such as moist potential vorticity (e.g., Novak et al. 2006; Schultz and Knox 2007), frontogenesis (e.g., Schultz 2004b; Novak et al. 2006), or the terms in the momentum equation (e.g., Colle and Mass 1995)—can provide insights into the relevant physical processes. Alternatively, rerunning the model after altering the topography (e.g., Onton and Steenburgh 2001), water temperature (e.g., Onton and Steenburgh 2001), the transparency of the clouds to solar radiation (e.g., Roebber et al. 2002), or the assimilation of certain data points (e.g., Zhang et al. 2002) may demonstrate the importance of some physical processes and their sensitivity to the forecast of the event, as well.
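As one illustration, a minimal numpy sketch of a 2D kinematic (Petterssen-type) frontogenesis calculation from gridded model output follows; the grid layout, variable names, and units are assumptions, and the vertical and diabatic terms are neglected, so treat it as a starting point rather than a full diagnostic package.

```python
import numpy as np

def kinematic_frontogenesis(theta, u, v, dx, dy):
    """2D kinematic frontogenesis F = d|grad(theta)|/dt (K m-1 s-1),
    neglecting vertical motion and diabatic heating.  Inputs are 2D
    arrays indexed [y, x]: theta (K) and u, v (m s-1) on a regular
    grid with spacings dx, dy (m)."""
    dtheta_dy, dtheta_dx = np.gradient(theta, dy, dx)
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    grad_mag = np.hypot(dtheta_dx, dtheta_dy)
    grad_mag = np.where(grad_mag == 0.0, np.nan, grad_mag)  # avoid 0/0
    frontogenesis = -(dtheta_dx * (du_dx * dtheta_dx + dv_dx * dtheta_dy) +
                      dtheta_dy * (du_dy * dtheta_dx + dv_dy * dtheta_dy)) / grad_mag
    return frontogenesis  # often rescaled to K (100 km)-1 (3 h)-1 for plotting
```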
#12: Thoroughly critique model output.
The next task is to verify that the model output “faithfully” represents the observations. Faithfully is in quotation marks to indicate that different people will have different opinions about how well the model represents the observations. Some may be quite concerned about the ability of the model to predict details of precipitation or wind, whereas others may not, preferring instead to focus on a satisfactory large-scale forecast. Some may want a near-perfect correspondence between the model output and data, whereas others may be comfortable with a lesser correspondence.
Consider Fig. 3. At first glance, the WRF model produced precipitation in more-or-less the right areas: northern Nebraska and southern South Dakota, orographic precipitation in central Colorado, and northwest–southeast-oriented bands over eastern Colorado. Thus, an author might write, “the model simulation reproduced the observed precipitation features.” If no more was said, however, the author would risk losing credibility with the readers.
Instead, many differences are apparent. The modeled bands in eastern Colorado are much narrower than their observed counterparts. The interesting precipitation structures in northern Nebraska and southern South Dakota are modeled as a broad region of precipitation. The modeled precipitation maxima are less than those observed, and the orographic precipitation is less widespread in the simulation.
Despite these important differences, the model simulation may be of sufficient veracity to diagnose this event and understand the reasons why the precipitation bands in eastern Colorado happened, as in the Schumacher et al. (2010) simulation. Looking at the model output with a critical eye demonstrates to the readers that, yes, the model was not perfect, but it may be acceptable for the purposes of the manuscript.
9. Figures
When doing research, we often create working figures, rough-draft versions created with the default settings in the software and not tailored to the specifics of the manuscript. These figures should not be thrown into the manuscript without careful editing and redesign. Publication-quality figures clean up these working figures by making them more clear, understandable, and aesthetic. The creation of publication-quality figures may take considerable time and effort that usually are repaid by kinder comments from the reviewers.
#13: Design the figures to be readable when printed at the reduced size in the journal.
Design the figures to the size that they will appear in the journal. Make sure that all the font sizes and wind barbs are large enough to read. Beware of dotted lines, which can disappear when the figure is reduced for the journal. Use sans serif fonts (e.g., Helvetica, Arial) because they survive reproduction better than serif fonts (e.g., Times New Roman).
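For those producing figures in Python, a minimal matplotlib sketch of this advice follows; the 3.2-inch column width and 8-pt font are assumptions standing in for whatever the target journal actually specifies.

```python
import matplotlib.pyplot as plt

# Size the figure at its final printed width and use a sans serif font
# large enough to survive reproduction; avoid fine dotted lines.
plt.rcParams.update({
    "font.family": "sans-serif",
    "font.sans-serif": ["Helvetica", "Arial"],
    "font.size": 8,
    "lines.linewidth": 1.0,
})

fig, ax = plt.subplots(figsize=(3.2, 2.4))  # width x height (inches)
ax.plot([0, 6, 12, 18, 24], [1008, 1004, 998, 995, 1000], color="k")
ax.set_xlabel("Time (h)")
ax.set_ylabel("Sea level pressure (hPa)")
fig.savefig("figure01.png", dpi=300, bbox_inches="tight")
```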
#14: Make similar figures consistent with each other.
In many case-study manuscripts, a common figure style may be repeated. For example, the surface map or sounding diagram may be presented at three different times in different places throughout the manuscript. Each figure should be designed to be as consistent with the others as possible. If possible, use the same map background, line widths, color schemes, etc. Variable names, symbols, units, and contour intervals should be consistent between similar figures, between the text and the figures, and with convention. For example, surface pressure fields should be contoured every 4 hPa (or every 1 or 2 hPa for mesoscale analyses), not every 2.5 or 5 hPa. Similar figures should be plotted at the same size for easy comparison.
Alternatively, designing a figure with multiple panels (e.g., Fig. 3) can help with comparisons between similar fields. Label the panels (a), (b), (c), etc.
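One way to enforce this consistency is to route every panel through the same plotting function, as in the sketch below; the synthetic pressure fields and the 4-hPa contour interval simply illustrate the idea.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_mslp(ax, x, y, mslp, label):
    """Draw one panel with a fixed 4-hPa contour interval and styling
    shared by all panels, plus a panel label such as (a) or (b)."""
    cs = ax.contour(x, y, mslp, levels=np.arange(960, 1044, 4),
                    colors="k", linewidths=0.8)
    ax.clabel(cs, fmt="%d", fontsize=6)
    ax.text(0.02, 0.94, label, transform=ax.transAxes, weight="bold")

# Synthetic sea level pressure fields standing in for two analysis times.
x, y = np.meshgrid(np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))
mslp_00h = 1012 - 16 * np.exp(-((x - 400)**2 + (y - 500)**2) / 2e5)
mslp_12h = 1012 - 24 * np.exp(-((x - 600)**2 + (y - 550)**2) / 2e5)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(6.5, 3.0), sharey=True)
plot_mslp(ax1, x, y, mslp_00h, "(a)")
plot_mslp(ax2, x, y, mslp_12h, "(b)")
fig.savefig("mslp_panels.png", dpi=300, bbox_inches="tight")
```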
#15: Annotate figures to highlight important features for the reader.
Once figures are designed, it may be useful to include annotations to guide the reader to important regions (e.g., highlighting a featured vorticity maximum, locating the rear-inflow notch on radar imagery, drawing a 1:1 line on a scatterplot, including error bars, labeling geographical locations). Also, plotting software may produce a default title or legend that needs to be replaced by something more sensible. Rather than using the default legend for a line graph, try labeling the lines directly.
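A small sketch of direct labeling, with synthetic temperature traces used purely for illustration:

```python
import matplotlib.pyplot as plt

hours = list(range(0, 25, 3))
t_obs = [12, 11, 10, 14, 18, 20, 19, 15, 13]
t_model = [12, 12, 11, 13, 16, 18, 18, 16, 14]

fig, ax = plt.subplots(figsize=(3.2, 2.4))
ax.plot(hours, t_obs, color="k")
ax.plot(hours, t_model, color="k", linestyle="--")
# Label each line at its right-hand end instead of using a legend box.
ax.text(hours[-1] + 0.5, t_obs[-1], "Observed", va="center", fontsize=8)
ax.text(hours[-1] + 0.5, t_model[-1], "Model", va="center", fontsize=8)
ax.set_xlabel("Time (UTC)")
ax.set_ylabel("Temperature (°C)")
fig.savefig("t2m_trace.png", dpi=300, bbox_inches="tight")
```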
Avoid what Tufte (2001, chapter 5) called chartjunk: extraneous grid lines, annotations, three-dimensional effects on two-dimensional graphs, and unnecessary graphical flourishes that detract from, compete with, or obscure the data rather than supplement or enhance them. The data on the figure are of the highest importance, and every effort should be made to have the data stand out from the rest of the figure. For example, graphs from Microsoft® Excel may need to be overhauled severely (Su 2008) before publication.
Other tips for figures include the following.
- Avoid wasted white space between and within the figure panels. Use space wisely to maximize the size of the figure in the journal.
- Label axes in words and with the proper units [“Temperature (°C),” not “T”]. Stick with standard units, and use SI units wherever possible. Label maps with the quantities being plotted so that the reader does not have to read the figure caption to understand the figure.
- If plotting wind barbs, describe the plotting convention in the caption: “pennant, full barb, and half-barb denote 25, 5, and 2.5 m s-1, respectively” (see the sketch after this list).
- Be careful with color schemes that use both red and green (such as the standard wind products from the U.S. Doppler radar network), which may be difficult to interpret for people with red-green colorblindness (about 10% of males and 2% or less of females). Upload your figures to www.vischeck.com or www.ryobi-sol.co.jp/visolve/en/deflector.html to view them as a colorblind individual would.
- Many online journals are publishing animations (including EJSSM), which can be an effective tool to advance your argument. The American Meteorological Society journals will host animations as electronic supplements on their Web site.
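For the wind-barb item above, the following matplotlib sketch plots barbs using the metric convention quoted in that caption; the synthetic wind field and the choice of matplotlib are assumptions on my part.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic wind field (m/s) on a coarse grid, purely for illustration.
x, y = np.meshgrid(np.arange(0, 10), np.arange(0, 10))
u = 5.0 + 2.0 * y
v = -2.0 + 0.5 * x

fig, ax = plt.subplots(figsize=(3.2, 3.2))
# Pennant, full barb, and half-barb = 25, 5, and 2.5 m/s, matching the
# caption convention recommended in the text.
ax.barbs(x, y, u, v, length=5,
         barb_increments={"half": 2.5, "full": 5, "flag": 25})
ax.set_xlabel("x (grid points)")
ax.set_ylabel("y (grid points)")
fig.savefig("wind_barbs.png", dpi=300, bbox_inches="tight")
```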
For more information about the effective design of specific types of figures (e.g., line graphs, scatterplots, bar charts, horizontal maps), read section 11.7 in Schultz (2009).
#16: Fully describe and cite all figures and figure panels, in the text and in the captions.
Simply put, if you show a figure or a figure panel, cite it and explain it within the text. If figure panels are not explained in the text but are included “for completeness,” they should be eliminated. Cite figures in the text frequently so that the reader knows what specific figure is mentioned.
When describing the figure, discuss the most obvious aspects first, even if these are not the features of primary interest. Doing so gives the readers confidence that they are interpreting the figure correctly and prepares the readers for a more detailed look.
Do not repeat information in the text that belongs in the caption. For example, “Fig. 5 shows the 500-hPa heights and relative vorticity, indicating that the short-wave trough moved over Iowa at 1200 UTC.” This sentence can be written more simply as “The short-wave trough moved over Iowa at 1200 UTC (Fig. 5).” The caption should include a description of all figure elements. The meaning of all lines should be discussed and the units and contour intervals should be included. All shadings or color schemes should be described and have a legend, if necessary.
10. References
Too many submitted manuscripts lack basic etiquette in their citations and reference lists. Include these points in your final checklist before submission.
- Make sure all items in the reference list are cited in the text.
- Make sure all citations in the text appear in the reference list.
- Triple-check the accuracy of the reference list. Ensure the correctness of the author lists, titles, years, journal names, volume numbers, page numbers, and ordering of references. Make sure that all authors’ names are spelled correctly (lest an author whose name you misspell end up being one of your reviewers or your editor!).
11. Other formatting and terminology issues
- Follow the format of the journal to which you are submitting. Many journals have formatting guidelines on their Web page. For example, EJSSM has its Guide for Authors, Reviewers, and Editors on the right-hand side of their Web page www.ejssm.org, and the American Meteorological Society has its Authors’ Guide at the Authors’ Resource Center, linked from www.ametsoc.org. If your manuscript is intended for a journal where the format is not stated, pick a style and be consistent throughout your manuscript.
- Use proper format for dates and times: 1225 UTC 8 March 2006, not March 8th and not 08.03.06, which can be confusing (3 August 2006? 6 March 2008?). If using local standard time (LST) in the manuscript, provide a conversion to UTC. For example, “The sea breeze began to strengthen after 1400 LST (UTC=LST+6 h).”
- Consider using the terms poleward and equatorward as hemispheric-neutral counterparts to northward and southward. Doing so will help your audience in the Southern Hemisphere read your paper more easily. The same is true of using cyclonic vorticity instead of positive vorticity.
- Acronyms and abbreviations should be defined upon first usage, unless otherwise specified by the journal. Also, if you are introducing an abbreviation for a long-winded term or a reference [e.g., SLV for Salt Lake Valley, SD95 for Sanders and Doswell (1995)], reconsider, especially if it is only used a few times. Although such abbreviations may make writing easier for an author, readers will find remembering infrequently used acronyms more annoying than simply spelling them out.
- First-person pronouns (I, we) are becoming more commonly accepted in scientific writing, and for good reason. They can improve awkward-sounding phrases like “It is hypothesized that” [Who is hypothesizing?] and “The authors selected five years of data.” Avoid overusing first-person pronouns in the data and methods sections of the manuscript, however, which risks sounding unprofessional. Instead, use first-person pronouns to defend choices in the research that may affect the outcome. Examples include “We speculate that…” and “Of the two choices available, we chose the second as our method because…”
- Minimize phrases that begin with it or there (e.g., “it is well known that”, “it can be shown that”, “it is very likely that”, “there are”). Most of these phrases can be deleted without loss of meaning, or replaced with a single word (e.g., “it can be noted that” can be omitted, “it is possible that” can be replaced with “possibly”).
- Use SI units wherever possible. If non-SI units need to be presented, include the equivalent value in the other unit system in parentheses. For example, “the precipitation gauge measured in units of 0.01 inches (0.25 mm).”
- Finally, before submission, run your manuscript through spell-checking and grammar-checking programs. Do not expect perfection from these pieces of software, but use them as guidance to ensure that you have caught most of the egregious errors.
12. Conclusion
Although following these 16 principles (collected in Table 1) will not guarantee that your paper will be accepted for publication, these principles will help you avoid the common pitfalls that have trapped others. By presenting a clear, well-organized, and scientifically justifiable case study, you will begin to develop a reputation as a clear thinker and presenter. You may even be invited to give presentations at conferences and training courses.
Writing a case study and doing it well can be a substantial effort, so be prepared to invest the time to do it right. Do not rush a manuscript out the door. Seek the guidance of others during the research and writing phases, especially those with experience and reputations as good scientists and communicators. Give a presentation at your forecast office, university, laboratory, or at a regional weather conference. These opportunities can bring feedback to improve the manuscript and to develop your scientific skills. Furthermore, they may result in future research collaborations. A little effort can go a long way, and the rewards are potentially quite great, indeed.
Table 1: The 16 principles to writing an effective case study.
#1: Have a well-defined purpose.
#2: Write a clear, concise, informative, and accurate title.
#3: Discuss the frequency of occurrence of the event.
#4: Use appropriate datasets and methods.
#5: Where possible, present your results using an ingredients-based approach.
#6: Structure your presentation by following the forecast funnel from largest to smallest scales.
#7: Limit the number of figures to the most essential.
#8: Provide evidence for all claims.
#9: Avoid map-room jargon, imprecise wording, and incorrect scientific concepts.
#10: Clearly define the geography.
#11: Model simulations should do more than just attempt to replicate the observations.
#12: Thoroughly critique model output.
#13: Design the figures to be readable when printed at the reduced size in the journal.
#14: Make similar figures consistent with each other.
#15: Annotate figures to highlight important features for the reader.
#16: Fully describe and cite all figures and figure panels, in the text and in the captions.
Acknowledgments
This article grew out of writing my book Eloquent Science: A Practical Guide to Becoming a Better Writer, Speaker, and Atmospheric Scientist (Schultz 2009)—the result of too much good advice on writing effective case studies to cram into one chapter. I thank my past advisors and mentors who taught me how to perform research and write it up: Clifford Mass, Lance Bosart, Daniel Keyser, and Charles Doswell. I especially thank Charles Doswell for his inspiration for Appendix B in Schultz (2009). I also thank my two closest collaborators for their years of inspiration, challenges, and friendship when cowriting journal articles: Paul Roebber and Jim Steenburgh. I thank the three formal reviewers (John Lewis, Paul Markowski, and Greg Mann), editors Roger Edwards and Ryan McCammon, as well as the following colleagues who provided comments on an earlier version of this manuscript: Thomas Andretta, Peter Banacos, Matthew Bunkers, Jay Charney, Stephen Corfidi, Chris Davis, Nikolai Dotzek, Jim Johnson, Paul Roebber, and Jon Zeitler. Partial funding comes from Vaisala Oyj.
REFERENCES
Banacos, P. C., and D. M. Schultz, 2005: The use of moisture flux convergence in forecasting convective initiation: Historical and operational perspectives. Wea. Forecasting, 20, 351–366.
Batchelor, G. K., 1981: Preoccupations of a journal editor. J. Fluid Mech., 106, 1–25.
Booth, W. C., G. G. Colomb, and J. M. Williams, 2003: The Craft of Research. 2nd ed. University of Chicago Press, 329 pp.
Bosart, L. F., 1983: Analysis of a California Catalina eddy event. Mon. Wea. Rev., 111, 1619–1633.
Bryan, G. H., and J. M. Fritsch, 2000: Moist absolute instability: The sixth static stability state. Bull. Amer. Meteor. Soc., 81, 1207–1230.
Colle, B. A., and C. F. Mass, 1995: The structure and evolution of cold surges east of the Rocky Mountains. Mon. Wea. Rev., 123, 2577–2610.
Colman, B. R., and C. F. Dierking, 1992: The Taku wind of southeast Alaska: Its identification and prediction. Wea. Forecasting, 7, 49–64.
Day, R. A., and B. Gastel, 2006: How to Write and Publish a Scientific Paper. 6th ed. Cambridge University Press, 302 pp.
Dean, D. B., and L. F. Bosart, 1996: Northern Hemisphere 500-hPa trough merger and fracture: A climatology and case study. Mon. Wea. Rev., 124, 2644–2671.
Doswell, C. A. III, 2004: Weather forecasting by humans—Heuristics and decision making. Wea. Forecasting, 19, 1115–1126.
——, and M. J. Haugland, 2007: A comparison of two cold fronts—Effects of the planetary boundary layer on the mesoscale. Electronic J. Severe Storms Meteor., 2 (4), 1–12.
——, H. E. Brooks, and R. A. Maddox, 1996: Flash flood forecasting: An ingredients-based methodology. Wea. Forecasting, 11, 560–581.
Gopen, G. D., and J. A. Swan, 1990: The science of scientific writing. Amer. Sci., 78, 550–558. [Available online at www.americanscientist.org/issues/feature/the-science-of-scientific-writing/1.]
Grasso, L. D., 2000a: The differentiation between grid spacing and resolution and their application to numerical modeling. Bull. Amer. Meteor. Soc., 81, 579–580.
——, 2000b: Reply. Bull. Amer. Meteor. Soc., 81, 2479.
Gyakum, J. R., and P. J. Roebber, 2001: The 1998 ice storm—Analysis of a planetary-scale event. Mon. Wea. Rev., 129, 2983–2997.
Holton, J. R., 2004: An Introduction to Dynamic Meteorology. 4th ed. Elsevier, 535 pp.
Johns, R. H., and C. A. Doswell III, 1992: Severe local storms forecasting. Wea. Forecasting, 7, 588–612.
Laprise, R., 1992: The resolution of global spectral models. Bull. Amer. Meteor. Soc., 73, 1453–1454.
Lewis, J. M., 1996: Joseph G. Galway. Wea. Forecasting, 11, 263–268.
——, 2007: A forecaster’s story: Robert H. Johns. Electronic J. Severe Storms Meteor., 2 (7), 1–19.
Lipton, W. J., 1998: The Science Editor’s Soapbox. 93 pp. [Available from Science Soapbox, P. O. Box 16103, Fresno, CA 93755-6103.]
MacDonald, A. E., 2000: Leonard W. Snellman 1920–1999. Bull. Amer. Meteor. Soc., 81, 847–848.
McNulty, R. P., 1978: On upper tropospheric kinematics and severe weather occurrence. Mon. Wea. Rev., 106, 662–672.
——, 1991: Downbursts from innocuous clouds: An example. Wea. Forecasting, 6, 148–154.
McTaggart-Cowan, R., L. F. Bosart, C. A. Davis, E. H. Atallah, J. R. Gyakum, and K. A. Emanuel, 2006: Analysis of Hurricane Catarina (2004). Mon. Wea. Rev., 134, 3029–3053.
Novak, D. R., L. F. Bosart, D. Keyser, and J. S. Waldstreicher, 2004: An observational study of cold season–banded precipitation in northeast U.S. cyclones. Wea. Forecasting, 19, 993–1010.
——, J. S. Waldstreicher, D. Keyser, and L. F. Bosart, 2006: A forecast strategy for anticipating cold season mesoscale band formation within eastern U.S. cyclones. Wea. Forecasting, 21, 3–23.
——, B. A. Colle, and A. R. Aiyyer, 2010: Evolution of mesoscale precipitation band environments within the comma head of northeast U.S. cyclones. Mon. Wea. Rev., 138, doi:10.1175/2010MWR3219.1.
Onton, D. J., and W. J. Steenburgh, 2001: Diagnostic and sensitivity studies of the 7 December 1998 Great Salt Lake–effect snowstorm. Mon. Wea. Rev., 129, 1318–1338.
Pagnotti, V., and L. F. Bosart, 1984: Comparative diagnostic case study of East Coast secondary cyclogenesis under weak versus strong synoptic-scale forcing. Mon. Wea. Rev., 112, 5–30.
Pielke, R. A., Sr., 1991: A recommended specific definition of “resolution.” Bull. Amer. Meteor. Soc., 72, 1914.
——, 2001: Further comments on “The differentiation between grid spacing and resolution and their application to numerical modeling.” Bull. Amer. Meteor. Soc., 82, 699.
Roebber, P. J., D. M. Schultz, and R. Romero, 2002: Synoptic regulation of the 3 May 1999 tornado outbreak. Wea. Forecasting, 17, 399–429.
Rogers, E., and L. F. Bosart, 1991: A diagnostic study of two intense oceanic cyclones. Mon. Wea. Rev., 119, 965–996.
Sanders, F., 1999: A proposed method of surface map analysis. Mon. Wea. Rev., 127, 945–955.
——, and C. A. Doswell III, 1995: A case for detailed surface analysis. Bull. Amer. Meteor. Soc., 76, 505–521.
Schultz, D. M., 2004a: Historical research in the atmospheric sciences: The value of literature reviews, libraries, and librarians. Bull. Amer. Meteor. Soc., 85, 995–999.
——, 2004b: Cold fronts with and without prefrontal wind shifts in the central United States. Mon. Wea. Rev., 132, 2040–2053.
——, 2009: Eloquent Science: A Practical Guide to Becoming a Better Writer, Speaker, and Atmospheric Scientist. Amer. Meteor. Soc., 412 pp. [Excerpts available online at www.eloquentscience.com.]
——, 2010: Rejection rates for multiple-part manuscripts. Scientometrics, submitted.
——, J. V. Cortinas Jr., and C. A. Doswell III, 2002: Comments on “An operational ingredients-based methodology for forecasting midlatitude winter season precipitation.” Wea. Forecasting, 17, 160–167.
——, and R. J. Trapp, 2003: Nonclassical cold-frontal structure caused by dry subcloud air in northern Utah during the Intermountain Precipitation Experiment (IPEX). Mon. Wea. Rev., 131, 2222–2246.
——, D. S. Arndt, D. J. Stensrud, and J. W. Hanna, 2004: Snowbands during the cold-air outbreak of 23 January 2003. Mon. Wea. Rev., 132, 827–842.
——, and J. A. Knox, 2007: Banded convection caused by frontogenesis in a conditionally, symmetrically, and inertially unstable environment. Mon. Wea. Rev., 135, 2095–2110.
——, and C. F. Mass, 1993: The occlusion process in a midlatitude cyclone over land. Mon. Wea. Rev., 121, 918–940.
——, and P. N. Schumacher, 1999: The use and misuse of conditional symmetric instability. Mon. Wea. Rev., 127, 2709–2732; Corrigendum, 128, 1573.
Schumacher, R. S., D. M. Schultz, and J. A. Knox, 2010: Convective snowbands downstream of the Rocky Mountains in an environment with conditional, dry symmetric, and inertial instabilities. Mon. Wea. Rev., submitted.
Snellman, L. W., 1982: Impact of AFOS on operational forecasting. Preprints, Ninth Conf. on Weather Forecasting and Analysis, Seattle, WA, Amer. Meteor. Soc., 13–16.
Steenburgh, W. J., D. M. Schultz, B. Snyder, and M. Meyers, 2010: Bridging the gap between operations and research to improve weather prediction in mountainous regions. Mountain Weather and Forecasting Monograph, Amer. Meteor. Soc., submitted.
Strunk, W., Jr., and E. B. White, 2000: The Elements of Style. 4th ed. Allyn and Bacon, 105 pp.
Su, Y.-S., 2008: It’s easy to produce chartjunk using Microsoft® Excel 2007 but hard to make good graphs. Computational Statistics and Data Analysis, 52, 4594–4601.
Tufte, E. R., 2001: The Visual Display of Quantitative Information. 2nd ed. Graphics Press, 197 pp.
Valiela, I., 2001: Doing Science: Design, Analysis, and Communication of Scientific Research. Oxford University Press, 294 pp.
Weston, A., 2009: A Rulebook for Arguments. 4th ed. Hackett Publishing, 88 pp.
Williams, J. M., 2006: Style: Lessons in Clarity and Grace. 9th ed. Longman, 304 pp.
Zhang, F., C. Snyder, and R. Rotunno, 2002: Mesoscale predictability of the “surprise” snowstorm of 24–25 January 2000. Mon. Wea. Rev., 130, 1617–1632.