Building a better hurricane ‘cone of uncertainty’
Posted on 19 October 2022 by Guest Author
This is a re-post from Yale Climate Connections by Bob Henson
The now widely familiar “cone of uncertainty” for hurricanes in some ways is a victim of its own success.
Introduced in 2002, the cone was innovative in its day. Until then, forecasts from the National Hurricane Center (NHC) had typically been portrayed as a single line showing the projected track of the center point of a tropical cyclone. Such forecasts originally extended out three days, then to five days starting in 2003.
The cone put flesh on the bones of that “skinny line,” giving a visual sense of the range of uncertainty in the track forecast. Forecasters also hoped the cone would reinforce the important notion that a hurricane’s impacts were broader than a single line.
The cone was a hit, adopted widely in TV weathercasts (where some broadcasters had already been experimenting with cone-style depictions) and eventually serving as a handy graphic to share via social media. Nowadays, the cones for each storm are among the most heavily accessed graphics on NHC’s website.
But probabilities are a tricky thing to convey. For almost 60 years, the National Weather Service (NWS) has issued probabilities of precipitation. Yet many people still misinterpret those numbers – believing, for example, that a “20% chance of rain” means it will rain 20 percent of the time, or that it “shouldn’t” rain at all, rather than that the chances of receiving at least 0.01 inch of precipitation at a given spot are two in ten.
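The intended meaning can be made concrete with a short simulation – a toy sketch, not an NWS product; the function name and numbers here are purely illustrative:

```python
import random

def frac_rainy_days(pop=0.20, n_days=10_000, seed=1):
    """Simulate many days, each carrying a 20% probability of at least
    0.01 inch of rain at a fixed point. PoP is the per-day chance that
    measurable rain occurs at that point -- not the fraction of the day
    it rains. (Illustrative sketch only.)"""
    rng = random.Random(seed)
    rainy = sum(rng.random() < pop for _ in range(n_days))
    return rainy / n_days
```

Run over many such days, roughly one in five ends up with measurable rain – which is exactly what the definition says, and nothing more.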
The cone has spawned its own confusions, some of which came into sharp relief during Hurricane Ian. Many depictions of the cone, for example, still show the skinny line at the center, even though experts have long exhorted people to focus on the cone’s breadth rather than on the central line. (NHC provides versions of the cone both with and without the central line.)
One of the points that most often confuses the general public is that tropical cyclones often stray beyond the cone – about one-third of the time, in fact, as noted on the NHC’s explanatory page.
The lines on each side of the cone are drawn to encompass eight circles (not shown in the graphic), with each circle corresponding to one of the time steps on the forecast track. The radius of the circle at each time step is fixed such that two-thirds of all NHC track errors over the previous five years at that forecast time step (such as 48 or 72 hours) fall within the circle. (In theory, the cone could be enlarged to encompass a higher fraction of storms, but that approach would also increase the area covered by the cone and could be seen as over-warning.)
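The percentile calculation behind each circle can be sketched in a few lines of Python – a simplified illustration of the approach described above, not NHC’s actual code, and the error values are made up:

```python
import statistics

def cone_radius(track_errors_nm):
    """Circle radius (nautical miles) enclosing two-thirds of the
    historical track errors at one forecast time step."""
    # The upper tertile cut point is the 66.7th percentile.
    return statistics.quantiles(track_errors_nm, n=3, method="inclusive")[1]

# Hypothetical 48-hour track errors (nm) drawn from five prior seasons:
errors_48h = [12, 25, 31, 40, 48, 55, 62, 70, 85, 110, 140, 200]
radius_48h = cone_radius(errors_48h)
```

By construction, at least two-thirds of the listed errors fall within the resulting radius – and as the historical errors shrink over the years, so does the radius.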
As forecasts have improved over the years, the cone has gradually narrowed. But because of the two-thirds guideline, plenty of storm centers can still be expected to stray outside the cone. In the four days before its landfall, Ian tracked close to the right-hand edge of each forecast cone.
The less-than-100%-inclusive nature of the cone doesn’t appear to be widely understood. A survey of more than 2,800 Floridians, led by Scotney Evans of the University of Miami and now in early release at the Bulletin of the American Meteorological Society (BAMS), found that nearly half of respondents assumed that the cone showed all of the potential tracks for a hurricane.
“Our analysis suggests that many residents have difficulty interpreting several aspects, suggesting a rethink on how to graphically communicate aspects such as uncertainty; the size of the storm; areas of likely damage; watches and warnings; and wind intensity categories,” Evans and colleagues said. In some cases, better-educated respondents were actually more likely to misinterpret certain aspects of the cone.
The survey was part of a multi-year project involving experts in meteorology, social science, visualization design, and user experience who are evaluating NHC’s suite of graphical products, how they’re used and received, and how they might be improved.
“Current forecast products – including the NHC’s cone of uncertainty – are systematically misinterpreted by the general public,” said University of Miami researcher Barbara Millet at the AMS’s 35th Conference on Hurricanes and Tropical Meteorology in May 2022.
The cone is generic, not specific to a particular hurricane
With some hurricanes, forecast models are in closer agreement than usual, lending extra confidence to the track; in that case, the cone is wider than it needs to be. With other hurricanes – such as Ian – confidence is lower, and the cone may give an unrealistically narrow view of what could actually happen.
Another source of common confusion is that the cone is based on historical error, as noted above, and not on the uncertainties specific to a given hurricane.
Moreover, probabilities don’t always fall off smoothly as one ventures farther from the “skinny line.” From about 96 to 48 hours before Ian’s landfall, a distinct split developed between two groups of models. The American (GFS) and Canadian (CMC) models showed Ian likely tracking north toward the Florida Panhandle; the European and United Kingdom models (ECMWF and UKMET), among others, portrayed Ian arcing eastward toward the central Gulf Coast of Florida. With each successive run, all of these models tended to angle farther eastward while gradually converging on a Gulf Coast landfall.
Yet another complication arose from the angle of Ian’s approach, which was almost parallel to the west coast of the Florida peninsula. Because of this geometry, even a small change in approach angle would sharply shift the landfall location north or south. By late Monday, about 42 hours before landfall, it was increasingly clear that residents of the Fort Myers area faced more risk than earlier thought, even as those in the Tampa area would need to stay on alert (in part because of the landfall-angle issue).
Simply put, the model disagreement in the critical two-to-four-day window for Ian was unusually problematic – and the cone isn’t designed to convey that kind of uncertainty.
The range of possibilities with Ian is especially vivid when comparing the ensemble output of the models shown in Figure 1 above. Every few hours, the models are run in ensemble mode dozens of times (e.g., 50 for the ECMWF and 30 for the GFS), in addition to each model’s standard, or operational, run. In each ensemble run, tiny variations are added to the starting-point data, serving as guesses for what can’t be seen by our imperfect observing systems.
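The perturbation idea can be illustrated with a toy model. Everything below – the “model,” its eastward drift, the perturbation size – is a made-up stand-in for the real dynamical systems, which integrate the equations of the atmosphere:

```python
import random

def toy_track(initial_heading, steps=5, drift=2.0):
    """Toy 'model': each step bends the storm's heading slightly
    eastward. A stand-in for a real dynamical forecast model."""
    headings = [initial_heading]
    for _ in range(steps):
        headings.append(headings[-1] + drift)
    return headings

def run_ensemble(n_members=30, base_heading=0.0, perturb_sd=3.0, seed=42):
    """Run the toy model many times, each from slightly perturbed
    initial conditions -- the essence of ensemble forecasting."""
    rng = random.Random(seed)
    return [toy_track(base_heading + rng.gauss(0, perturb_sd))
            for _ in range(n_members)]

members = run_ensemble()
final_headings = [m[-1] for m in members]
spread = max(final_headings) - min(final_headings)  # ensemble spread
```

The spread among the members’ final headings is a rough measure of how uncertain the forecast is: small starting differences that fan out quickly signal low confidence, while members that stay bunched together signal high confidence.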
Putting these four ensembles on a single map makes it clear that the major uncertainty several days out was between two camps – shown here by the GFS/CMC and the ECMWF/UKMET – rather than dropping off neatly on either side of a central line. Compared to the multi-model consensus, Ian’s ultimate track was both further southeast and faster, complicating responses to the approaching storm.
The idea isn’t that the GFS is inherently problematic. It was the top-performing track model for Atlantic storms in 2021, and in September it best foresaw Fiona’s breakneck pace toward a record-setting landfall in Canada. The point is that even the world’s top hurricane track models can disagree in crucial ways, and sometimes the “winning” answer is on one side of the pack, rather than in the middle (as a stand-alone cone graphic might imply).
The cone and its limits
Is there any way the present cone might be revamped to better reflect the nuances of a particular major threat like Ian? When the cone was introduced, there wasn’t anything like the full array of ensemble modeling now available. One can imagine a cone-like product based on multiple model ensembles, a cone that would contract or expand based on the level of model agreement or disagreement. Even if it didn’t show the full spaghetti-like tangles of each ensemble member, such a product might use shading or some other graphical element to bring home key points.
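Here is a minimal sketch of what such a storm-specific, ensemble-based radius might look like – a hypothetical illustration, not an NHC or Joint Hurricane Testbed product:

```python
import statistics

def dynamic_cone_radius(member_positions, coverage=2/3):
    """Radius around the ensemble-mean position enclosing the requested
    fraction of ensemble members -- a storm-specific analogue of the
    fixed, history-based cone radius. Positions are (x, y) pairs in
    arbitrary distance units."""
    mx = statistics.fmean(x for x, _ in member_positions)
    my = statistics.fmean(y for _, y in member_positions)
    dists = sorted(((x - mx) ** 2 + (y - my) ** 2) ** 0.5
                   for x, y in member_positions)
    k = max(1, round(coverage * len(dists)))
    return dists[k - 1]

# When members agree the radius shrinks; when they diverge it grows.
tight = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5), (0.2, 0.8)]
spread = [(10 * x, 10 * y) for x, y in tight]
```

Unlike the historical-error cone, this radius contracts when the ensemble members agree and expands when they diverge – which is precisely the behavior a “dynamic” cone would need.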
A 2018 study by Nicholas Leonardo and Brian Colle (Stony Brook University) found that “dynamic” cones based on ECMWF and multi-model ensembles performed as well as the NHC cone (with the storm falling within the cone at least two-thirds of the time) across a set of eight hurricane seasons.
NHC’s Joint Hurricane Testbed has explored the idea of a cone whose width would vary with the spread of the models for a given storm. James Franklin, former chief of NHC’s Hurricane Specialist Unit, commented on Twitter:
The idea was discussed often. One issue was how to deal with wildly fluctuating cone sizes, both from forecast to forecast and through the length of a single forecast, so strong consistency constraints were going to be needed. Don’t recall there being much excitement for it.
There may be inherent limits to how much information can be conveyed in a single graphic like the cone, at least for broad public consumption. Millet reported at the May hurricane conference on a finding from one experiment that left her “very surprised”: Despite pervasive misunderstanding of the current cone, two alternative depictions – with variations such as soft-edged cone boundaries and repositioned text – didn’t improve understanding, and participants didn’t favor either alternative over the original.
Working on what’s working
It’s important to remember that when averaged over an entire season, the multi-model consensus (called TVCN) and the track forecasts issued by NHC hurricane specialists (the skinny black lines) are consistently more skillful than any one model. So there’s real and continuing value in the expertise of NHC forecasters in synthesizing the messages from multiple models over time and coming up with a unified forecast track – so long as users understand that the cone around the track isn’t guaranteed to keep any hurricane boxed in.
Apart from the forecast track and cone, NHC and local NWS offices provide a wide range of probabilistic products that convey the odds of various levels of wind, surge, and rain. These products stretch across multiple graphics and take more effort to digest, but they are invaluable to emergency managers and other high-level users. In a September 30 article, the New York Times pointed out that officials in Lee County, Florida (the county that includes Fort Myers, Cape Coral, and the nearby barrier islands), opted not to order evacuations until Tuesday, September 27 – even though, by Monday, the 10%-to-40% probability of a six-foot storm surge on Wednesday over many parts of Cape Coral and Fort Myers was already exceeding the county’s own mandated evacuation criteria.
Storm surge watches and warnings, which NHC made operational in 2017, also conveyed the unusual breadth of Ian’s surge threat. A storm surge watch covered the entire west coast of Florida from near Tampa to the Everglades at 11 p.m. EDT Sunday, September 25, more than 2.5 days before landfall. A storm surge warning went into effect at 5 p.m. Monday, when NHC stated that “there is the danger of life-threatening storm surge across much of the Florida west coast where a storm surge warning has been issued.”
These storm surge products are new enough that it’ll take more time to judge how they’re being used by the public and various other stakeholders.
In short, there’s a wealth of ever-improving science backing up the ever-improving collective forecasts of NHC. There’s also a wealth of ways in which people can now get information. As ever, the challenge is distilling what we know about a tropical threat into something that helps people understand the threat and act on that knowledge. Perhaps a Cone 2.0 – or something like it – could be part of a next generation of graphical tools.