Re-thinking what we think we know about insect declines

Happy pollinator week! Understandably, during this week there is a lot of attention on the decline of pollinating insects and on actions that everyone can take (individual, community, regulatory, structural) to address these threats. (I would be remiss here not to mention my new pre-print analyzing the US state pollinator plans and how they align – or not – with best practices in evidence-based policymaking.)

This conversation about pollinator declines often goes hand in hand with concern over the declines of insects more broadly. Some recent major papers have claimed there is evidence for large-scale, ongoing insect declines across all major Orders. Or is there? There have also been responses and rebuttals to those claims – not necessarily denying the existence of insect declines, but pointing out how critical it is to conduct these studies in ways that rigorously and appropriately test the question.

One recent paper that generated a lot of media attention and discussion in the research community was published in early 2019; I am colloquially referring to it as The Insect Decline Study.  The attention from the media mainly focused on the study’s claim that the world would experience “loss of all insects within 100 years.” The attention from the research community was focused elsewhere – on why that astounding claim is very likely false. Wagner (and many others) pointed out the significant limitation of requiring the keyword search to include “declin*.” Mupepele et al. pointed out that the rate of decline was calculated as a “percentage of species declining per year,” averaged across studies that measured populations and change in many different ways and with varying sampling effort. Saunders wrote about the complexities of studying declines, especially when most academic journals are “averse to publishing null results” – introducing another survey/data bias into the mix. [Despite these issues, the paper has now been cited 1197 times according to Google Scholar – and 641 times in the Web of Science database, as of today.]
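To make the Mupepele et al. critique concrete, here’s a toy illustration (all numbers invented, not from the study): when you average a “percent of species declining per year” across studies, a two-year, ten-species snapshot counts just as much as a forty-year, three-hundred-species survey.

```python
# Toy illustration (made-up numbers): why a naive average of
# "percent of species declining per year" across studies can mislead.

studies = [
    # (name, pct_species_declining_per_year, years_of_data, n_species)
    ("2-yr snapshot, 10 spp", 8.0, 2, 10),
    ("5-yr survey, 40 spp", 3.0, 5, 40),
    ("40-yr survey, 300 spp", 0.5, 40, 300),
]

# Naive average: every study counts the same, regardless of effort.
naive = sum(s[1] for s in studies) / len(studies)

# One (of many possible) effort-aware alternatives: weight by
# species-years observed. Illustrative only, not a recommendation.
weights = [s[2] * s[3] for s in studies]
weighted = sum(s[1] * w for s, w in zip(studies, weights)) / sum(weights)

print(f"naive mean decline rate:  {naive:.2f} %/yr")     # ~3.83 %/yr
print(f"effort-weighted rate:     {weighted:.2f} %/yr")  # ~0.55 %/yr
```

Same underlying numbers, very different headline rate – which is exactly why pooling heterogeneous studies needs care.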

The data the authors used to generate their conclusions came from existing studies – as many insect decline studies do. Since we’re working on a project about reproducibility in biology, and we work with insects, this study seemed like a perfect case to get into and unpack a bit more. Our original idea was to apply the broken windows algorithm that Christie developed to the data in this paper (see our recent paper on it here). This seemed like a great fit: the broken windows algorithm is designed for long term studies, and the stated objective of The Insect Decline Study was “compiling all long-term insect surveys conducted over the past 40 years,” so we expected the study to include lots of long term datasets. We would then examine whether long term datasets about insect populations, when chunked up into a variety of short term bins, did or didn’t support conclusions about declines versus other kinds of trajectories. This matters because the vast majority of empirical work on insect populations consists of short term studies.
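For readers unfamiliar with the moving-window idea, here’s a minimal sketch (my own illustration under simplified assumptions, not Christie’s published algorithm): slice a long annual series into every contiguous short window, fit a trend in each, and see how often a short window would “detect” a decline even in a series with no long-term trend at all.

```python
import numpy as np

def window_trends(years, counts, window=5):
    """Fit a linear slope in every contiguous `window`-year slice.

    A toy stand-in for the moving-window idea: does a short chunk
    of a long series tell the same story as the whole series?
    """
    slopes = []
    for start in range(len(years) - window + 1):
        y = years[start:start + window]
        c = counts[start:start + window]
        slopes.append(np.polyfit(y, c, 1)[0])  # degree-1 fit; keep slope
    return np.array(slopes)

# Fake 30-year series: no long-term trend, just noise around a mean.
rng = np.random.default_rng(42)
years = np.arange(1990, 2020)
counts = 100 + rng.normal(0, 15, size=years.size)

slopes = window_trends(years, counts, window=5)
print(f"full-series slope: {np.polyfit(years, counts, 1)[0]:+.2f} / yr")
print(f"5-yr windows showing a 'decline': {(slopes < 0).mean():.0%}")
```

Even with zero true trend, a sizable fraction of short windows will show an apparent decline just from noise – which is why testing short-term bins against the long-term record is so informative.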

So we started the seemingly innocent task of finding all the underlying studies in The Insect Decline Study and extracting their underlying data.  Easy, right? Then just plug it in, right?

Well, in practice, this wasn’t possible, for reasons I’ll explain below. It’s been a wild ride exploring this data – and even though we couldn’t use the broken windows algorithm, we believe it’s still a great case for exploring reproducibility, the science of insect declines, population trajectories – and just generally working with data. (For more on that, go check out Christie’s excellent new podcast, How Do You Know?)

[Figure: customized version of the Natalie Portman / Anakin Star Wars meme. Top left: Anakin, with the text “Insects are declining”. Top right: Padmé, smiling: “You used long term studies, right?” Bottom left: Anakin stares, with no reply. Bottom right: Padmé, looking back with angst, repeats: “You used long term studies, right?”]

The underlying data

Others have already noted the limitation that The Insect Decline Study searched only for declines in its literature review.  Yet we immediately ran into other problems in understanding which papers and datasets were included. The methods section lists the keywords used in a specific database and some additional parameters. It doesn’t mention the date the search was run – but more importantly, there’s a key sentence after that: “Additional papers were obtained from the literature references.”

But the study and its supplemental data never explain which papers/datasets came from the literature review – and which came from this ‘snowball’ approach.  That approach does potentially explain why studies are included that would be unlikely to have appeared in the keyword search – such as studies from the 1990s in Czech and Italian, a UK natural history pamphlet, or an entire book in Swedish on bark beetles. But if the authors wanted to use references within references to find more sources, potentially to ensure a representative sample, why stop there? As Christie has previously noted, major US long term insect datasets were not included.

The study also never included the full references for the datasets and papers it used. The Supplemental Data table provides only author and year information – no dataset or paper titles, no journals. So for today, I’ll focus on the references we could find and translate (one reference/dataset was never found, and four we have but have yet to translate).  I’ll quickly run through below some of the key findings from our exploration – we’ll be sharing lots more on this soon.

Temporal sampling methods

So it was hard to find the data and hard to understand why these data had been selected. This became even clearer once we assessed the length of the underlying datasets. Analyzing insect declines requires insect population data over time – so we would expect this study to draw on underlying data with repeated measures, ideally over a long period of time. Yet we found this was overwhelmingly not the case. A few datasets had annual data, but spanning under 10 years; just as many had sporadic data collection over under 10 years. Only about a fifth of the datasets had data collected for more than 10 years – and even then, sampling was sporadic (meaning not every year). Another fifth were ‘snapshots’ – only one time point provided. And many of the remaining datasets had no temporal sampling methods to speak of, given how they used literature or other methods – time was simply not part of their framing. Bye-bye, dreams of using the broken windows algorithm: this paper, despite its stated objective, just did not compile long term, publicly available insect population datasets.
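For concreteness, here’s a rough sketch of the kind of bucketing described above (the function, category labels, and example records are mine, not the study’s):

```python
# Sketch: bucket a dataset by the years in which it was sampled,
# roughly matching the categories discussed in this post.

def temporal_category(sample_years):
    """Classify a dataset's temporal sampling from its sampled years."""
    if not sample_years:
        return "no temporal sampling"
    if len(sample_years) == 1:
        return "snapshot (one time point)"
    span = max(sample_years) - min(sample_years) + 1
    annual = len(sample_years) == span  # every year in the span sampled
    if span > 10:
        return "long-term, annual" if annual else "long-term, sporadic"
    return "short-term, annual" if annual else "short-term, sporadic"

# Hypothetical examples:
print(temporal_category([2005]))                    # snapshot (one time point)
print(temporal_category(list(range(2001, 2009))))   # short-term, annual
print(temporal_category([1990, 1995, 2003, 2012]))  # long-term, sporadic
```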

Geographic scopes

And while the study made claims about global insect declines, most of the underlying datasets were not global in scope. Several of the studies covered only one or a few fields within a region. About a third were regional studies. Almost half of the references were conducted at the scale of a single country – and the size of a country varies greatly! These ranged from the UK to Japan to New Zealand to Brazil, and more. Roughly a quarter of the references included data from multiple countries. Yet as we know from other responses that have been written, including multiple countries doesn’t mean those countries best represent the diversity or distributions of the taxa the paper was assessing – or of insects generally.

Response variables

If you’re studying insect declines, you’d want to be using papers that document abundance over time, right? Well, The Insect Decline Study was ostensibly about species and threat status – not actually a test of declines. So it’s unfortunately both surprising – and very much not – that the response variables in the underlying studies were not abundance metrics.  Many datasets discussed IUCN Red List threat status and had no quantitative information. Others reported numbers of occurrences or numbers of species. Some described community composition and distribution; many reported species richness.

This goes hand in hand with many of the underlying papers/datasets not being empirical studies at all. A significant number were literature reviews or other publications that were descriptive lists or records of species presence or range. Even when the underlying datasets were quantitative or empirical studies, they were carried out and analyzed in a wide variety of ways: citizen science datasets, museum records, field surveys (netting, kicknets, etc.), or data compiled from other studies. Can scientists draw conclusions across a range of sampling methods? Absolutely – it just needs to be done with transparency, care, and explicit parameters for uncertainty, sampling effort, and other factors.
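As one example of what “parameters for uncertainty and sampling effort” can look like in practice, here’s a generic fixed-effect meta-analysis sketch using inverse-variance weighting, so noisy, low-effort studies count for less. This is a standard textbook technique, not what any particular decline paper did, and the numbers are invented.

```python
import math

# Combine per-study trend estimates, weighting each by the inverse of
# its variance. Invented numbers; illustrative only.

estimates = [-0.04, -0.01, 0.02]  # per-study trend (e.g., log-scale slope)
std_errors = [0.05, 0.01, 0.03]   # per-study standard errors

weights = [1 / se**2 for se in std_errors]
pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled trend: {pooled:+.3f} ± {1.96 * pooled_se:.3f} (95% CI)")
```

The point isn’t this particular model – it’s that the uncertainty and effort behind each estimate has to enter the synthesis explicitly, rather than treating every study as an equally reliable data point.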

So are insects declining? Are pollinators declining? There is absolutely reason to be concerned about this – and to fight myths about pollinators and use best practices to combat declines, as Dr. Sheila Colla discusses here and in her scholarship. Yet as previous work from Christie, this team, and others has shown, it’s critical to test those questions and make those claims using the best available data and careful analytical approaches.


3 Responses to Re-thinking what we think we know about insect declines

  1. Great post. This nicely summarises what we originally focused our 2020 BioScience paper on, but the journal made us move most of this to supp material as they didn’t want to appear too critical of the insect decline paper. Check out our supp data table for details of methods of each study: https://academic.oup.com/bioscience/article/70/1/80/5670748. In particular, we found that of the studies claimed as evidence in the ‘review’ paper, only 50% were long-term empirical studies, and most of these did not show declines, or showed complex trends including increases/no change.
    Look forward to seeing more of your project!

    • kstackwhitney says:

Thanks for adding that – and yes, I really enjoyed that paper. Your experience highlights one of the challenges of writing work that reacts – work often sidelined as letters or commentaries – but that also responds to a broader trend or issue, not just the original paper. While letters and commentaries are immensely important, and many were posted quickly, they often aren’t linked to the original article in ways that are easy for readers to find – and often aren’t peer reviewed, so they can be received or perceived very differently – while the original keeps getting cited. Our goal here was/is specifically to take the time (though it definitely took longer due to the pandemic) to do a kind of full re-analysis with the same dataset. So even though our original analysis plan didn’t work, because of the many limitations of the datasets in the study that you and we and others found, we chose another analytical framework to approach it with. That’s what we’re now wrapping up and writing up.

  2. sleather2012 says:

Excellent and well argued. I’m actually in the process of writing a couple of blog articles – one about when a long term data set is truly long, and one taking yet another look at declines.
