
Systematic Review: Run and test the search


Run and refine your search strategy

A number of factors influence the recall and precision of a search strategy, including the selection of keywords and subject headings and the correct use of operators to combine concepts. It is important to check the strategy carefully for comprehensiveness, effectiveness and errors. Some key steps might be:

  • Test any phrases you have used to see whether proximity operators would usefully broaden the search (see the example after this list).
  • Identify keywords that are retrieving large numbers of results. Can they be made more specific, or combined with other terms using proximity operators?
  • Look at irrelevant papers in the search results. Can you see which terms are causing these to be retrieved? Can these terms be removed or refined without losing relevant papers?
  • Explore the subject heading trees if they exist in your chosen database to determine if you need to 'explode' any subject headings. Also check the 'scope notes' to look for additional terms that could be incorporated into your search.
  • Are all terms for which subject headings have been used also adequately represented by keywords? (This is especially important if you will translate the search into a database that doesn't use subject headings).
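
For example, a minimal sketch of this kind of refinement in Ovid Medline, using the purely illustrative concept of patient education (the set numbers are illustrative too):

  1. "patient education".ti,ab.
  2. (patient adj3 education).ti,ab.
  3. exp Patient Education as Topic/
  4. 1 or 2 or 3

Line 2 retrieves the exact phrase as well as variants such as "education of the patient", so comparing its result count with line 1 shows how much the proximity operator broadens the search, while line 3 adds an exploded subject heading so that records indexed without the keyword are still captured.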

To assist you in investigating the quality of your search strategy, you may wish to utilise a search appraisal checklist.

Test your search strategy

Once you have formulated a draft search strategy, you will need to gather evidence of its effectiveness. This can be done by testing the ability of your search to retrieve known articles that represent your research question. This testing process requires you to have a gold set of relevant articles that should undoubtedly be picked up by your search. The steps for testing against a gold set of articles are below (a worked example follows these steps):

  • Once you have run your search strategy and have a result set, the aim is to ensure that this gold set of articles is included in the results.
  • First, you need to check which gold set articles are included in the database you are using. If you are testing an Ovid Medline search strategy, the gold set can only contain articles that are indexed in Ovid Medline. The easiest way to check whether an article is present is to search for its title in the Title field.

screenshot of title search in Ovid Medline database

  • Once you have confirmed that the article is present, you need to see if the article is included in your search results. You can do this by combining your final search strategy results with the gold set article using AND.

screenshot of combining results in search history

  • Do this for each article in your gold set. If an article is present in the database but not in your results, you know that the search strategy needs further refinement.

screenshot showing multiple gold set articles being tested
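
As a worked example, this is what the check might look like in an Ovid Medline search history. The set numbers are illustrative only, and the bracketed text stands in for your own strategy and article title:

  15. [final line of your search strategy]
  16. [exact title of a gold set article].ti.
  17. 15 and 16

If line 16 returns a result, the article is indexed in the database; if line 17 then also returns one result, your strategy captures it, whereas zero results at line 17 indicates the strategy needs further refinement.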

 

How many articles should you have in your gold set?
The more articles you have, the more evidence you will have of the strength and appropriateness of your search strategy, and the more confident you can be in it. For example, testing on 10 articles will tell you whether you are on the right track. Testing on 25 articles from a variety of sources will give you much better assurance that you are not going to miss relevant papers due to the chosen terms and parameters of your search.

You may like to use the Smart Searching tool to evaluate your strategy.

Select databases and sources to run your search

Searching one database alone is inadequate for a systematic review and will retrieve an unrepresentative set of studies. Create and test your search strategy in one core database, then move on to running it in the other databases or sources you've selected as suitable for your review.

It is important to be mindful of inclusivity and diversity (of populations and contexts) when conducting secondary research, as much published research is not representative of all peoples. For example, you might choose to include national or regional sources, those that include non-western journals or peer reviewed literature from developing countries.

Three main databases used for Cochrane systematic reviews and other health intervention reviews are Ovid Medline, Embase and Cochrane CENTRAL.

If there are subject-specific databases relevant to the topic of the review, these will typically be included. Some examples include CINAHL Plus or Ovid Emcare for nursing/allied health, PsycInfo for psychological sciences, and PEDro, AMED or SPORTDiscus for physiotherapy.

Researchers often supplement these with multidisciplinary sources or citation databases such as Scopus or Web of Science (Core Collection).

A list of databases available to Monash researchers can be found at the Databases by subject page.

The Cochrane Central Register of Controlled Trials (CENTRAL) is searchable via the Ovid platform. CENTRAL contains records from the WHO International Clinical Trials Registry Platform (ICTRP), which aggregates 17 registers, and from ClinicalTrials.gov.

These two sources are considered to be the most important trial registers, making CENTRAL an excellent choice to include in your intervention-based review:

"Although there are many other trials registers, ClinicalTrials.gov and the WHO International Clinical Trials Registry Platform (ICTRP) portal are considered to be the most important for searching to identify studies for a systematic review (Pansieri et al 2017). Research has shown that even though ClinicalTrials.gov is included in the WHO ICTRP Search Portal, not all ClinicalTrials.gov records can be successfully retrieved via searches of the ICTRP Search Portal (Glanville et al 2014, Knelangen et al 2018)". Cochrane Handbook 4.3.3 Trials registers and trials results registers #section-4-3-3.

For a full curated list of registries and information on searching them, see:

Handsearching is now commonly known as manual searching. While manual searching can still mean actually hand-searching print copies of journals for relevant studies, it more often involves browsing the online table-of-contents of relevant issues, or reference lists of relevant papers.

In SRs, manual searching is considered an important method of uncovering papers that may not have been picked up in your database searches. This can include studies in journals not indexed by core databases, or papers not retrieved by your search strategy due to being poorly described or incorrectly indexed.

Manual searching can also encompass citation searching, which can be carried out forwards or backwards in time.

  • Forward citation searching retrieves records that have cited an item, also known as “cited by”. This provides you with more recently published articles that may be relevant for your topic. 
  • Backward citation searching involves records that an item has cited (these will be located in the article's reference list). This is also known as snowballing - using known relevant articles to identify other key articles or search terms.

The main citation databases are Scopus, Web of Science and Google Scholar. 

Finding grey literature, searching it systematically and documenting your searches is time consuming and challenging. 

See the Grey literature library guide for further information on searching specific grey literature sources to identify relevant material, as well as using search engines such as Google/Google Scholar.

See the Moodle book MNHS: Systematically searching the grey literature for a comprehensive module on grey literature for systematic reviews.

Search Limits vs Search Filters

Limits
Many databases feature a built-in set of limiters that can be used to restrict search results by age group, publication type, language and more. Typically, these are applied by ticking a box in the database interface. The use of limits can result in the exclusion of relevant studies, so a search filter is usually preferred for SRs. An example of the limits available in Ovid Medline is displayed below:

screenshot of Ovid limits
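
Limits can also be applied as a command in the Ovid search history rather than by ticking boxes. A minimal sketch, applied one after the other so that each limit produces a new result set, assuming your combined results are in set 8 (the set numbers and the limits chosen are illustrative only):

  9. limit 8 to english language
  10. limit 9 to humans

Note that limits such as Humans depend on how records have been indexed, so recently added records that are not yet indexed can be excluded; this is one reason a tested search filter is usually preferred for SRs.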


Filters
A search filter (also known as a hedge) is a search strategy that can be incorporated into your search to restrict the results. Commonly used filters include randomised controlled trial (RCT) filters, filters for observational and diagnostic studies, filters for older people or children, and adverse effects filters.

Filters are often developed by experts and are the most effective way of restricting a search. A search filter that has been tested and validated against a gold standard set of articles, with the results published, is known as a 'validated filter'. If you make any changes to a validated filter (to translate it into another database, for example), it is no longer a validated filter.

Filters can be categorised into two main groups:

  • By study type - methodological
  • By topic or subject

Links to available filters can be found on the next two tabs: filters by study type and filters by topic.


The following example is a filter used to identify randomised trials in Ovid Medline. This filter, and filters for other databases, can be found in the Cochrane Handbook, Chapter 4, Search filters (section 3.6).

Box 3.d Cochrane Highly Sensitive Search Strategy for identifying randomized trials in MEDLINE: sensitivity-and-precision-maximising version (2008 revision); Ovid format

  1. randomized controlled trial.pt.
  2. controlled clinical trial.pt.
  3. randomized.ab.
  4. placebo.ab.
  5. clinical trials as topic.sh.
  6. randomly.ab.
  7. trial.ti.
  8. 1 or 2 or 3 or 4 or 5 or 6 or 7
  9. exp animals/ not humans.sh.
  10. 8 not 9
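
To apply the filter, its final line is typically combined with the final line of your topic search using AND. A minimal sketch continuing the numbering above, where the bracketed line stands in for your own topic search:

  11. [final line of your topic search]
  12. 10 and 11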

 

For Ovid Embase see:
Box 3.e Cochrane Highly Sensitive Search Strategy for identifying controlled trials in Embase: (2018 revision); Ovid format (Glanville et al 2019b)

For CINAHL Plus see:

3.6.3 Search filters for identifying randomized trials in CINAHL Plus; Box 3.f Cochrane CINAHL Plus filter