Lack of validity of self-reported mammography data
Robert S Levine1, Barbara J Kilbourne2, Maureen Sanderson3, Mary K Fadden3, Maria Pisu4, Jason L Salemi1, Maria Carmenza Mejia de Grubb1, Heather O’Hara3, Baqar A Husaini2, Roger J Zoorob1 and Charles H Hennekens5

1 Department of Family and Community Medicine, Baylor College of Medicine, Houston, Texas, USA
2 Department of Sociology, Tennessee State University, Nashville, Tennessee, USA
3 Department of Family and Community Medicine, Meharry Medical College, Nashville, Tennessee, USA
4 University of Alabama School of Medicine at Birmingham, Birmingham, Alabama, USA
5 Charles E Schmidt College of Medicine, Florida Atlantic University, Boca Raton, Florida, USA

Correspondence to Dr Robert S Levine; robert.levine{at}bcm.edu

Abstract

This qualitative literature review aimed to describe the totality of peer-reviewed scientific evidence from 1990 to 2017 concerning the validity of self-reported mammography. This review included articles about mammography containing the words accuracy, validity, specificity, sensitivity, reliability or reproducibility; titles containing self-report, recall or patient reports, and breast or ‘mammo’; and references of identified citations focusing on evaluation of 2-year self-reports. Of 45 publications meeting the eligibility criteria, only 2, conducted in 1993 and 1995 at health maintenance organisations in the Western USA that primarily served highly educated whites, provided support for self-reports of mammography over 2 years. Methodological concerns about the validity of self-reports included (1) telescoping, (2) biased overestimates, particularly among black women, (3) failure to distinguish screening and diagnostic mammography, and (4) failure to address episodic versus consistent mammography use. The current totality of evidence supports the need for research to reconsider the validity of self-reported mammography data as well as the feasibility of alternative surveillance data sources to achieve the goals of the Healthy People Initiative.

  • self-reported mammography
  • narrative review

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0


Introduction

The Healthy People Initiative, administered by the Office of Disease Prevention and Health Promotion of the US Department of Health and Human Services, provides science-based, 10-year national objectives which constitute a national prescription for improving the health of all Americans.1 The programme establishes benchmarks and monitors progress over time, partly to measure the impact of prevention activities.1 The Initiative also identifies specific data sources to be used for each objective. For breast cancer prevention, Objective C-17 for Healthy People 2020 aims to ‘Increase the proportion of women who receive a breast cancer screening based on the most recent guidelines’.2 The target population includes women aged 50–74 years. The data source designated for surveillance of progress towards this objective is the National Health Interview Survey (NHIS), administered by the US Centers for Disease Control and Prevention (CDC).2 The NHIS is a nationwide, cross-sectional, in-person, household interview survey based on cluster sampling of households and non-institutional group quarters (eg, college dormitories).3 The following are the specific NHIS questions used for monitoring: (1) Have you ever had a mammogram? and (2) When did you have your most recent mammogram?2 Mammograms themselves are described as ‘An x-ray of each breast to look for breast cancer’.4 Monitoring estimates track the percentage of women aged 50–74 years who have had a mammogram in the past 2 years. Data used for monitoring are therefore based on self-report, which has been criticised for its tendency towards over-reporting, particularly among minority populations.5 Moreover, these NHIS questions do not distinguish between screening mammograms and mammograms used for follow-up after a diagnosis of breast cancer has been made, thereby adding to the probability of overestimation.5
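
The monitoring estimate described above is, in essence, a weighted proportion among women aged 50–74 years who report a mammogram within the past 2 years. The following Python sketch illustrates that calculation on hypothetical survey microdata; the column names, values and weights are illustrative assumptions and do not correspond to actual NHIS public-use variables, nor does the sketch account for the survey's complex sampling design.

```python
import pandas as pd

# Hypothetical NHIS-style microdata for women respondents; real NHIS files
# use different variable names and require design-based variance estimation.
respondents = pd.DataFrame({
    "age": [52, 61, 74, 49, 68, 55],
    "mammogram_past_2yr": [1, 0, 1, 1, 1, 0],   # self-reported
    "sample_weight": [1850.0, 2210.0, 1390.0, 2075.0, 1640.0, 1980.0],
})

# Restrict to the Healthy People target population (women aged 50-74 years).
target = respondents[respondents["age"].between(50, 74)]

# Weighted proportion reporting a mammogram within the past 2 years.
prevalence = (
    (target["mammogram_past_2yr"] * target["sample_weight"]).sum()
    / target["sample_weight"].sum()
)
print(f"Self-reported 2-year mammography prevalence: {prevalence:.1%}")
```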

Possible reasons for overestimation among blacks and African–Americans include the less detailed wording of the NHIS questions pertaining to mammography. This possibility became apparent, in part, from data collected by the US Behavioral Risk Factor Surveillance System (BRFSS). The BRFSS is a long-standing state and local telephone survey of non-institutionalised residents regarding health-related risk behaviours, chronic health conditions and use of preventive services.6 More than 400 000 adult interviews are conducted each year.6 Experience with the BRFSS questionnaire showed that a more specific description of mammography (ie, ‘A mammogram is an X-ray of the breast and involves pressing the breast between two plastic plates’) resulted in lower estimates of mammography use, particularly among African–Americans.5 A possible reason is that the more graphic description resulted in greater specificity of responses.5 It has also been proposed that for women in poor health, who may seek care for numerous conditions requiring frequent contact with the medical system, the specifics of mammography may become less distinct and more difficult to recall.7

At present, plans are under way for Healthy People 2030,8 so it seems important and timely to conduct a comprehensive qualitative review of peer-reviewed scientific publications pertaining to the validity of self-reported mammography.

Methods

We used Medline search strategies previously reported in meta-analyses of the validity of self-reported mammography.9 10 These strategies included using article titles containing the words accuracy, validity, specificity, sensitivity, reliability or reproducibility, and titles containing self-report, recall or patient reports, and breast or ‘mammo’. We also searched the references of identified citations to locate additional studies of interest. We described the resulting publications in terms of time, place, age, race and ethnicity, source of the study population, type of healthcare facility, whether there was information on annual and/or biennial frequency of mammography, and whether 2-year self-reports were specifically addressed. The enquiry focused on 2-year self-report. This is particularly pertinent to Healthy People since women with mammography screening within 2 years are considered up to date. In addition, since Medicare provides insurance benefits for mammography to all women 65 years and older, we also explored specific information about this population.
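
As an illustration only, the title-word strategy described above can be approximated as a single PubMed query submitted through the NCBI E-utilities. The query below simply restates the terms listed in this paragraph; it is not the exact search string used in the cited meta-analyses, and the date range and wildcards are assumptions.

```python
import json
import urllib.parse
import urllib.request

# Illustrative reconstruction of the title-word strategy described above;
# not the authors' exact search string.
quality = "(accuracy[ti] OR validity[ti] OR specificity[ti] OR sensitivity[ti] OR reliability[ti] OR reproducibility[ti])"
reports = '(self-report*[ti] OR recall[ti] OR "patient reports"[ti])'
topic = "(breast[ti] OR mammo*[ti])"
dates = '("1990"[dp] : "2017"[dp])'
term = f"{quality} AND {reports} AND {topic} AND {dates}"

params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": term,
    "retmax": 200,
    "retmode": "json",
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

with urllib.request.urlopen(url) as response:
    result = json.load(response)

print("Matching PubMed IDs:", result["esearchresult"]["idlist"])
```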

Results

Forty-five publications were identified,4 9–52 and these are summarised in table 1. In all, 9 articles were published from 1990 to 1994,11–19 13 from 1995 to 1999,20–32 9 from 2000 to 2004,33–41 8 from 2005 to 2009,9 10 42–47 5 from 2010 to 2014,4 48–51 and 1 from 2015 to January 2018.52 Aside from the USA, countries of origin included Canada,49 Israel45 and the Netherlands.48 The lower age limit for inclusion for all but three studies was 40 years. Two of the three studies accepting women younger than 40 years were concerned with the validity of self-reports among persons with known genetic risk for breast cancer.48 49 Participants included a variety of racial (black, white, Native American, Asian) and ethnic/religious (Arab, French Canadian, Hispanic, Orthodox Jewish) groups. Studies included persons from across the socioeconomic spectrum, although several (reviewed in ref 10) focused on the socioeconomically disadvantaged. One study50 concerned persons with intellectual developmental disabilities. Settings (specifically identified in table 1) for the 42 non-meta-analysis studies included health maintenance organisations (HMOs) (n=12), non-HMO clinical services (n=13), populations (n=13) and participants in research investigations (n=4). Of the 45 articles, 27 addressed 2-year recall or recall in the elderly. Of these, only two studies supported the validity of self-reported 2-year recall among the elderly (65+ years of age). Both were conducted in HMO settings, in 1993 and 1995, and reported in 2003.36 37 While finding the accuracy of self-reports acceptable in the study settings, the authors nonetheless cautioned against projecting their findings to the general population: ‘Caution is necessary concerning the generalizability of our findings to the entire US population and other diverse populations, because of the characteristics of our study sample and setting’.36 In the second study, Caplan et al 37 noted: ‘It is important to keep in mind that this study used a relatively homogenous insured managed care population composed of mainly white women, aged 40–75 years, with at least a high school education, who were either currently employed or retired. Although the results cannot be generalized to the United States population, they provide credible insight regarding the utility of the BRFSS in an important segment of the population…Our study results suggest that self-reported data ascertained using the BRFSS provide an accurate estimate of the prevalence of screening for breast…cancers in KPC [Kaiser Permanente Colorado] and possibly other similar managed care populations with similar enrollees’.

Table 1

Description of scientific, peer-reviewed research about the validity of self-reported mammography: Australia, Canada, Israel, the Netherlands and the USA, 1990–2017

Holt et al 44 conducted a particularly relevant study in which they compared the responses of 5461 participants in the Medicare Current Beneficiary Survey with claims data. Each participant, in effect, served as her own control. The authors concluded that ‘On the basis of these findings, we believe it is premature to conclude that disparities in mammography have been eliminated. Further exploration of the reasons for differences between self-report and claims information is warranted’.

Two meta-analyses focused on current self-reporting methods used for the NHIS9 and BRFSS.4 Each of these reports concluded that these methods overestimate mammography utilisation and underestimate racial disparities or inequalities. Specifically, Rauscher et al 9 concluded that

When estimates of self-report accuracy from this meta-analysis were applied to cancer-screening prevalence estimates from the National Health Interview Survey, results suggested that prevalence estimates are artificially increased and disparities in prevalence are artificially decreased by inaccurate self-reports…National survey data are overestimating cancer-screening utilization for several common procedures and may be masking disparities in screening due to racial/ethnic differences in reporting accuracy

Rauscher et al 9 specifically cautioned against reliance on the NHIS, stating that

Because the NHIS is the major source of data on cancer screening used for tracking prevalence in the U.S. population, validation studies should be undertaken for a sample of respondents within the NHIS, and designed with enough power to detect meaningful differences in sensitivity and specificity for different racial/ethnic and socioeconomic groups

Njai et al 5 concluded that ‘Self-reported data overestimate mammography use — more so for black women than for white women. After adjustment for respondent misclassification, neither white women nor black women had attained the Healthy People 2010 objective (≥70%) by 2006, and a disparity between white and black women emerged’. With reference to 2-year self-report, the meta-analysis focused on the BRFSS concluded that ‘Women tend to over-report their participation in…mammography screening in a given timeframe. The pooled estimates should be interpreted with caution due to unexplained heterogeneity’.4
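
The kind of adjustment for respondent misclassification that Njai et al describe can be illustrated with the standard correction for a misclassified binary outcome (the Rogan–Gladen estimator). The sensitivity, specificity and self-reported prevalence values below are purely hypothetical and are not taken from the cited studies; the sketch shows only how such an adjustment can move estimates below a target and reveal a disparity that unadjusted self-report conceals.

```python
def adjusted_prevalence(apparent, sensitivity, specificity):
    """Rogan-Gladen correction: estimate true prevalence from an apparent
    (self-reported) prevalence, given the sensitivity and specificity of
    self-report against a gold standard such as records or claims."""
    return (apparent + specificity - 1) / (sensitivity + specificity - 1)

# Purely hypothetical inputs, not values from the cited studies: self-report
# is assumed highly sensitive but less specific, especially for black women.
groups = {
    "white women": {"apparent": 0.75, "sensitivity": 0.95, "specificity": 0.65},
    "black women": {"apparent": 0.76, "sensitivity": 0.95, "specificity": 0.50},
}

for name, g in groups.items():
    corrected = adjusted_prevalence(g["apparent"], g["sensitivity"], g["specificity"])
    print(f"{name}: self-reported {g['apparent']:.0%} -> adjusted {corrected:.0%}")
```

Under these illustrative numbers, two groups with nearly identical self-reported rates diverge once reporting error is taken into account, mirroring the pattern the cited analyses describe.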

Discussion

The present qualitative review of the totality of published evidence suggests a lack of validity of self-reports of mammography. This review also documents the historical development of scientific evidence about the quality of self-reported information provided in response to health survey questions about mammography screening. It demonstrates a remarkably consistent set of challenges to the surveillance practices of the Healthy People programme, even as methods of analysis have grown increasingly complex. The narrative approach was also chosen, in part, because extensive, well-done meta-analyses confirming previous concerns about self-report have already been published4 9 10 to little or no apparent effect.53 Perhaps, by presenting more than a quarter-century of research as it has evolved, the depth of the scientific objections will become clearer.

In part, persistence of the present self-reported information protocols for mammography may reflect assertions that self-report is the only feasible, cost-effective way to obtain such information.52 Nonetheless, the aforementioned NHIS questions (ie, Have you ever had a mammogram? and When did you have your most recent mammogram?)4 are subject to several cogent concerns about bias, including (1) telescoping, whereby people recall distant events as occurring more recently than they actually happened54; (2) a greater likelihood of eliciting inconsistent reports or overestimates from black women7; (3) failure to distinguish between screening and diagnostic mammography4; and (4) failure to address whether mammography screening is used consistently (as opposed to merely being ‘up to date’). This is so even though additional questions already included in the NHIS survey have been used as resources for tracking the progress of the Healthy People programme.55

Biased overestimates of mammography screening use may have serious adverse clinical and public health consequences. For example, Dr Harold Freeman, a past president of the American Cancer Society, wrote in the New York Times:

…for many years, the dominant cause of higher mortality has been late-stage disease at the time of initial treatment, in part as a result of black women being less likely to undergo mammography. However, this gap has been closed. The CDC reports that the rate of mammography is now the same in black and white women….56

Similarly, the Susan G Komen Foundation, a leading organisation that focuses exclusively on breast cancer, quotes data to the effect that ‘Black women now have slightly higher rates of mammography use than other women’.57 Based on the present data, neither the Freeman statement nor the Komen statement is likely to be accurate.

Aside from making more comprehensive use of existing NHIS information, additional surveillance alternatives include greater use of administrative claims58 and the Healthcare Effectiveness Data and Information Set (HEDIS),59 as well as expansion of mammography registries.60 Specifically, Smith-Bindman et al 58 noted that 94% of women who had at least one mammogram within a 2-year reference period were accurately classified by administrative claims data as having undergone a mammogram during that period. Also, while Medicare claims are not available from HMOs, these organisations and others do provide information on mammography utilisation to HEDIS.59 Finally, the National Cancer Institute’s Breast Cancer Surveillance Consortium60 might serve as a model for a national mammography registry, but at present it operates only in New Hampshire, North Carolina, Vermont and Washington, as well as in San Francisco, California, and Chicago, Illinois.60
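
A minimal sketch of the claims-based approach Smith-Bindman et al describe is shown below: a woman is counted as screened if any mammography claim falls within a 24-month reference window. The record layout and procedure codes are hypothetical placeholders; a real analysis would use the payer's actual billing codes, enrolment periods and continuous-coverage rules.

```python
from datetime import date, timedelta

# Hypothetical mammography procedure codes, for illustration only; a real
# analysis would use the payer's actual CPT/HCPCS code list.
MAMMOGRAPHY_CODES = {"MAMMO_SCREEN", "MAMMO_DIAGNOSTIC"}

# Hypothetical claims records: (beneficiary_id, procedure_code, service_date).
claims = [
    ("A001", "MAMMO_SCREEN", date(2016, 5, 10)),
    ("A002", "OFFICE_VISIT", date(2017, 1, 3)),
    ("A003", "MAMMO_SCREEN", date(2014, 8, 22)),  # falls outside the window
]

def screened_within_window(beneficiary_id, reference_end, window_days=730):
    """Return True if the beneficiary has any mammography claim within the
    24-month window ending on reference_end."""
    window_start = reference_end - timedelta(days=window_days)
    return any(
        bid == beneficiary_id
        and code in MAMMOGRAPHY_CODES
        and window_start <= service_date <= reference_end
        for bid, code, service_date in claims
    )

reference_end = date(2017, 12, 31)
for bid in ("A001", "A002", "A003"):
    print(bid, screened_within_window(bid, reference_end))
```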

In conclusion, the current totality of evidence supports the need for research to reconsider the validity of self-reported mammography data as well as the feasibility of alternative surveillance data sources to achieve the goals of the Healthy People Initiative.


Footnotes

  • Contributors Conception and design of study: RSL. Acquisition of data: RSL. Analysis and interpretation of data: RSL, BJK, MS, MKF, MP, JLS, MCMdG, HOH, BAH, RJZ, CHH. Drafting the manuscript: RSL, BJK, MS, MKF, MP, JLS, MCMdG, HOH, BAH, RJZ, CHH. Approval of the manuscript to be published: RSL, BJK, MS, MKF, MP, JLS, MCMdG, HOH, BAH, RJZ, CHH.

  • Funding US Department of Health and Human Services, National Institutes of Health, National Institute on Minority Health and Health Disparities (grant number 5P20MD/000516-07).

  • Competing interests None declared.

  • Patient consent Not required.

  • Provenance and peer review Not commissioned; externally peer reviewed.