Up Front Matters: Perspective
Open Access

Compromising Outcomes

Peter B. Imrey
JASN July 2019, 30 (7) 1147-1150; DOI: https://doi.org/10.1681/ASN.2019010057
Department of Quantitative Health Sciences, Lerner Research Institute and
Keywords: clinical trials, multiple testing, publication guidelines, research ethics, data dredging, p-hacking

Controlled clinical trials are generally regarded as providing the best evidence of therapeutic benefits and harms because they incorporate powerful tools to control threats to research validity: concurrent controls, randomization, masking/blinding of treatments, and sample size predetermination for tolerable false positive and false negative error rates. However, the quality of clinical trial evidence depends on how well trials are executed and reported. The Consolidated Standards of Reporting Trials (CONSORT) guidelines1 are endorsed by JASN as criteria to which authors and editors should aspire. However, a recent review by Chatzimanouil et al.2 raises concerns about adherence to specific CONSORT guidelines in 1996–2016 clinical trial reports in JASN, American Journal of Kidney Diseases (AJKD), Kidney International (KI), New England Journal of Medicine, and The Lancet.

Perhaps the most striking of these concerns is the lack of clear outcome reporting. Because diseases and treatments have many manifestations, benefit and harm are multidimensional; thus, virtually all trials report multiple outcomes, many repeatedly during follow-up. Despite their ubiquity, multiple outcomes challenge both trial planners and consumers of trial results, because the chances of false positive and false negative scientific conclusions in otherwise well conducted trials often depend crucially on choices among alternative statistical approaches to multiple outcome analyses. The most prevalent approach involves declaration at a trial's outset of a hierarchy of primary, secondary, and tertiary outcomes, with the primary outcomes acting, in principle, as gatekeepers of any actionable statistical significance claim. However, ad hoc and/or opaque reporting about multiple outcomes is increasingly recognized as generating controversy, confusion, and irreproducibility of clinical research findings.3–24 Unfortunately, despite modest progress from 1996 to 2016, Chatzimanouil et al.2 classify fewer than half of the JASN articles they reviewed from this period as having sufficiently clarified which prespecified outcomes were initially defined as primary and secondary end points, including how and when they were assessed (figure 3B, outcome a in the work by Chatzimanouil et al.2). Moreover, almost no reports among those reviewed, and none from JASN, AJKD, or KI, were classified as having clearly identified, explained, or disclaimed changes in prespecified outcomes during the trial (figure 3B, outcome b in the work by Chatzimanouil et al.2).

These findings of Chatzimanouil et al.2 are disturbing in the context of current reproducibility concerns because of the critical role played by outcome predetermination and prioritization in managing and describing a trial's false positive and false negative error rates. This Perspective provides a brief conceptual overview of the technical "multiplicity" problem, accepted solutions, common evasions, and the need to address such evasions by raising awareness and expectations of transparency.

As illustration, consider two parallel-arm clinical superiority trials comparing drugs X and Y. Suppose that Trial A records a single efficacy outcome (e.g., rate of eGFR decline) annually during 4 follow-up years, and Trial B records cumulative occurrences of each of four dichotomous efficacy outcomes (eGFR decline >10 ml/min per 1.73 m2, AKI, dialysis onset, and all-cause mortality) once at the end of follow-up. Suppose also, for simplicity, that each annual treatment difference in Trial A, and each outcome comparison in Trial B, is tested separately with a one-sided 2.5%-level hypothesis test, consistent with Food and Drug Administration (FDA) standards for drug superiority trials, and that yearly results (Trial A) or outcomes (Trial B) are statistically independent (a simplifying, if unrealistic, assumption for illustration). If, to claim overall superiority, significance is required for at least 3 of the 4 years (Trial A) or at least three of the four outcomes (Trial B), then the chance of falsely concluding that drug X is superior to drug Y when the drugs are truly equivalent is negligible: below one in 16,000 (4×0.975×0.025³+0.025⁴ = 0.0061%). However, if only one significant time or outcome is required, then under the same assumptions this chance is 9.63% (1−0.975⁴), virtually quadrupling the nominal 2.5% false positive rate. This problem worsens commensurately with more observation times and/or outcomes. Dependencies among times and outcomes alter the specific numbers but not the problem.
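
The arithmetic in this illustration can be checked directly. The short script below (illustrative, not part of the original article; the function and variable names are mine) computes both false positive rates under the same independence assumption, using the binomial probability that at least m of k truly null tests come out falsely significant.

```python
from math import comb

def prob_at_least(m, k, alpha):
    """P(at least m of k independent tests are falsely significant),
    when each test has false positive rate alpha and all nulls are true."""
    return sum(comb(k, j) * alpha**j * (1 - alpha)**(k - j)
               for j in range(m, k + 1))

alpha = 0.025  # one-sided per-test significance level (FDA superiority standard)
k = 4          # four follow-up years (Trial A) or four outcomes (Trial B)

# Requiring significance on at least 3 of the 4 tests:
print(f"{prob_at_least(3, k, alpha):.4%}")  # 0.0061%, below one in 16,000

# Requiring significance on any 1 of the 4 tests:
print(f"{prob_at_least(1, k, alpha):.2%}")  # 9.63%, nearly 4x the nominal 2.5%
```

Raising k reproduces the commensurate worsening noted above: with 10 independent tests, `prob_at_least(1, 10, 0.025)` already exceeds 22%.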

Although worst in "omics" studies involving tens of thousands to millions of simultaneous significance tests, this "multiple comparison problem" has been known to statisticians for a century and has been under discussion in the context of clinical trials for at least four decades.25 An extensive statistical literature provides several types of solutions within the clinical research literature's usual "frequentist" hypothesis testing framework, each with variations adapted to differing clinical and statistical situations. These are summarized broadly in Figure 1 and in more detail by statistical texts,26,27 a recent European Medicines Agency guidance,28 a US FDA draft guidance,29 and a recent journal special section.30–34 Broadly speaking, one may (1) restructure hypotheses to use fewer tests, as with an overall analysis of variance (ANOVA) test or a Pearson chi-squared test comparing k proportions, or (2) redefine and control error rates for groups of tests rather than at the individual test level (e.g., as the probability of any false positives among a collection of tests ["familywise error rate"] or the fraction of false positives among all positive results ["false discovery rate"]). Analyses from a Bayesian statistical framework offer additional attractive solutions but may introduce other interpretive difficulties. Regardless, the available methods for handling multiple outcomes are powerful and flexible, and when used well, they can limit hypothesis testing errors and consequently also limit incorrect scientific findings arising from purely random variation.
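
As a concrete sketch of the second family of strategies (illustrative code, not from the article), the Holm step-down procedure controls the familywise error rate and the Benjamini–Hochberg step-up procedure controls the false discovery rate. Both operate only on the sorted P values:

```python
def holm_reject(pvals, alpha=0.05):
    """Holm step-down: controls the familywise error rate at alpha."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    reject = [False] * len(pvals)
    for rank, i in enumerate(order):
        if pvals[i] > alpha / (len(pvals) - rank):
            break                    # first failure closes all later tests
        reject[i] = True
    return reject

def bh_reject(pvals, q=0.05):
    """Benjamini-Hochberg step-up: controls the false discovery rate at q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    cutoff = 0                       # largest rank with p <= q * rank / m
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            cutoff = rank
    reject = [False] * m
    for i in order[:cutoff]:
        reject[i] = True
    return reject

pvals = [0.01, 0.02, 0.03, 0.20]     # hypothetical per-outcome P values
print(holm_reject(pvals))  # [True, False, False, False]
print(bh_reject(pvals))    # [True, True, True, False]
```

On these hypothetical P values the familywise procedure is the more conservative, which is the usual trade-off between the two error definitions.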

Figure 1.

At least six strategies are available for error control of multiple simultaneous hypothesis tests within the clinical research literature's prevalent "frequentist" approach to statistical inference. The group on the left indicates four ways by which multiple comparison concerns in analyses of clinical trial data may be assuaged by reducing the number of formal statistical hypothesis tests that are simultaneously considered: omnibus testing of composite hypotheses; formation and testing of composite outcomes; comparison of trends or rates of change, rather than individual time points, in analyses of longitudinal repeated measures outcomes; and formation of outcome hierarchies in which decisions predicated on statistical significance of outcomes lower in the hierarchy require statistical significance of outcomes higher in the hierarchy. The group on the right indicates two ways by which multiple comparison concerns can be managed, without reducing the number of tests, by controlling error rates defined in terms of the results of groups of tests rather than of individual tests: control of the "familywise error rate" (i.e., the chance that any member of the group will yield a statistically significant false positive result when all null hypotheses tested are true) and control of the "false discovery rate" (i.e., the fraction of positive results from a larger group of tests that are false positives). All of these methods are frequently used, and combinations of them may be required to adequately control false positive inferences from clinical trials. HSD, honestly significant difference.
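
The hierarchy strategy on the left of the figure is often implemented as a fixed-sequence (gatekeeping) procedure: outcomes are tested in their prespecified order, each at the full significance level, and the first nonsignificant result closes the gate for everything below it. A minimal sketch (outcome names and P values are hypothetical, not from any actual trial):

```python
def fixed_sequence(outcomes, alpha=0.025):
    """Fixed-sequence gatekeeping over (name, p_value) pairs listed in
    prespecified hierarchy order; stops at the first nonsignificant test."""
    claims = []
    for name, p in outcomes:
        if p > alpha:
            break                  # gate closed: no claims lower in hierarchy
        claims.append(name)
    return claims

hierarchy = [("primary: eGFR decline", 0.004),
             ("secondary: dialysis onset", 0.030),
             ("tertiary: all-cause mortality", 0.010)]
print(fixed_sequence(hierarchy))   # ['primary: eGFR decline']
```

Note that the tertiary P value of 0.010 is below alpha yet yields no claim, because the secondary test failed; reshuffling the hierarchy after seeing the data to rescue it is exactly the kind of post hoc change discussed below.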

They cannot, however, increase the information in the study sample. The price of successful random error control, as for methods of statistical error control generally, is to pick a method and execute it as planned. In principle, this largely requires adherence to the initially planned hierarchy of outcomes and the specific statistical procedure(s), because it is the mathematical analyses and/or computational simulations based on these that establish the study's claimed error control properties. These error properties may be more or less sensitive to changes in a study's analytic plan, but no established rules of thumb predict their sensitivity to omissions or substitutions among outcomes or rearrangement of an outcome hierarchy. Post hoc changes in handling multiple outcomes, including drawing conclusions about treatment efficacy from secondary or tertiary outcomes of a trial with a nonsignificant primary outcome, abandoning or reshuffling components of an outcome hierarchy, or declaring that significance tests are exploratory while drawing conclusions as if they were confirmatory, thus preserve the form but vitiate the technical meaning of statistical significance, with its intrinsic linkage to a specified false positive error rate.

This is a quandary for investigators, because the technical requirement of fidelity to a previously specified approach, while controlling undesirable “fishing expeditions,” also restricts health scientists’ natural proclivities to apply aspects of what is learned from data to their interpretation, and restricts statistical scientists’ natural desires to ensure that technical assumptions of analytic modeling processes are realistic and that analysis efficiently uses the information in the data. The quandary understandably causes discomfort, increased by the perception that journal acceptance rates are tightly linked to statistically significant P values. However, wisely accepting this discomfort honors the Nobel physicist Richard Feynman’s “first principle” of scientific integrity: “that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that.”35 As humans, we are supremely talented at finding threads—“stories”—to describe our experiences, scientific theories being one type of such stories. However, we are less talented at assessing probability and relative plausibility against a background of chance variation. We tend to be suckers for clues, and there are many possible stories, relatively few of which convey great insight.36

Many reviews have documented that outcome omissions, substitutions, reprioritizations, and inventions, often unannounced, appear with disturbing frequency in clinical research reports.3–24 Even when motivated by the sincere desire to capture all possible insights from hard-earned, expensive data, such changes are inherently undesirable and often harmful. They breach the implicit contract between investigators, funders, oversight bodies, and participating patients by invalidating important aspects of the trial rationale under which the study was approved and funded, and patients were consented and enrolled. Although the comments above have stressed issues with efficacy outcomes and false positive errors, similar considerations apply to safety outcomes and errors in identifying safety signals.

This does not mean that deviations from prespecified outcomes and analyses are always scientifically wrong and impermissible. It does mean, however, that such changes should always require a clear rationale and be identified clearly in a paper's main text, with that rationale fully visible to reviewers, editors, and readers, preferably with results from the original plan presented for easy comparison. This allows readers to assess the arguments for and against any notable alterations of the scientific findings from those based on the initial plan. Transparency is no panacea. Here, however, as in other circumstances where principles may collide, it makes the information needed to understand and rationally debate an issue more available; discourages individuals from arbitrary decisions that are hard to defend publicly; illuminates disagreements and, sometimes, paths to consensus as well; and fosters trust, the lifeblood of science. Such transparency is consistent with the spirit and letter of CONSORT and all other widely accepted research reporting guidelines. However, increased understanding by investigators of the dangers of compromising outcomes; heightened alertness by reviewers, editors, and readers to inconsistencies between reported outcomes and their statistical treatments and the study protocols and analysis plans; and strengthened journal policies requiring transparency when addressing such inconsistencies will all be needed for transparent, CONSORT-compliant outcome reporting to become the norm.

Disclosures

Dr. Imrey reports nonfinancial support from the Task Force on Design and Analysis of Oral Health Research and from the Foundation for the Task Force on Design and Analysis of Oral Health Research, all outside the submitted work.

Funding

This work was partially supported by the National Center for Advancing Translational Sciences, grant U54TR001960; the National Institute of Arthritis and Musculoskeletal and Skin Diseases, grants R01AR068342 and R01AR074131; and the Agency for Health Care Research and Quality, grant R01HS024277.

Acknowledgments

Dr. Imrey is grateful to JASN editorial colleagues and submitting authors, whose efforts have stimulated this commentary.

Footnotes

  • Published online ahead of print. Publication date available at www.jasn.org.

  • Copyright © 2019 by the American Society of Nephrology

References

  1. Schulz KF, Altman DG, Moher D; CONSORT Group: CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. Trials 11: 32, 2010
  2. Chatzimanouil MKT, Wilkens L, Anders HJ: Quantity and reporting quality of kidney research. J Am Soc Nephrol 30: 13–22, 2019
  3. Chan AW, Hróbjartsson A, Haahr MT, Gøtzsche PC, Altman DG: Empirical evidence for selective reporting of outcomes in randomized trials: Comparison of protocols to published articles. JAMA 291: 2457–2465, 2004
  4. Chan AW, Krleza-Jeric K, Schmid I, Altman DG: Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. CMAJ 171: 735–740, 2004
  5. Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, Cronin E, et al.: Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One 3: e3081, 2008
  6. Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P: Comparison of registered and published primary outcomes in randomized controlled trials. JAMA 302: 977–984, 2009
  7. Kirkham JJ, Dwan KM, Altman DG, Gamble C, Dodd S, Smyth R, et al.: The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews. BMJ 340: c365, 2010
  8. Hannink G, Gooszen HG, Rovers MM: Comparison of registered and published primary outcomes in randomized clinical trials of surgical interventions. Ann Surg 257: 818–823, 2013
  9. Rosenthal R, Dwan K: Comparison of randomized controlled trial registry entries and content of reports in surgery journals. Ann Surg 257: 1007–1015, 2013
  10. Dwan K, Gamble C, Williamson PR, Kirkham JJ; Reporting Bias Group: Systematic review of the empirical evidence of study publication bias and outcome reporting bias - an updated review. PLoS One 8: e66844, 2013
  11. Dwan K, Altman DG, Clarke M, Gamble C, Higgins JP, Sterne JA, et al.: Evidence for the selective reporting of analyses and discrepancies in clinical trials: A systematic review of cohort studies of clinical trials. PLoS Med 11: e1001666, 2014
  12. Page MJ, McKenzie JE, Kirkham J, Dwan K, Kramer S, Green S, et al.: Bias due to selective inclusion and reporting of outcomes and analyses in systematic reviews of randomised trials of healthcare interventions. Cochrane Database Syst Rev 10: MR000035, 2014
  13. Roest AM, de Jonge P, Williams CD, de Vries YA, Schoevers RA, Turner EH: Reporting bias in clinical trials investigating the efficacy of second-generation antidepressants in the treatment of anxiety disorders: A report of 2 meta-analyses. JAMA Psychiatry 72: 500–510, 2015
  14. Fleming PS, Koletsi D, Dwan K, Pandis N: Outcome discrepancies and selective reporting: Impacting the leading journals? PLoS One 10: e0127495, 2015
  15. Jones CW, Keil LG, Holland WC, Caughey MC, Platts-Mills TF: Comparison of registered and published outcomes in randomized controlled trials: A systematic review. BMC Med 13: 282, 2015
  16. Pandis N, Fleming PS, Worthington H, Dwan K, Salanti G: Discrepancies in outcome reporting exist between protocols and published oral health Cochrane systematic reviews. PLoS One 10: e0137667, 2015
  17. Coronado-Montoya S, Levis AW, Kwakkenbos L, Steele RJ, Turner EH, Thombs BD: Reporting of positive results in randomized controlled trials of mindfulness-based mental health interventions. PLoS One 11: e0153220, 2016
  18. Ioannidis JPA, Caplan AL, Dal-Ré R: Outcome reporting bias in clinical trials: Why monitoring matters. BMJ 356: j408, 2017
  19. Lancee M, Lemmens CMC, Kahn RS, Vinkers CH, Luykx JJ: Outcome reporting bias in randomized-controlled trials investigating antipsychotic drugs. Transl Psychiatry 7: e1232, 2017
  20. Jones PM, Chow JTY, Arango MF, Fridfinnson JA, Gai N, Lam K, et al.: Comparison of registered and reported outcomes in randomized clinical trials published in anesthesiology journals. Anesth Analg 125: 1292–1300, 2017
  21. Goldacre B, Drysdale H, Powell-Smith A, Dale A, Ioan M, Eirion S, et al.: The COMpare Trials Project, 2016. Available at: www.compare-trials.org. Accessed May 29, 2019
  22. Hopewell S, Witt CM, Linde K, Icke K, Adedire O, Kirtley S, et al.: Influence of peer review on the reporting of primary outcome(s) and statistical analyses of randomised trials. Trials 19: 30, 2018
  23. Lee S, Khan T, Grindlay D, Karantana A: Registration and outcome-reporting bias in randomized controlled trials of distal radial fracture treatment. JBJS Open Access 3: e0065, 2018
  24. Glasziou P, Altman DG, Bossuyt P, Boutron I, Clarke M, Julious S, et al.: Reducing waste from incomplete or unusable reports of biomedical research. Lancet 383: 267–276, 2014
  25. Tukey JW: Some thoughts on clinical trials, especially problems of multiplicity. Science 198: 679–684, 1977
  26. Dmitrienko A, Tamhane AC, Bretz F: Multiple Testing Problems in Pharmaceutical Statistics, Boca Raton, FL, CRC Press, 2010
  27. Westfall PH, Tobias RD, Wolfinger RD: Multiple Comparisons and Multiple Tests Using SAS, 2nd Ed., Cary, NC, SAS Institute Inc., 2011
  28. European Medicines Agency: Guideline on Multiplicity Issues in Clinical Trials, EMA/CHMP/44762/2017, 2017. Available at: https://www.ema.europa.eu/documents/scientific-guideline/draft-guideline-multiplicity-issues-clinical-trials_en.pdf. Accessed May 29, 2019
  29. FDA: Draft Guidance for Industry: Multiple Endpoints in Clinical Trials, 2017. Available at: http://www.fda.gov/media/102657/download. Accessed June 7, 2019
  30. Dmitrienko A, D'Agostino RB: Editorial: Multiplicity issues in clinical trials. Statist Med 36: 4423–4426, 2017
  31. Chuang-Stein C, Li J: Changes are still needed on multiple co-primary endpoints. Statist Med 36: 4427–4436, 2017
  32. Sankoh AJ, Li H, D'Agostino RB: Composite and multicomponent end points in clinical trials. Statist Med 36: 4437–4440, 2017
  33. Snapinn S: Some remaining challenges regarding multiple endpoints in clinical trials. Statist Med 36: 4441–4445, 2017
  34. Dmitrienko A, Millen B, Lipkovich I: Multiplicity considerations in subgroup analysis. Statist Med 36: 4446–4454, 2017
  35. Feynman RP: Cargo cult science. In: Surely You're Joking, Mr. Feynman!: Adventures of a Curious Character, New York, W.W. Norton, 1984, pp 338–446
  36. Kahneman D: Thinking, Fast and Slow, New York, Farrar, Straus and Giroux, 2011