Abstract
A common lament is that long-term kidney transplant outcomes remain the same despite improvements in early graft survival. To be fair, progress has been made, both in our understanding of chronic injury and, modestly, in graft survival. However, we are still a long way from solving this important and difficult problem. In this review, we outline recent data supporting the existence of several causes of renal allograft loss, the incidences of which peak at different time points after transplantation. On the basis of this broadened concept of chronic renal allograft injury, we examine the challenges of clinical trial design in long-term studies, including the use of surrogate end points and biomarkers. Finally, we suggest a path forward that, ultimately, may improve long-term renal allograft survival.
- kidney transplantation
- clinical trial
- transplant pathology
- renal transplantation
- chronic allograft failure
- immunosuppression
Improving long-term renal allograft survival is one of the major unmet needs in kidney transplantation. Approximately one-half of graft failures beyond the first post-transplant year are caused by death of the recipient, certainly an important topic in its own right.1 However, the focus of this review is graft failure caused by chronic renal allograft injury in living patients.
Death-censored kidney transplant survival has unquestionably improved over the past quarter century (Figure 1). The half-life of standard criteria deceased donor (SCD) kidneys in the United States has increased from 10.6 years for those transplanted in 1989 to an estimated 15.5 years for transplants performed in 2005.1 Similarly, over the same time period, the half-life of kidneys from living donors (LDs) increased from 17.4 years to an estimated 20.9 years. However, this improvement primarily reflects a dramatic decline in the percentage of grafts failing in the first post-transplant year, decreasing from 8.6% to 4.5% for LD kidneys and from 15.8% to 5.1% for SCD kidneys.
Rates of late renal allograft loss have remained constant. (A) Kaplan–Meier cumulative graft failure and (B) death-censored graft failure by year of transplant for first SCD transplants, 1989–2008. Reprinted from ref. 1, with permission.
Unfortunately, death-censored attrition rates beyond the first year remain relatively unchanged since 1989—3%–5% per year for SCD and 2%–3% per year for LD kidneys. By 10 years, 20%–30% of all renal allografts will have failed.
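To see how such annual rates compound, a rough illustrative calculation (our arithmetic, using representative values from the ranges above: a 5% first-year loss followed by 3% annual attrition over the next 9 years) gives

$$1 - 0.95 \times (0.97)^{9} \approx 0.28,$$

that is, roughly 28% cumulative failure by 10 years, consistent with the 20%–30% figure.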
In recent years, our understanding of late graft failure has changed dramatically. In this review, we outline some of the history of the study of chronic graft injury and summarize data leading to our model. We then critically examine the recent approaches to the study of long-term injury (for example, the use of surrogate end points) and suggest how novel treatment trials might be constructed going forward.
Hypotheses Regarding Chronic Renal Allograft Injury: A Historical Perspective
“Those Who Cannot Remember the Past Are Condemned to Repeat It.”2
It is instructive and sobering to note that much research in chronic renal allograft injury over the past 30 years involved plausible hypotheses subsequently proven to be largely incorrect. In the 1980s, several studies showed that the patients who experienced an early acute cellular rejection (ACR) episode had less favorable long-term (usually 5-year) graft survival than those who remained rejection-free.3–5 Thus, the concept that lowering ACR rates would improve long-term graft survival became widely accepted in kidney transplantation, stimulating the development of new immunosuppressants. Indeed, ACR remains an accepted end point for clinical trials by the Food and Drug Administration (FDA) today. However, when newer protocols lowered ACR rates routinely to <10% in the first year, late graft loss rates did not change.1
In the 1990s, as more detailed histologic studies were conducted, a hypothesis emerged that progressive fibrosis and vascular injury (termed chronic allograft nephropathy [CAN]) were the major cause of graft loss.6 One report from Australia studied 120 patients, 119 of whom were recipients of bladder-drained simultaneous pancreas kidney transplants maintained on cyclosporin/azathioprine-based immunosuppression.7 In surveillance biopsies obtained at 5 years after transplantation, 66% of patients showed moderate-to-severe interstitial fibrosis (i.e., Banff criteria lesions: moderate [ci2], 26%–50% of the cortical interstitium fibrotic; severe [ci3], >50% of the cortical interstitium fibrotic). Because calcineurin inhibitors (CNIs), such as cyclosporin, were known to cause fibrosis in kidneys, it was hypothesized that CAN was largely caused by CNI nephrotoxicity. This concept that chronic CNI nephrotoxicity was a major contributor to chronic graft loss was widely embraced by the transplant community.8,9
Multiple ensuing studies tested the hypothesis that minimization or avoidance of CNI-based therapy could reduce fibrosis and consequently, improve long-term allograft survival. Most studies were relatively short term, with preservation of renal function (GFR), believed to be predictive of ultimate graft survival, used as a surrogate end point. Although some single-center studies suggested that total avoidance of CNIs was safe and associated with better renal function over time,10,11 other trials showed increased ACR in patients on CNI-sparing and CNI-free therapies and minimal, if any, improvement in GFR.12–14 The effect on GFR tended to be particularly limited when the CNI used was tacrolimus rather than cyclosporin.15 For example, the Symphony Study, which enrolled over 1600 subjects followed for 12 months, documented that a tacrolimus-based regimen was associated with less rejection, less fibrosis, and better renal function than either cyclosporin-based or CNI-free, sirolimus-based therapy.15
A major problem with the CNI nephrotoxicity hypothesis is that CNI-associated histologic lesions are relatively nonspecific, and the diagnosis is commonly one of exclusion made in patients with dysfunction and no other discernable histologic diagnosis.16 Ironically, one group suggested that lesions commonly associated with CNI nephrotoxicity were more frequent in recipients who were on lower doses of the agent or nonadherent to therapy.17 Currently, there seems to be emerging consensus that CNIs play little role in graft loss early (<5 years) after transplantation, and CNI use remains ubiquitous in renal transplantation. However, the role of CNI toxicity at later time points continues to be debated, including whether nephrotoxicity is primary or merely exacerbates other mechanisms of injury.
The CTLA4Ig fusion protein, belatacept, was approved by the FDA as part of a CNI-free regimen. It has been shown to result in more ACR but better renal function and less CAN at 1 and 3 years compared with a cyclosporin-based regimen.18 Long-term outcomes of patients treated with belatacept are not yet available, and it will be interesting to see if these early improvements in renal function translate into improvements in graft survival.
Broadening the Concept of Chronic Renal Allograft Injury
In recent years, a broader view of chronic renal allograft injury has emerged on the basis of biopsy data from several large studies of solitary kidney transplants (arguably, a more appropriate study population than bladder-drained simultaneous pancreas kidney recipients). These studies have used two different approaches to investigate the causes of chronic injury: one based on surveillance biopsies of well functioning kidneys and the other based on biopsies for cause in kidneys with dysfunction.
The conclusions reached by these studies may, at first glance, seem somewhat different; however, we believe that these approaches are complementary and need to be reconciled if we are to have a complete picture of chronic graft injury. Longitudinal surveillance biopsy data to date have focused more intensely on events in the first 5 years after transplantation. Data from this time frame suggest that not all grafts experience chronic injury and that, in those that do, there are several different causes19 (Figure 2). In contrast, biopsy-for-cause studies tend to include more late biopsies. Donor-specific alloantibody (DSA) seems to be a more common cause of graft loss during this later time frame, but inflammation also may continue to play a role. Taken together, these studies suggest a more coherent model of chronic renal allograft injury: injury is not ubiquitous (even with CNI-based therapy), and there are distinct causes of injury whose relative importance varies at different times after transplantation.
Changes in renal function after kidney transplantation. MDRD, Modification of Diet in Renal Disease; T12, 1 year; TLast, time of last follow-up (mean, 6.5 years); Q, quintile (patients were divided into quintiles on the basis of their change in GFR over time). Reprinted from ref. 21, with permission.
Graft Injury in the First Few Years: Subclinical Inflammation and Recurrent Disease
These more recent surveillance biopsy studies suggested that not all renal allografts show evidence of significant injury in the first few years after transplantation. In a report of 447 surveillance biopsies at 1 and 5 years post-transplant, the Mayo Clinic group studied recipients of solitary kidney transplants performed between 1998 and 2004 on tacrolimus-based immunosuppression.20 Moderate-to-severe interstitial fibrosis was uncommon in both 1-year (13%) and 5-year (17%) biopsies, rates much lower than the 66% incidence at 5 years described in the earlier Australian study.7 Also, although mild fibrosis (for example, involving ≤25% of the interstitium) was relatively common in the Mayo Clinic study (a 37% prevalence at 1 year), it rarely progressed to more severe forms by 5 years. In fact, many renal allografts (including those from both LDs and deceased donors) showed relatively normal histology at both 1 and 5 years after transplantation. In a subsequent analysis of this cohort, renal function remained stable or improved between 1 and 5 years in 60% of recipients if the 1-year biopsy was normal.21 This concept was further supported by a study showing that the majority of renal allografts in this cohort with good function at 1 year had stable or increased renal function at 5 years after transplantation, whereas only a subset showed declining function (Figure 2).
The possible causes of graft loss in these patients were examined in detail and found to be associated with several distinct post-transplant conditions, including recurrent disease, infection, malignancy, polyomavirus-associated nephropathy, and surprisingly, ACR. Importantly, chronic antibody-associated injury (transplant glomerulopathy) and CNI nephrotoxicity seemed to be uncommon causes of graft loss during the first 5 years after transplantation (Figure 3).
Causes of graft loss after kidney transplantation on the basis of surveillance biopsies. Reprinted from ref. 19, with permission.
Surveillance biopsy studies have also suggested an association between intragraft inflammation, which affects ≤15% of renal allografts at 1 year, and the development of interstitial fibrosis and/or graft loss.22–27 It is unclear if subclinical inflammation represents cell-mediated alloimmunity against the allograft (it often is not severe enough to meet Banff criteria for ACR) or other processes, including a nonspecific response to other forms of injury. In both histologic and gene expression studies, grafts with subclinical inflammation seem qualitatively, if not quantitatively, similar to grafts with clinical ACR.26,27 One study suggested a higher rate of transplant glomerulopathy in grafts with prior subclinical inflammation.28 Thus, there may be a link between the cellular alloimmune response and the development of DSA.
If subclinical inflammation early after transplantation truly represents a failure of conventional immunosuppression to control the alloimmune response, then therapeutic trials aimed at preventing or reversing this process (rather than minimizing immunosuppression) might be a path to improving long-term graft survival in this subset of patients.
Recurrence of native kidney disease has been a recognized cause of renal allograft loss for decades, but early surveillance biopsy studies indicate that its effect on long-term outcomes may have been underemphasized.19 Although early, severe recurrence of primary FSGS and primary oxalosis is well documented, surveillance biopsy studies have also shown a high incidence of recurrence of several glomerular diseases. For example, membranoproliferative GN has a high rate of recurrence after transplantation (42% in one protocol biopsy series), with graft loss commonly occurring in the first few years.29 IgA nephropathy also commonly recurs but does not often progress rapidly to graft failure.19 In relatively young transplant recipients, recurrent disease may be an especially vexing problem, given the frequency of glomerular disease as a cause of ESRD and these patients' lengthy projected post-transplant survival. Obviously, recurrent disease is a heterogeneous problem, but advances in recognition and therapy have the potential to contribute to better long-term graft survival.
Late Graft Injury: The Emergence of Alloantibody
Biopsies performed to evaluate new-onset graft dysfunction or proteinuria >5 years after transplantation consistently indicate a major role for antibody-mediated late graft injury. In the Deterioration of Kidney Allograft Function (DeKAF) Trial, subjects underwent for-cause biopsies 7.5±6.0 years post-transplant. Patients with DSA, C4d, or both were at substantial risk of graft failure over 2 years postbiopsy (Figure 4). The severity of clinical injury correlated with the intensity of antibody response.30
Relationship between DSA and C4d+ staining and graft survival in biopsies for cause. Reprinted from ref. 30, with permission.
Another recent large biopsy-for-cause study reached similar conclusions. The Edmonton group studied 315 allografts, 60 of which failed. As shown below, the incidence of dysfunction caused by antibody-mediated rejection increased over time—especially beyond 5 years after transplantation (Figure 5).31
Causes of graft loss over time in biopsies for cause. Reprinted from ref. 31, with permission.
Although an in-depth review of the role of DSA in late graft failure is beyond the scope of this review, several aspects of this issue are important to our discussion here. Hamburger et al.32 noted a relationship between alloantibody and chronic rejection almost a half century ago. However, the development of several new techniques, including improved methods to identify DSA (single-antigen bead assays),33 improved histologic assessment (C4d staining and the recognition of histologic changes, such as glomerulopathy and microvascular inflammation),34,35 and specific gene expression profiles, has enabled investigators to clearly establish the role of DSA in chronic injury.36,37 Nevertheless, the exact mechanisms of chronic antibody-associated injury remain somewhat unclear. The recognition of C4d− antibody-mediated injury suggests that complement-independent processes, such as microvascular inflammation, may play a major role in chronic antibody-mediated injury.
Several recent longitudinal studies have examined the role of de novo DSA (dnDSA) in late graft loss in more detail.38–40 In previously unsensitized recipients, the prevalence of dnDSA is approximately 11% at 1 year, increasing to approximately 30% at 10 years. The actual 3-year death-censored graft loss after development of dnDSA was 24%. Wiebe et al.39 documented dnDSA in 15% of low-risk patients at 4.6±3 years post-transplant; 10-year graft survival was 57% in those with dnDSA and 96% in those without dnDSA.
Beyond DSA, subclinical inflammation also seems to be an important predictor of late graft loss in biopsy-for-cause studies.41,42 However, this inflammation seems to be slightly different from that seen in surveillance biopsies and illustrates yet another important difference between the two biopsy approaches. Not only are biopsies for cause obtained later, they are obtained from grafts with dysfunction that commonly have more chronic damage (fibrosis and tubular atrophy, for example) than the average well functioning renal allograft. In these biopsies, inflammation in areas of fibrosis has been shown to be relatively common and an important predictor of subsequent graft loss. For example, analysis of the DeKAF cohort suggested that relatively intense inflammation in areas of fibrosis (for example, a score of 3, indicating >50% involvement) was associated with an increased risk of graft loss, even when adjusted for the amount of fibrosis.42 Inflammation with such advanced fibrosis is less common in surveillance biopsies, and it is unclear whether this represents a different pathologic process or a continuum of chronic injury.
Certainly, the biopsy data summarized here underscore the deficiencies of using histology alone to classify chronic injury processes. Genomic and proteomic methodologies might enhance the diagnostic accuracy of biopsies and provide new insight into mechanisms of damage.43,44 However, the preponderance of existing evidence supports alloimmune mechanisms as profoundly significant causes of late allograft failure.
Contributing Factors
In addition to the major factors linked to late graft failure, there are likely others that play a more secondary role in pathogenesis, either enabling or exacerbating chronic injury. For example, the effect of early post-transplant events (ischemia/reperfusion injury and innate immunity) on chronic injury remains unclear, but they may set the stage for other adverse events. Allograft quality and donor age are factors commonly associated with graft loss in multivariate analyses and thus may affect the ultimate outcome of subsequent alloimmune injury.45,46 Other factors, such as long-standing hypertension and diabetes, also may contribute to graft injury, with the potential to modify the efficacy of any therapeutic intervention. Recently, one group suggested a connection between the microbiome and chronic injury.47 Finally, although recognized early on, nonadherence has been re-emphasized in recent studies as a causative or contributing factor in some cases of late graft failure.
When viewed in perspective, these studies indicate that there are multiple causes of late renal allograft loss. It is a remarkable calculus in which these heterogeneous pathologic influences, occurring with varying frequency and at different time points, result in an almost linear rate of graft loss over many years.
Clinical Trial Design and Surrogate End Points
This broadened view of chronic injury has important implications for the rational design of clinical trials to prevent or treat chronic renal allograft injury. Two areas that deserve specific mention are the inclusion criteria and the study end points.
Because most patients do not have progressive dysfunction, most will not require alteration of immunosuppression beyond current standards. Including patients who would do well anyway in a study aimed at improving graft survival is unnecessary; it increases costs and detracts from the ability to show an effect of the intervention. Thus, it is unlikely that a single novel therapeutic intervention, particularly if instituted at the time of transplantation, will show improvements in long-term graft survival. Progress will involve the enrollment of patients with a specific cause of graft loss at some point after transplantation. Although this approach risks intervening too late, when the injury process may not be easily reversed or graft damage is permanent, it should enable the design of more feasible clinical trials. Defining inclusion criteria and the timing of intervention are two of the major challenges in this area; relatively nonspecific inclusion criteria, such as fibrosis or inflammation, risk including a heterogeneous group of patients who may not respond equally to the therapeutic intervention.
However, even in a subset of high-risk recipients with a specific identifiable cause of injury, any study of chronic renal allograft injury still will take many years to show efficacy. This brings us to the important role of surrogate end points in the study of chronic renal allograft injury.
In the context of long-term studies, a surrogate end point is used as the study end point rather than graft loss. The primary benefit of a surrogate marker is to decrease the time interval between therapeutic intervention and ability to determine efficacy. Ideally, improvement in a surrogate marker would lead to improvement in subsequent graft survival. Unfortunately, the correlation between improvement in a surrogate marker and improvement in graft survival is always less than perfect. In many instances, the correlation can be quite poor. Thus, we contend that surrogate markers must still be considered interim end points. The final end point remains graft survival.
Surrogate end points, such as renal function, DSA, proteinuria, and graft histology, have been used in renal transplant trials for many years, and all have been shown to have some correlation with late transplant outcomes (Table 1). The past several years have seen an explosion in the number of biomarkers in renal transplantation. Although any factor is technically a biomarker, most studies have focused on specific molecules (cytokines, chemokines, cell surface molecules, gene expression signatures, microRNA, etc.) studied in urine, blood, or tissue.42,48–59 Many studies of biomarkers have aimed to detect surrogates for ACR using either a single biomarker or a set of biomarkers. Other studies have focused on markers of delayed graft function or progression of native renal disease. A few biomarker studies have focused on chronic injury (Table 2).
Surrogate biomarkers in kidney transplantation: Surrogate end points shown to correlate with post-transplant outcome
Additional biomarkers proposed to correlate with post-transplant outcome
Several recent reviews have examined biomarkers in detail,42,48–50 and we will confine our discussion to relatively few, with a special emphasis on issues related to the study of chronic injury.
As we describe here, the problem with many of the biomarkers that have been suggested is that they are relatively nonspecific and thus, may be problematic end points for clinical trials aimed at treating a specific cause of late graft loss.
Renal Function
Improvement in renal function at early time points (for example, at 1 year) is a commonly used surrogate end point for improved graft survival. Beyond the ongoing debate regarding the optimal approach for determining GFR and the need for accurate serial measurements, there are pitfalls to using early renal function as a surrogate in studies aimed at preventing graft failure. Most important is the fact that the correlation between early renal function and subsequent graft survival is relatively poor. Although it is true that a low GFR at 1 year is associated with a higher rate of graft loss, its ability to predict graft failure is relatively limited.60 Indeed, most patients with graft loss between 1 and 5 years have normal renal function at 1 year.21 There also is little evidence that improving early GFR actually improves graft survival. Declining GFR (ΔGFR) over time may be more predictive of late allograft failure and therefore, a better surrogate end point.
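As a minimal illustration (ours, not the authors'), the sketch below estimates GFR with one common, IDMS-traceable form of the 4-variable MDRD equation (the estimate referenced in Figure 2) and summarizes ΔGFR as the least-squares slope of serial values; the patient characteristics and creatinine values are hypothetical.

```python
# Illustrative sketch only: estimating eGFR with one common (IDMS-traceable)
# form of the 4-variable MDRD equation and summarizing delta-GFR as the slope
# of serial estimates. Patient values below are hypothetical.
import numpy as np

def mdrd_egfr(scr_mg_dl, age_years, female=False, black=False):
    """4-variable MDRD study equation (IDMS-traceable form), mL/min per 1.73 m^2."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

def gfr_slope_per_year(months, egfr_values):
    """Least-squares slope of eGFR over time, expressed per year."""
    slope_per_month, _ = np.polyfit(months, egfr_values, 1)
    return slope_per_month * 12.0

# Hypothetical 50-year-old male recipient with slowly rising creatinine (mg/dL).
months = np.array([12, 24, 36, 48, 60])
scr = np.array([1.3, 1.4, 1.6, 1.8, 2.1])
egfr = np.array([mdrd_egfr(s, 50 + m / 12) for s, m in zip(scr, months)])
print(f"eGFR at 1 year: {egfr[0]:.1f}; slope: {gfr_slope_per_year(months, egfr):.1f} mL/min per 1.73 m^2 per year")
```

A patient like this, with apparently acceptable function at 1 year but a steadily negative slope, is exactly the profile that ΔGFR captures and a single early GFR measurement does not.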
Histology
Histologic changes, such as transplant glomerulopathy or subclinical inflammation, are moderately good predictors of subsequent graft loss.23 However, even if transplant glomerulopathy could be both accurately quantified and associated with a 50% incidence of graft loss within 5 years of diagnosis, very large numbers of subjects would be necessary to show efficacy. Studies using subclinical inflammation at 1 year would require even larger numbers. Histologic markers also are invasive, costly, and prone to errors in sampling, quantitation, and interpretation. It is also not certain whether surveillance biopsies or biopsies for cause offer the best chance for identification of high-risk patients at a time point when chronic injury is reversible (likely more common in surveillance biopsies and less common in for-cause biopsies). Still, using histologic changes as inclusion criteria or their reversal or stabilization as an end point would seem a fertile area for future study of interventions in chronic injury.
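To make the sample-size concern concrete, the back-of-the-envelope sketch below (our assumptions, not the authors') estimates how many patients per arm a two-arm trial would need if transplant glomerulopathy did carry a 50% 5-year graft loss and a new therapy were hoped to cut that to 35%; even under these optimistic assumptions, several hundred patients followed for 5 years would be required.

```python
# Illustrative calculation only: normal-approximation sample size for comparing
# two proportions (5-year graft loss), using hypothetical effect sizes.
from math import sqrt
from statistics import NormalDist

def n_per_arm(p_control, p_treated, alpha=0.05, power=0.80):
    """Per-arm sample size to detect p_control vs. p_treated (two-sided test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p_control + p_treated) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p_control * (1 - p_control) + p_treated * (1 - p_treated))) ** 2
    return numerator / (p_control - p_treated) ** 2

# Hypothetical: 50% 5-year graft loss with transplant glomerulopathy, reduced to 35%.
print(round(n_per_arm(0.50, 0.35)))  # roughly 170 patients per arm, before any dropout
```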
DSA
DSA seems to be a relatively good surrogate marker in studies of chronic antibody-mediated injury. It is relatively noninvasive and thus, can be performed sequentially in the same patient, allowing for pre- and post-treatment assessment with a primary end point of a decrease in DSA or even total disappearance of DSA in the treatment group. Alternatively, preventing dnDSA altogether might be an appropriate end point in a broader study of newly transplanted patients.
However, there are several problems even with this relatively objective measure. First, DSA has only a moderate correlation with graft loss. As described above, the actual 3-year death-censored graft loss after dnDSA was 24%. Thus, even studies with DSA as a surrogate end point will require relatively large numbers of patients followed for many years to show improvement in graft survival. Second, there is no consensus on how to measure DSA. For example, LABScreen and other solid-phase assays are not licensed by the FDA for measuring different levels of DSA, only for determining their presence or absence. There also can be significant interlaboratory variability in these tests. Recently, C1q-binding antibodies have been suggested to correlate better with both acute and chronic antibody-mediated injury and thus, may be a better surrogate marker.61,62 Finally, serum DSA may not always correspond with tissue injury. In the DeKAF Study, for example, patients whose biopsies were C4d+ had the same risk of graft failure with or without detectable DSA. This result could reflect the effect of non-HLA antibody or low levels of circulating alloantibody, which are difficult to distinguish at present.
Gene Expression Signatures
As mentioned above, relatively few biomarkers have been identified as being related to chronic injury, and it is unclear which approach will emerge as most important. The most widely studied biomarkers have been the gene expression signatures in allograft biopsies associated with chronic antibody-mediated injury described by the Edmonton group.63 Using an Affymetrix gene chip platform, this group identified a set of genes associated with antibody-mediated rejection (AMR), primarily genes related to natural killer cells and microvascular inflammation.63 This test set of genes was validated in a separate group of biopsies and thus, has been shown to be reproducible.64 The group has suggested that gene expression enhances the diagnosis of AMR when used in conjunction with other factors, including histology.65 As we gain more experience in this area, changes in gene expression signatures or other biomarkers will likely emerge as important surrogate end points for chronic injury intervention trials.
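For illustration only, the sketch below shows how a biopsy-level molecular score might be summarized from normalized expression data and tracked before and after an intervention; the gene identifiers, values, and threshold are hypothetical placeholders and are not the published Edmonton classifier.

```python
# Generic illustration, NOT the published Edmonton classifier: summarizing a
# hypothetical AMR-associated gene set as a per-biopsy molecular score.
import numpy as np

# Placeholder transcript identifiers (hypothetical).
AMR_GENE_SET = ["GENE_NK_1", "GENE_NK_2", "GENE_ENDO_1", "GENE_ENDO_2"]

def molecular_score(expression, gene_set=AMR_GENE_SET):
    """Mean normalized (log2) expression of the gene set in one biopsy."""
    return float(np.mean([expression[g] for g in gene_set]))

def flag_molecular_amr(score, threshold=2.0):
    """Flag a biopsy if its score exceeds a (hypothetical) threshold."""
    return score > threshold

# Hypothetical pre- and post-treatment biopsies from the same patient.
pre = {"GENE_NK_1": 3.1, "GENE_NK_2": 2.8, "GENE_ENDO_1": 2.5, "GENE_ENDO_2": 3.0}
post = {"GENE_NK_1": 1.9, "GENE_NK_2": 1.6, "GENE_ENDO_1": 1.4, "GENE_ENDO_2": 1.8}
for label, biopsy in [("pre", pre), ("post", post)]:
    s = molecular_score(biopsy)
    print(f"{label}-treatment score {s:.2f}, molecular AMR flag: {flag_molecular_amr(s)}")
```

In an intervention trial, a fall in such a score (however the signature is ultimately defined and validated) could serve as a tissue-level surrogate alongside DSA.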
A Path Forward
The rapid advance in transplant therapeutics that characterized the last quarter of the 20th century has slowed significantly. This trend, recognized in recent FDA workshops, has become a major concern to the transplant community. In its recent Call to Action, the American Society of Transplantation highlighted the issue by noting that additional advances are unlikely without novel clinical trial designs.66 This is especially true for trials of therapies to prevent chronic renal allograft injury.
Table 3 outlines an example of the type of protocol that would address the unique problems facing studies of chronic injury. This protocol would enroll patients with a clearly defined cause of graft injury that places them at high risk for subsequent graft loss (in this case, dnDSA). The protocol uses a surrogate marker that is closely related to the underlying pathologic process: a reduction in DSA or prevention of transplant glomerulopathy. Achieving this primary end point would be sufficient to obtain interim or expedited drug approval from regulatory agencies, such as the FDA (for example, under Subpart H).67 The demonstration of improved graft survival would lead to final approval at some later time point.
Proposed clinical trial design for a new therapy to prevent chronic renal allograft injury
Similar studies could be designed to study recurrent disease, polyomavirus-associated nephropathy, or even nonadherence. Studies like these also could be used to validate candidate biomarkers, such as intragraft gene expression signatures, as surrogates for graft loss. Although this approach does not guarantee success, it is likely to yield important knowledge about the mechanisms of antibody-mediated injury regardless of the outcome.
We are optimistic regarding our ability to improve long-term renal allograft survival. Our knowledge of late graft failure has advanced significantly in the last decade and is sufficient to generate several new testable hypotheses. We, like our predecessors, may be at risk of launching therapeutic trials on the basis of faulty paradigms. However, delaying new trials until each nuance is carefully explained and can be addressed will serve neither our patients nor our profession. Advances will come only if all interested parties adopt new investigative approaches to improving kidney transplant survival.
Disclosures
The authors have received research contracts from Millennium and Alexion.