Abstract
Small-solute clearance targets for peritoneal dialysis (PD) have been based on the tacit assumption that peritoneal and renal clearances are equivalent and therefore additive. Although several studies have established that patient survival is directly correlated with renal clearances, there have been no randomized, controlled, interventional trials examining the effects of increases in peritoneal small-solute clearances on patient survival. A prospective, randomized, controlled, clinical trial was performed to study the effects of increased peritoneal small-solute clearances on clinical outcomes among patients with end-stage renal disease who were being treated with PD. A total of 965 subjects were randomly assigned to the intervention or control group (in a 1:1 ratio). Subjects in the control group continued to receive their preexisting PD prescriptions, which consisted of four daily exchanges with 2 L of standard PD solution. The subjects in the intervention group were treated with a modified prescription, to achieve a peritoneal creatinine clearance (pCrCl) of 60 L/wk per 1.73 m2. The primary endpoint was death. The minimal follow-up period was 2 yr. The study groups were similar with respect to demographic characteristics, causes of renal disease, prevalence of coexisting conditions, residual renal function, peritoneal clearances before intervention, hematocrit values, and multiple indicators of nutritional status. In the control group, pCrCl and peritoneal urea clearance (Kt/V) values remained constant for the duration of the study. In the intervention group, pCrCl and peritoneal Kt/V values predictably increased and remained separated from the values for the control group for the entire duration of the study (P < 0.01). Patient survival was similar for the control and intervention groups in an intent-to-treat analysis, with a relative risk of death (intervention/control) of 1.00 [95% confidence interval (CI), 0.80 to 1.24]. Overall, the control group exhibited a 1-yr survival of 85.5% (CI, 82.2 to 88.7%) and a 2-yr survival of 68.3% (CI, 64.2 to 72.9%). Similarly, the intervention group exhibited a 1-yr survival of 83.9% (CI, 80.6 to 87.2%) and a 2-yr survival of 69.3% (CI, 65.1 to 73.6%). An as-treated analysis revealed similar results (overall relative risk = 0.93; CI, 0.71 to 1.22; P = 0.6121). Mortality rates for the two groups remained similar even after adjustment for factors known to be associated with survival for patients undergoing PD (e.g., age, diabetes mellitus, serum albumin levels, normalized protein equivalent of total nitrogen appearance, and anuria). This study provides evidence that increases in peritoneal small-solute clearances within the range studied have a neutral effect on patient survival, even when the groups are stratified according to a variety of factors (age, diabetes mellitus, serum albumin levels, normalized protein equivalent of total nitrogen appearance, and anuria) known to affect survival. No clear survival advantage was obtained with increases in peritoneal small-solute clearances within the range achieved in this study.
The role of small-solute clearances, determined on the basis of creatinine and urea kinetics, in influencing outcomes among patients undergoing dialysis is under increased scrutiny (1–14). Clinical treatment guidelines proposed by professional renal societies have consistently emphasized small-solute clearance targets as a prominent component of the overall adequacy of renal replacement therapy (3,15–17). In most cases, these targets are based on the interpretation of observational studies (1,2,4,7–9,12–14,18–24). Although surveys of adequacy measures have demonstrated progressively more patients achieving higher clearance targets with time, they have also identified a significant proportion of patients who are unable to achieve the higher targets (25–27). Furthermore, there is a risk that these targets, which are unsubstantiated by controlled studies, could become institutionalized by regulatory and governmental agencies.
For peritoneal dialysis (PD), small-solute clearance targets have often been established on the basis of the tacit assumption that peritoneal and renal clearances are equivalent and therefore additive (1,3,10,14,16,17,19,22,23). Total small-solute clearance targets not only have been defined differently by various expert nephrology committees (3,15–17) but also have been modified with time by given committees (3,15). The quality of the evidence on which recommendations for clearance targets are based has come under increasing scrutiny (1,5,6). Most studies that examined the relationship between small-solute clearances and mortality rates noted that patient survival was directly correlated with renal clearance (1,2,7,9,11,12,14,23). The contribution of peritoneal clearance has remained unclear (1,2,12,18,23). However, the assumption that renal and peritoneal clearances are additive continues to prevail and drive clinical guidelines that influence current clinical practice (3,15–17). This hypothesis has become so widely accepted that even studies that demonstrate no effect of peritoneal small-solute clearances on outcomes have been interpreted as indicating that maintained levels of total clearance (renal plus peritoneal) determine survival (28). Furthermore, this hypothesis has fostered the idea that, as the contribution of renal clearance decreases with time, it can be replaced in its effect on survival by increases in peritoneal clearances (3,15–17).
The perceived need to enhance peritoneal clearance increases the logistic burden of therapy. The drive for higher volumes and/or more exchanges has resulted in more cost, lower quality of life, increased rates of withdrawal because of an inability to meet defined targets, reluctance to initiate PD for large or anuric patients, and attempts to develop technologies that would enhance peritoneal small-solute clearances. What has heretofore been missing is a prospective, randomized, controlled, interventional examination of the effects of increased peritoneal small-solute clearances (beyond a standard minimal prescription) on survival for patients undergoing PD.
Materials and Methods
Study Design
We conducted a prospective, randomized, controlled, clinical trial called ADEMEX (ADEquacy of PD in MEXico), which examined the effects of increased PD small-solute clearances on mortality rates among patients with end-stage renal disease (ESRD) who were being treated with continuous ambulatory PD (CAPD). The study protocol was approved by the local clinical research committees of all participating centers, and all study subjects gave written informed consent. Patients were recruited from 24 dialysis centers in 14 Mexican cities. Twenty-one of the dialysis centers were part of the Instituto Mexicano del Seguro Social, and two were part of the Instituto de Seguridad y Servicios Sociales de los Trabajadores del Estado. The remaining center was the Instituto Nacional de Ciencias Médicas y Nutrición Salvador Zubirán in Mexico City. Between June 1998 and May 1999, the study enrolled 965 patients undergoing CAPD, who were randomized into the control group or the intervention group. By design, the study was terminated in May 2001, when the last enrolled patient had completed 2 yr of follow-up monitoring. With an intent-to-treat (ITT) analysis, the total number of patient-months at risk was 10,464.7 patient-mo for the control group (mean follow-up period, 21.6 mo) and 10,629.0 patient-mo for the intervention group (mean follow-up period, 22.1 mo). As-treated follow-up periods averaged 18.9 mo for the control group and 18.8 mo for the intervention group. The study was performed in Mexico because the only prescription available at the time of study initiation was CAPD with four exchanges of 2 L daily.
Power Calculations
With a target sample of 800 patients (400 patients/group), the study was originally powered at 80% to detect a 5% absolute difference in 1-yr survival or, equivalently, a 30% reduction in the mortality rate [relative risk (RR) = 0.70]. This was based on the assumptions that the mortality rate for the control group would be 23 deaths/100 patient-yr, that the accrual period would be 4 mo, and that there would be 2 yr of follow-up monitoring from the date on which the last patient was enrolled. The final sample size of 965 patients exceeded the original study plan and, with the aforementioned assumptions, actually yields a power of 90% (power = 0.895) to detect a 30% reduction in the mortality rate or, alternatively, a power of nearly 75% (power = 0.742) to detect a 25% reduction in the mortality rate (RR = 0.75). The mortality rate of 23 deaths/100 patient-yr was based on the assumptions that new patients would average 17 deaths/100 patient-yr, prevalent dialysis patients would average 25 deaths/100 patient-yr, and 25% of the patients would be new patients. In actuality, the observed mortality rate for the control group was 18 deaths/100 patient-yr (157 deaths in 10,464.7 patient-mo) and accrual required 1 yr, thus extending the total length of the study to nearly 3 yr. Under these actual conditions, the final sample of 965 patients (484 control, 481 test) provides an observed power of 85% to detect a 30% reduction in the mortality rate (RR = 0.70) and nearly 70% power (power = 0.69) to detect a 25% reduction in the mortality rate (RR = 0.75). Therefore, with the extended accrual and observation periods and the increased number of patients, the study achieved an “observed power” sufficient to detect a reduction in the mortality rate of 25 to 30%, which is similar to that targeted in the hemodialysis study (29).
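For readers who wish to reproduce these figures approximately, the sketch below combines an exponential survival model with uniform accrual and Schoenfeld's events formula. It is an illustration under those stated assumptions, not the study's actual calculation, and it recovers the reported powers to within a few percentage points.

```python
# Sketch (not the investigators' actual computation): approximate log-rank power
# under exponential death times, uniform accrual, and administrative censoring,
# using Schoenfeld's events formula. Inputs come from the Power Calculations text.
import math
from scipy.stats import norm

def expected_death_prob(rate_per_yr, accrual_yr, min_followup_yr):
    """P(death) for one patient; follow-up is uniform on [F, F + A] years."""
    lam, A, F = rate_per_yr, accrual_yr, min_followup_yr
    if A == 0:
        return 1.0 - math.exp(-lam * F)
    mean_surv = math.exp(-lam * F) * (1.0 - math.exp(-lam * A)) / (lam * A)
    return 1.0 - mean_surv

def power_log_rank(n_ctrl, n_int, ctrl_rate, hr, accrual_yr, followup_yr, alpha=0.05):
    """Approximate two-sided log-rank power via Schoenfeld's formula (1:1 allocation)."""
    d = (n_ctrl * expected_death_prob(ctrl_rate, accrual_yr, followup_yr)
         + n_int * expected_death_prob(ctrl_rate * hr, accrual_yr, followup_yr))
    z = math.sqrt(d) / 2.0 * abs(math.log(hr)) - norm.ppf(1.0 - alpha / 2.0)
    return norm.cdf(z)

# Original plan: 23 deaths/100 patient-yr, 4-mo accrual, 2-yr follow-up, RR = 0.70.
print(power_log_rank(400, 400, 0.23, 0.70, 4 / 12, 2.0))  # ~0.84; study reports 80% for 800 patients
print(power_log_rank(484, 481, 0.23, 0.70, 4 / 12, 2.0))  # ~0.90; study reports 0.895 for 965 patients
# Observed conditions: 18 deaths/100 patient-yr, 1-yr accrual.
print(power_log_rank(484, 481, 0.18, 0.70, 1.0, 2.0))     # ~0.88; study reports ~0.85
```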
Selection of Patients
Study subjects were recruited primarily via screening of patients with ESRD who were treated with PD at the participating centers. Chronically treated and new (within 3 mo after initiation of PD) patients were eligible for participation if they met the inclusion and exclusion criteria. All patients who were between 18 and 70 yr of age were eligible if they were undergoing CAPD (with a prescription of four daily exchanges of 2 L) and exhibited measured peritoneal creatinine clearance (pCrCl) values of <60 L/wk per 1.73 m2, irrespective of their residual renal function. Patients who were unable to give informed consent, were seropositive for hepatitis B or HIV, were receiving immunosuppressive drugs, had active malignancies, abdominal hernias, or cardiac failure, or had experienced a peritonitis episode ≤1 mo before being screened for the study were excluded.
Randomization
A total of 965 subjects in 24 centers were randomly assigned to the intervention or control group (in a 1:1 ratio) through a central randomization center. Subjects in the control group continued with their existing PD prescriptions, which consisted of four daily exchanges of 2 L of standard PD solution (the only prescription available in Mexico). Subjects randomized to the intervention group were prescribed a modified PD regimen to achieve a pCrCl value of 60 L/wk per 1.73 m2. Only two prescription modifications to achieve that target were allowed for each patient. The first new prescription was based on body size; patients with a body surface area (BSA) of ≤1.78 m2 received a prescription of four exchanges of 2.5 L in 24 h (4 × 2.5 L) and patients with a BSA of >1.78 m2 received a prescription of four exchanges of 3.0 L in 24 h (4 × 3.0 L). A BSA of 1.78 m2 corresponds to the value separating the highest tertile of the BSA distribution for an unselected population of patients undergoing PD in Mexico from the middle tertile. After 2 mo of this therapy, pCrCl was measured again. If the patients had reached the pCrCl target with the first prescription change, they continued with the same regimen for the remainder of the study, provided that they tolerated the increased fill volume. Patients who failed to reach the clearance goal and who manifested no intolerance for the increased fill volumes were issued a second modified prescription, which was also based on body size. Patients with a BSA of ≤1.78 m2 received a second prescription of five exchanges of 2.5 L in 24 h (5 × 2.5 L), with the aid of an automated nighttime exchange device (Quantum; Baxter Healthcare Corp., Deerfield, IL). Similarly, patients with a BSA of >1.78 m2 who did not reach the clearance target but tolerated the 3.0-L dwell volumes received five exchanges of 3.0 L in 24 h (5 × 3.0 L), with the aid of an automated nighttime exchange device (Quantum).
Patients with a BSA of >1.78 m2 who achieved a pCrCl above the clearance target but were intolerant of the increased fill volumes underwent prescription modification as follows: if the achieved pCrCl was >70 L/wk per 1.73 m2, the prescription was modified to four exchanges of 2.5 L; if the achieved pCrCl was between 60 and 70 L/wk per 1.73 m2, the prescription was modified to two exchanges of 2.5 L (morning and afternoon) and two exchanges of 3.0 L (evening and night). Patients with a BSA of >1.78 m2 who did not reach the clearance target and did not tolerate the 3.0-L dwell volumes received three exchanges of 2.5 L (morning, afternoon, and evening) and two exchanges of 3.0 L with Quantum (night).
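The prescription rules for the intervention group can be summarized as a short decision function. The sketch below is a simplified restatement of the rules described in the two preceding paragraphs; it is illustrative rather than the study's operational protocol, and cases not addressed in the text (e.g., patients with a BSA of ≤1.78 m2 who were intolerant of the larger fill volumes) are deliberately left unhandled.

```python
# Simplified restatement of the intervention-group prescription rules described above.
from typing import Optional

TARGET_PCRCL = 60.0  # L/wk per 1.73 m2
BSA_CUTOFF = 1.78    # m2, upper-tertile boundary for an unselected Mexican PD population

def first_prescription(bsa: float) -> str:
    return "4 x 2.5 L" if bsa <= BSA_CUTOFF else "4 x 3.0 L"

def second_prescription(bsa: float, achieved_pcrcl: float,
                        tolerates_fill_volume: bool) -> Optional[str]:
    """Prescription after the 2-mo reassessment of pCrCl on the first prescription."""
    if achieved_pcrcl >= TARGET_PCRCL and tolerates_fill_volume:
        return first_prescription(bsa)          # target met: keep the first prescription
    if not tolerates_fill_volume and bsa > BSA_CUTOFF:
        if achieved_pcrcl > 70.0:
            return "4 x 2.5 L"
        if achieved_pcrcl >= TARGET_PCRCL:
            return "2 x 2.5 L + 2 x 3.0 L"
        return "3 x 2.5 L + 2 x 3.0 L (night exchange with Quantum)"
    if not tolerates_fill_volume:
        return None                              # case not specified in the protocol text
    # Target not met, larger volumes tolerated: add a fifth, device-assisted exchange.
    return "5 x 2.5 L (Quantum)" if bsa <= BSA_CUTOFF else "5 x 3.0 L (Quantum)"
```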
Study Endpoints
Death was the primary endpoint for the study. The secondary endpoints were hospitalizations, therapy-related complications, correction of anemia, and effects on nutritional status (as determined on the basis of normalized protein equivalent of total nitrogen appearance [nPNA] measurements and serum albumin, prealbumin, and transferrin concentrations). Primary and secondary outcome events (deaths, hospitalizations, and clinical events) were counted from the time of randomization, whereas clinical and laboratory features were assessed at scheduled intervals after randomization for the control group and after stabilization of the dialysis prescription for the intervention group.
Clinical and Biochemical Assessments
Follow-up visits were scheduled at 2-mo intervals, beginning immediately after randomization for the control group and after stabilization with a final prescription for the intervention group. At baseline and at each of these visits, a clinical history assessment and a physical examination were performed. Laboratory assessments were completed at every other follow-up visit (every 4 mo). At those times, the patients brought in 24-h dialysate and urine collections, for clearance measurements, and blood samples were obtained. Hematologic features, serum electrolyte, calcium, and phosphate levels, aminotransferase levels, and bilirubin concentrations were measured in the local laboratories of the participating centers. Serum samples, as well as aliquots of dialysate and urine, were sent to the coordinating center for measurement of glucose levels, blood urea nitrogen levels, and creatinine levels [for Kt/V and creatinine clearance (CrCl) calculations], as well as cholesterol, triglyceride, HDL, and LDL levels, with conventional techniques (Synchron CX-5 analyzer; Beckman, Brea, CA). Albumin, prealbumin, and transferrin levels were measured by nephelometry (Array; Beckman). Peritoneal transport characteristics were determined at baseline for all patients (while they were receiving a standard 4 × 2 L CAPD regimen) with the dialysis adequacy and transport test (DATT), the results of which were demonstrated in the same population to be closely correlated with the classification of peritoneal transport with the peritoneal equilibration test (PET) (30).
Monitoring
Patient compliance was ascertained via dialysate bag counts during unscheduled home visits, taking into account the number of bags delivered and the expected utilization (based on the prescription). Patient noncompliance was also assessed at each office/clinic visit, in an interview with the attending nurse or physician. Patients were asked how many exchanges they had missed in the week before the current visit, and the results were recorded for analysis. Finally, serum creatinine levels were recorded as a more objective measure of compliance. Adverse events and deaths were monitored by a safety committee and an interim analysis was performed at a scheduled time, to ensure patient safety.
Statistical Analyses
Overall patient survival analysis was performed by using life-table techniques, with comparisons made on the basis of the log-rank test (31,32). Adjustments for baseline differences in demographic characteristics and comorbid conditions and for differences in nutritional parameters were performed by using the Cox proportional-hazards model (31–34). The model makes use of both time-independent and time-dependent covariates. To account for possible nonproportional hazards, a piecewise exponential survival model (or interval Poisson regression) with time-dependent covariates was also used to compare the RR between the intervention and control groups at 6-mo intervals (32). All primary analyses were performed using an ITT approach. In the ITT analysis, all patients randomized to the intervention and control groups were analyzed and compared, regardless of the actual dialysis prescription achieved by each patient (i.e., regardless of whether patients in the intervention group achieved a target pCrCl of 60 L/wk per 1.73 m2). To safeguard against informative censoring, patient follow-up data with respect to death were maintained until the study end date for all patients who switched to alternative forms of dialysis (either hemodialysis or PD). Therefore, in the ITT analysis, patient survival times were censored only when patients received a transplant, experienced a return of renal function, were lost to follow-up monitoring, or completed the study. An as-treated analysis was also performed, in which patients were censored if they withdrew from the study alive on or before May 6, 2001, regardless of the reason for withdrawal. In this analysis, patient data were analyzed according to the treatment group to which the patients were randomized but follow-up monitoring ceased upon death, withdrawal from the study, or May 6, 2001, whichever came first.
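As an illustration of this analytic plan, the sketch below shows how the unadjusted (log-rank) and covariate-adjusted (Cox) ITT comparisons could be set up with the lifelines package. The data file and column names (e.g., "months", "died", "treated") are assumptions for illustration and do not refer to the study database; the piecewise exponential (interval Poisson) comparison can be fitted with the same Poisson regression machinery sketched after the rate-analysis paragraph below.

```python
# Illustrative setup of the ITT survival comparisons (assumed column names).
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("ademex_itt.csv")   # hypothetical file: one row per patient
ctrl = df[df["group"] == "control"]
intv = df[df["group"] == "intervention"]

# Unadjusted comparison: life tables / Kaplan-Meier estimates and the log-rank test.
km_ctrl = KaplanMeierFitter().fit(ctrl["months"], ctrl["died"], label="control")
km_intv = KaplanMeierFitter().fit(intv["months"], intv["died"], label="intervention")
print(km_ctrl.predict(12), km_ctrl.predict(24))   # 1-yr and 2-yr survival estimates
lr = logrank_test(ctrl["months"], intv["months"],
                  event_observed_A=ctrl["died"], event_observed_B=intv["died"])
print(lr.p_value)

# Adjusted comparison: Cox proportional-hazards model with baseline covariates
# ("treated" is a 0/1 indicator for the intervention group).
cph = CoxPHFitter()
cph.fit(df[["months", "died", "treated", "age", "diabetes", "albumin", "npna"]],
        duration_col="months", event_col="died")
cph.print_summary()   # exp(coef) for "treated" corresponds to the adjusted RR of death
```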
In addition to providing an overall comparison of patient survival data, ITT analyses were performed for select subgroups of patients. Two subgroups were formed, on the basis of patient baseline serum albumin concentrations and baseline nPNA values. For serum albumin levels, the two subgroups corresponded to patients with baseline serum albumin levels of <3.0 and ≥3.0 g/dl. With respect to nPNA, the two subgroups corresponded to patients with nPNA values of <0.80 g/kg per d and ≥0.80 g/kg per d. Another subgroup analysis examined whether cases were prevalent or incident at the time of study initiation. Incident cases included all patients for whom dialysis was initiated ≤3 mo before the time of randomization. Analyses were also performed with stratification according to diabetic status (diabetes mellitus or no diabetes mellitus) and age (<50 or ≥50 yr). Separate analyses were performed with adjustments for possible center-to-center differences. Lastly, a separate analysis was performed for patients who were functionally anephric (GFR of <1 ml/min) at baseline.
Hospitalization and infection rates (e.g., hospital admissions per patient-years and peritonitis episodes per patient-years) were analyzed by using Poisson regression techniques for count data. Corrections for overdispersed Poisson counts were incorporated as necessary, including the use of γ-Poisson regression (or negative binomial regression) for peritonitis rates (35,36).
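A minimal sketch of such rate models, assuming hypothetical variable names and using statsmodels, follows; the negative binomial family with a fixed dispersion parameter stands in for the gamma-Poisson model, and in practice the dispersion would be estimated from the data.

```python
# Illustrative Poisson and negative binomial (gamma-Poisson) rate regressions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("ademex_events.csv")   # hypothetical: one row per patient
# log(exposure) enters as an offset so that coefficients compare event rates;
# "treated" is a 0/1 indicator for the intervention group.
poisson = smf.glm("admissions ~ treated + age + diabetes + albumin + prior_dialysis_mo",
                  data=df, family=sm.families.Poisson(),
                  offset=np.log(df["years_at_risk"])).fit()
print(np.exp(poisson.params["treated"]))   # adjusted admission-rate ratio

# Overdispersed counts (peritonitis episodes): negative binomial family
# (dispersion fixed at the statsmodels default here; it would be tuned in practice).
negbin = smf.glm("peritonitis_episodes ~ treated + age + diabetes + albumin",
                 data=df, family=sm.families.NegativeBinomial(),
                 offset=np.log(df["years_at_risk"])).fit()
print(negbin.summary())
```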
Linear models for repeated measures, including ANOVA and linear mixed-effects models, were used to analyze continuous measurements recorded repeatedly with time (37,38). Both normal theory likelihood methods and semiparametric generalized estimating equations were applied, using regression procedures included in the SAS software system (39). Specifically, a repeated-measures ANOVA, incorporating the effects of time, treatment, and their interaction, was performed with generalized estimating equations. With this model, least-squares means at each time point were computed and compared for the intervention and control groups, and a test was performed to determine whether there was any significant treatment-time interaction. In the absence of such an interaction, overall least-squares means across time were computed and compared for the two treatment groups, using a main-effects ANOVA model. As a precaution against potential bias resulting from nonignorable missing data attributable to patient withdrawal, a conditional mixed-effects regression analysis was performed in which the outcome variable was modeled as a quadratic function of time, conditional on patient withdrawal times (37,38). This model was used to assess whether there were any linear or quadratic trends with time and whether such trends were the same for the intervention and control groups when data were adjusted for patient withdrawal times. Using this model, time-averaged values for the intervention and control groups were also computed and compared on the basis of least-squares means evaluated at the average withdrawal times for the two groups. For both analyses, an exchangeable correlation structure was assumed to account for correlation across observations recorded for the same patient. However, to safeguard against possible misspecification of this assumed correlation structure, all comparisons were performed by using a robust estimate of the standard errors.
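The generalized estimating equations analysis can be illustrated as follows; the long-format data set and variable names are assumptions, and the exchangeable working correlation with robust (sandwich) standard errors mirrors the choices described above.

```python
# Illustrative GEE fit for a repeatedly measured laboratory outcome (assumed names).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

long_df = pd.read_csv("ademex_labs_long.csv")   # hypothetical: one row per patient-visit
model = smf.gee("albumin ~ treated * visit_month",      # treatment, time, and interaction
                groups="patient_id", data=long_df,
                cov_struct=sm.cov_struct.Exchangeable(),  # exchangeable working correlation
                family=sm.families.Gaussian())
result = model.fit()   # statsmodels GEE reports robust (sandwich) standard errors by default
print(result.summary())   # includes the test of the treated:visit_month interaction
```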
For discrete variables, Pearson’s χ2 test and Fisher’s exact test were used to compare baseline characteristics (e.g., gender and diabetic status) for patients in the intervention and control groups. The t test and Wilcoxon’s rank-sum test were used to compare baseline differences between the intervention and control groups for continuous measurements (e.g., baseline serum albumin levels and nPNA values).
Finally, a single interim ITT analysis was performed approximately 1 yr after the enrollment phase of the study. The method of O’Brien and Fleming (40) was used to compute the interim P value necessary to terminate the study in favor of the intervention group, demonstrating significantly better patient survival.
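For illustration, the sketch below computes an O'Brien-Fleming-type interim boundary using the Lan-DeMets alpha-spending approach for a single interim look; the information fraction of 0.5 is an assumption rather than a value reported here, and the study itself applied the method of O'Brien and Fleming (40).

```python
# Illustrative O'Brien-Fleming-type boundary for one interim look (Lan-DeMets spending).
from scipy.stats import norm

def obf_interim_boundary(info_fraction: float, alpha: float = 0.05):
    """First-look boundary under the O'Brien-Fleming-type alpha-spending function."""
    z_alpha = norm.ppf(1.0 - alpha / 2.0)
    spent = 2.0 * (1.0 - norm.cdf(z_alpha / info_fraction ** 0.5))  # cumulative alpha spent
    z_boundary = norm.ppf(1.0 - spent / 2.0)   # equals z_alpha / sqrt(t) at the first look
    return z_boundary, spent

z1, a1 = obf_interim_boundary(0.5)   # assumed information fraction of 0.5
print(f"interim z boundary ~ {z1:.3f}, nominal two-sided p ~ {a1:.4f}")
# ~2.77 and ~0.0056: the interim result must be very extreme to stop early,
# preserving nearly all of the overall 0.05 for the final analysis.
```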
Results
Baseline Characteristics of the Patients
Between June 1998 and May 1999, 965 patients undergoing CAPD were enrolled in the study and were randomized into the control or intervention group. Baseline clinical and laboratory parameters are summarized in Tables 1 and 2. The study groups were similar with respect to demographic characteristics, causes of renal disease, prevalence of coexisting conditions, residual renal function, peritoneal clearances before intervention, hematocrit values, and multiple indicators of nutritional status. Total (renal plus peritoneal) CrCl values and total Kt/V values were similar for the control and intervention groups (total CrCl, 61.8 ± 26.3 versus 59.8 ± 20.2 L/wk per 1.73 m2; total Kt/V, 1.95 ± 0.67 versus 1.93 ± 0.57; mean ± SD; P = NS). Less than 40% of the patients in either group reached or exceeded a total CrCl of 60 L/wk per 1.73 m2 or a total Kt/V of 2.0. The incidences of preexisting ischemic heart disease (control, 4.3%; intervention, 3.1%) and stroke (control, 1.7%; intervention, 1.5%) were similar for the two groups. These findings illustrate the success of the randomization procedure.
Table 1. Baseline clinical characteristics for the two study groups
Table 2. Baseline laboratory measurements for the two study groups
Assessment of Intervention Effects
Patients in the control group continued to receive four daily exchanges of 2 L for the duration of the study. In the intervention group, 64% of the patients were assigned four daily exchanges of 2.5 L and 36% of the patients received four daily exchanges of 3 L at the time of the first prescription. Additional changes were made with the second prescription, with 85 patients (22%) in the intervention group being assigned a fifth daily exchange (with the aid of a nighttime exchange device). In the intervention group, the total prescribed daily dialysate volume was 10 L for 37% of the patients, 11 L for 20%, 12 L for 21%, 12.5 L for 8%, and 15 L for 14%.
Among the patients in the control group, pCrCl and peritoneal urea clearance (pKt/V) values remained constant, at near-baseline levels, for the duration of the study (Figures 1 and 2). In the intervention group, pCrCl and pKt/V values predictably increased and remained separated from the measurements for the control group for the entire duration of the study (P < 0.01). In the intervention group, 59% of the patients achieved a pCrCl of ≥60 L/wk per 1.73 m2, and 78% reached a total CrCl at or above this level. A slightly higher percentage of patients in the intervention group (83%) reached or exceeded a total Kt/V of 2.0.
Figure 1. Time courses of peritoneal creatinine clearance (pCrCl) values for the control and intervention groups during the study. The groups became significantly separated soon after randomization and remained distinct during the follow-up period. Values shown are means and 95% confidence limits for the means.
Figure 2. Time courses of peritoneal Kt/V (pKt/V) values for the control and intervention groups during the study. The groups became significantly separated soon after randomization and remained distinct during the follow-up period. Values shown are means and 95% confidence limits for the means.
To further examine the separation in achieved peritoneal clearances between the two groups, we compared the tertile-defining values for pCrCl and pKt/V in the two groups. We computed the average postrandomization pKt/V and pCrCl values for each patient in the intervention and control groups. Using these values, we then computed the corresponding tertiles. The 33rd and 67th percentile pCrCl values were as follows: control, 42.5 and 49.1 L/wk per 1.73 m2; intervention, 53.4 and 60.5 L/wk per 1.73 m2, respectively. For pKt/V, the 33rd and 67th percentile values were as follows: control, 1.45 and 1.74; intervention, 1.94 and 2.24, respectively. The clear separation between the two groups is illustrated by the fact that 67% of the patients in the control group exhibited an average pKt/V value of <1.74, whereas 67% of the patients in the intervention group exhibited an average pKt/V value of >1.94.
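A brief sketch of this tertile computation, with assumed column names, is shown below.

```python
# Illustrative tertile computation: average each patient's postrandomization
# clearances, then take the 33rd and 67th percentiles within each group.
import pandas as pd

visits = pd.read_csv("ademex_clearances_long.csv")   # hypothetical: one row per visit
per_patient = (visits.groupby(["group", "patient_id"])[["pkt_v", "pcrcl"]]
                     .mean()
                     .reset_index())
tertiles = per_patient.groupby("group")[["pkt_v", "pcrcl"]].quantile([1 / 3, 2 / 3])
print(tertiles)   # cf. reported pKt/V cut points: control 1.45/1.74, intervention 1.94/2.24
```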
Although it is unlikely that a systematic bias would occur uniformly and consistently in a randomized trial of this size, measures of compliance were assessed to validate the sustained separation of the two groups in terms of peritoneal small-solute clearances. The numbers of missed exchanges per year per patient were similar for the two groups (control, 15.1 exchanges/yr per patient; intervention, 18.6 exchanges/yr per patient; P = NS), as indicated by the consumption of dialysis solutions. We could not ascertain compliance with the regimen (timing of exchanges and length of dwell), but the intended increases in clearance were consistently achieved during the study. Serum creatinine and blood urea nitrogen levels were consistently lower for the intervention group throughout the duration of the study, whereas the urea generation rates did not differ between the groups (see below).
The profiles of residual renal function during the study were also similar for the two groups. The use of larger fill volumes and particularly the use of the nighttime exchange device for 85 patients in the intervention group resulted in enhanced peritoneal ultrafiltration, compared with the control group.
Primary Outcomes
Despite differences in small-solute clearances, patient survival was similar for the control and intervention groups, as indicated by ITT analysis (Figure 3), with an RR of death (intervention/control) of 1.00 [95% confidence interval (CI), 0.80 to 1.24]. This was also reflected in the similarity of the 6-mo-interval mortality rates computed by Poisson regression analysis. The time-dependent RRs (intervention/control) established by Poisson regression analysis for the consecutive 6-mo intervals were as follows: 0 to 6 mo, RR = 1.17 (P = 0.84); 6 to 12 mo, RR = 1.07 (P = 0.91); 12 to 18 mo, RR = 1.07 (P = 0.93); 18 to 24 mo, RR = 0.73 (P = 0.68); 24 to 30 mo, RR = 1.06 (P = 0.96); 30 to 36 mo, RR = 0.93 (P = 0.98). Overall, the control group exhibited a 1-yr survival of 85.5% (CI, 82.2 to 88.7%) and a 2-yr survival of 68.3% (CI, 64.2 to 72.9%). Similar values were observed for the intervention group, with a 1-yr survival of 83.9% (CI, 80.6 to 87.2%) and a 2-yr survival of 69.3% (CI, 65.1 to 73.6%). The as-treated analysis revealed results similar to those obtained with the ITT analysis (overall RR = 0.93; CI, 0.71 to 1.22; P = 0.6121).
Figure 3. Life-table intent-to-treat (ITT) analysis of patient survival, comparing the study groups. The P value was 0.9842 (log-rank test). RR, relative risk; CI, confidence interval.
In additional ITT analyses, mortality rates for the two groups remained similar when patients within each group were stratified according to a variety of measures known to be associated with patient survival (e.g., age, diabetes mellitus, serum albumin levels, nPNA values, and anuria). Age had a significant effect on outcomes for both groups. Patients <50 yr of age exhibited significantly better survival than did those ≥50 yr of age (P < 0.0001) (Figure 4). In both age strata, however, there was no difference in survival between the control and intervention groups. As expected, diabetic status negatively affected survival (P < 0.0001). However, within the diabetic or nondiabetic stratum, no effect of the intervention on survival could be discerned (Figure 5). Serum albumin levels had an effect on survival (P < 0.0001), with higher values conferring a survival advantage (Figure 6). Within the strata for this measure, however, survival was similar for the control and intervention groups. Life-table ITT analyses of patient survival stratified according to nPNA values (≥0.8 g/kg per d or <0.8 g/kg per d) and study group revealed a significant nPNA effect (P = 0.0001) but no significant overall treatment effect (P = 0.8514). Similarly, there was no significant difference in patient survival between the intervention and control groups when the two were compared for the subset of functionally anephric patients (GFR of <1 ml/min).
Figure 4. Life-table ITT analysis of patient survival stratified according to age and study group. The age effect was significant at P = 0.0001. The overall treatment effect was NS (P = 0.5146).
Figure 5. Life-table ITT analysis of patient survival stratified according to diabetic status and study group. The diabetic status effect was significant at P = 0.0001. The overall treatment effect was NS (P = 0.7797). DM, diabetes mellitus.
Figure 6. Life-table ITT analysis of patient survival stratified according to serum albumin levels (S. Alb) and study group. The albumin effect was significant at P = 0.0001. The overall treatment effect was NS (P = 0.5204).
A variety of factors were observed to have no independent effects on survival, and no differences in survival between the intervention and control groups were discernible in subsets stratified according to these measures, including case type (incident versus prevalent), gender, and baseline peritoneal transport characteristics (characterized with the dialysis adequacy and transport test into the four standard classifications of high, high average, low average, and low transporters). The relationship between body size and survival was examined in great detail. Three indices of body size (stratified by tertiles) were examined, i.e., BSA, total body water (estimated by using Watson’s formula [3,15]), and body mass index. No differences in survival between the intervention and control groups were discernible in subsets stratified according to any of these measures of body size. Additionally, no center effect on the results was discerned.
To further examine the potential interactions between total body water and Kt/V, we examined the RR of death on the basis of total body water tertiles and pKt/V quintiles, using the middle total body water tertile and the middle pKt/V quintile as references and ignoring group assignments. This analysis was performed with a time-dependent Cox regression analysis adjusted for gender, age, diabetes mellitus, baseline albumin levels, and GFR. As demonstrated in Figure 7, we observed no evidence of a size effect or a dose effect within the range of values studied.
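A sketch of such a time-dependent Cox model, using lifelines' CoxTimeVaryingFitter on an assumed long-format (start, stop] data set with indicator variables for the pKt/V quintiles and total body water tertiles, is shown below; the variable names and coding are illustrative assumptions.

```python
# Illustrative time-dependent Cox regression in the spirit of Figure 7 (assumed names).
import pandas as pd
from lifelines import CoxTimeVaryingFitter

long_df = pd.read_csv("ademex_tv.csv")  # hypothetical: one row per patient-interval,
# with indicator columns for pKt/V quintiles (q1, q2, q4, q5; q3 is the reference)
# and baseline TBW tertiles (t1, t3; t2 is the reference), plus adjustment covariates.
covariates = ["pktv_q1", "pktv_q2", "pktv_q4", "pktv_q5",
              "tbw_t1", "tbw_t3",
              "female", "age", "diabetes", "baseline_albumin", "gfr"]

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df[["patient_id", "start", "stop", "died"] + covariates],
        id_col="patient_id", start_col="start", stop_col="stop", event_col="died")
ctv.print_summary()   # exp(coef) estimates the RR for each quintile/tertile vs. the reference
```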
Figure 7. Results of time-dependent Cox regression analysis adjusted for gender, age, diabetes mellitus, baseline albumin levels, and GFR. RR and 95% CI are plotted according to time-dependent pKt/V values (divided into quintiles) and baseline total body water (TBW) values (divided into tertiles).
There were a total of 157 deaths in the control group and 159 deaths in the intervention group. Acute myocardial infarction was the most common cause of death in both groups (control, 22.4%; intervention, 27.8%; P = NS). Greater proportions of patients in the control group died as a result of congestive heart failure (13.4% versus 5.7% in the intervention group, P < 0.05) or a combination of uremia/hyperkalemia/acidosis (12.2% versus 5.1% in the intervention group, P < 0.05). Generalized infections, strokes, and peritonitis were equally common as causes of death in the two groups.
Multivariate Cox regression analysis in this trial demonstrated that several factors were powerful predictors of outcomes for the study population as a whole. Listed in Table 3 are the results of a Cox regression analysis performed by ignoring the effect of the treatment group. The Cox regression model was chosen to mirror, as closely as possible, earlier analyses performed with the Canada-United States (CANUSA) study data, in which the effects of peritoneal and renal clearances on patient survival were assessed independently of each other (47). With the exception of subjective global assessments, the model summarized in Table 3 includes the factors identified in the CANUSA study as being significantly associated with patient survival. Age, diabetes mellitus, serum albumin levels, residual renal function, and, to a lesser extent, nPNA values were all identified as significant factors associated with patient survival.
Table 3. Predictors of outcomes for the study population as a whole, by multivariate Cox regression analysis
Secondary Outcomes
The results of selected laboratory measures averaged across the study period are summarized in Table 4. The two groups were similar with respect to most of the measures. pCrCl and pKt/V were different by design, and the values for the two groups remained significantly separated. Serum albumin levels averaged for the duration of the study were slightly higher for the intervention group. This finding was thought to be attributable to the slightly higher baseline value for the intervention group, because the two groups were similar when changes in serum albumin levels from baseline values were considered. A minimal (approximately 100 ml/d) but statistically significant increase in peritoneal ultrafiltration was observed in the intervention group. Similar results were obtained with a conditional linear mixed-effects model, which takes into account the possible effects of nonignorable missing data attributable to patient withdrawal.
Table 4. Selected clinical and laboratory measures averaged across the study duration
Although overall withdrawal rates and technique survival were similar for the two groups, the specific causes of withdrawal differed. More patients in the control group withdrew from the study because of uremia [24 patients (5%) in the control group versus no patients in the intervention group, P < 0.0001], and more patients in the intervention group withdrew from the study because of peritoneal exchange volume-related discomfort [17 patients (3.5%) in the intervention group versus one patient (0.2%) in the control group, P < 0.001]. Similar numbers of patients received transplants in the two groups [control, 37 patients (7.6%); intervention, 26 patients (5.4%); P = NS] or were lost to follow-up monitoring [control, 48 patients (9.9%); intervention, 42 patients (8.7%); P = NS]. Technique survival, as determined by life-table analysis, was not affected by treatment group, diabetic status, or baseline peritoneal transport characteristics (as assessed with the dialysis adequacy and transport test).
Hospitalization rates were similar for the two groups, in both unadjusted numbers of admissions per patient per year and rates adjusted for age, gender, diabetic status, serum albumin concentration, and previous time on dialysis; only the adjusted analysis is described here (control, 1.03 admissions/patient per yr; intervention, 1.17 admissions/patient per yr; P = 0.166). The adjusted numbers of hospital days were also similar for the two groups (control, 6.8 d/patient per yr; intervention, 7.2 d/patient per yr; P = 0.593), as were the adjusted peritonitis rates (control, 24.4 patient-mo/episode; intervention, 23.3 patient-mo/episode; P = 0.622). Similarly, no differences in adjusted exit site infection rates were observed between the two groups (control, 64.9 patient-mo/episode; intervention, 51.9 patient-mo/episode; P = 0.326).
Discussion
This study provides evidence that variations in peritoneal small-solute clearances within the range studied have a neutral effect on patient survival, both for the groups overall and for the groups stratified according to a variety of measures (e.g., age, diabetes mellitus, serum albumin concentrations, nPNA values, and anuria) known to influence survival. These results were obtained with a rigorous experimental design that ensured proper baseline randomization of the two groups. Furthermore, the intervention group was separated from the control group by significant increases in peritoneal small-solute clearances, which were maintained for the duration of the study. Therefore, the goals of comparing two groups that were identical at baseline and were separated on the basis of distinct peritoneal small-solute clearances during the study were achieved.
Although surveys of small-solute adequacy measures have demonstrated progressive improvements in past years in the numbers of patients achieving higher target clearances, studies continue to identify significant proportions of patients below these proposed targets (25–27). The United States core indicator study of 1997 noted that only 47% of patients met or exceeded the Dialysis Outcomes Quality Initiative (DOQI) CrCl target and 56% met the Kt/V target. Furthermore, 30 to 50% of the patients who met the DOQI guidelines for Kt/V and CrCl values did so only with the contribution of residual renal function (25,26). These findings were observed despite the frequent use of a fifth exchange in CAPD (23%) and a mid-day exchange in automated PD (APD) (37%). In addition, a large percentage of those patients were using dwell volumes of >2 L (38% of those undergoing CAPD and 42% of those undergoing APD). In the 2000 ESRD Clinical Performance Measures report, 65% of CAPD patients and 60% of APD patients met the weekly DOQI Kt/V goal and 61% of CAPD patients and 51% of APD patients met the CrCl goal (27). Greater proportions of patients in the intervention group in our trial exceeded these measures; 83% (versus 65% in the United States study) exceeded the Kt/V guideline and 78% (versus 61% in the United States study) exceeded the CrCl guideline. The corresponding proportions in our control group were similar to the values observed in the United States in 1997. The two groups in our study encompass the spectrum of clinical experience and bracket the ranges of small-solute clearances achieved in North America before the introduction of the DOQI PD adequacy guidelines and after their widespread adoption in the United States (3,15).
The neutral effect of peritoneal small-solute clearance enhancement on patient survival may appear counterintuitive and discordant with the standard view on PD adequacy that underlies the clinical guidelines in use. It is obvious that, in the absence of any peritoneal clearance, these patients would ultimately have died as a result of their terminal renal failure. The issue we have examined in our study, however, is whether variations within the range of clearances achievable in clinical practice have effects on patient survival. Our findings suggest that the range of small-solute clearances used in the study, reflecting current clinical practice, may represent a “plateau” on the curve relating clearance dose and mortality rates. Once this plateau has been reached, further increases in peritoneal small-solute clearances would not be expected to materially affect outcomes, unless the increases reach a theoretical next break point in the relationship between the two parameters.
A closer examination of the pertinent literature indicates that our findings are consistent with a wealth of studies that also demonstrate a neutral effect of the ranges of peritoneal clearances achievable in current practice. These studies are summarized in Table 5.
Table 5. Summary of pertinent literature
The CANUSA trial has played a pivotal role in influencing the definition of small-solute clearance adequacy, particularly in North America (1,14). The trial was originally interpreted as indicating that increases in total solute clearances would result in improved survival (14). Because the renal component of small-solute clearance decreases with time, the assumption was made that, if small-solute clearance could be increased via enhanced peritoneal contributions, then improved outcomes would be observed. Central to this hypothesis was the assumption that renal and peritoneal clearances are equivalent in influencing outcomes. What the study assumed, on the basis of model projections, was that maintenance of constant higher levels of small-solute clearance (mostly contributed by residual renal function) should theoretically result in better survival (14). However, because the CANUSA trial was not a controlled interventional trial, it did not actually test whether maintenance of high small-solute clearances, via increases in peritoneal contributions, affected survival rates (14).
The dominant role of renal clearance in the findings of the CANUSA trial was recognized by the principal investigators. Subsequent analyses and reports by the group have repeatedly emphasized the primary importance of renal clearance (28).
A limitation of the observational studies listed in Table 5 that has been noted as a possible reason for the lack of correlation between peritoneal clearances and survival is the narrow range of peritoneal clearance values observed in those studies. This limitation does not apply to our study, which, by design, involved a wide range of peritoneal clearances within the population studied. Our ability to achieve a wide range of peritoneal small-solute clearances and still demonstrate no relationship between peritoneal small-solute clearances and survival strongly suggests that the ranges of peritoneal clearances observed in usual practice indeed have a neutral effect on survival.
A study of 140 anuric Chinese patients (13) did note a positive correlation between peritoneal clearances and survival, suggesting that the effect of peritoneal clearance may become more apparent in the absence of residual renal function. In that study, 42.1% of patients were receiving 3 × 2 L exchanges, 45.0% were receiving 4 × 2 L exchanges, and 12.9% were receiving 10 L/d (13). The effect of peritoneal clearance was observed at prescription levels below those used for the control group in our study. We therefore cannot address the issue of whether increases in peritoneal clearances from levels lower than those observed for our control group could affect survival. A study of anuric patients in Canada, however, has not demonstrated an effect of peritoneal small-solute clearances on survival (41).
It is important to note the limitations of some of the studies that are commonly considered to support the role of peritoneal small-solute clearances in determining survival (3). Many of those studies did not separate renal clearance from peritoneal clearance (19–23). Furthermore, they were all retrospective (19–23), with the majority consisting of <100 total patients/study (19–23) and some including <20 patients (21). Moreover, careful reading of those reports indicates that their authors were cognizant of the effects of confounding factors influencing the results and they were not as definitive in their conclusions as they are frequently quoted as being.
Our findings are thus consistent with the existing body of knowledge in the field and seem counterintuitive only in comparison with interpretations influenced by preconceived notions, rather than a thorough and objective examination of the evidence. The correlations between small-solute clearances and survival noted in previous studies reflect almost exclusively the contribution of residual renal function. Such findings are also being noted for hemodialysis (43–45).
In agreement with other studies, ADEMEX has confirmed a set of clinical and laboratory predictors of outcomes (1,2,11,28,46). These include age, diabetes mellitus, albumin concentrations, and residual renal function.
The representativeness of our clinical trial population is an important issue for the assessment of its relevance. Although the full concordance of our results with the existing body of knowledge derived from other studies and populations supports the overall relevance of our study, the issue deserves to be examined further. The clinical and laboratory profiles for our trial population are comparable to those for various dialysis populations around the world (25–27,47,48). Furthermore, we have documented that, within the trial population, measures that are predictive of outcomes (e.g., age, diabetes mellitus, and albumin concentrations) operate in a manner indistinguishable from that observed in studies of other populations.
The similarities in survival between the control and intervention groups persisted when both groups were stratified according to body size, to determine whether a survival advantage of the intervention could be demonstrated for larger patients. No such survival advantage was observed with the range of sizes included in this study. The lack of association between body size (expressed as total body water) and outcomes, when examined according to different Kt/V quintiles, is at variance with observations for patients undergoing hemodialysis. This may be attributable to the nature of the therapies examined (continuous versus intermittent) or the populations in which these associations have been explored. Associations among body size, observed clearances, and outcomes must be examined at two levels, i.e., between the control and intervention groups and within each of these two groups. It is clear from our data that the two groups achieved a clear separation of clearances with similar body sizes (Tables 1 and 4), and this separation in clearances did not result in a difference in outcomes, overall or at any level of body size stratification. Two potential confounding effects must be considered in examinations of the associations of body size, clearances, and survival within each group. First, in the intervention group, clearance was adjusted to body size by design, thus narrowing the range of observed normalized clearances to defined therapeutic targets. Second, the inclusion criterion of a pCrCl value of <60 L/wk per 1.73 m2 with a standard 4 × 2 L regimen would exclude patients with very small body size, who would have achieved a high normalized clearance. The latter effect would also result in a narrowing of the range of clearances observed within the control group. These limitations on within-group analyses should not, however, detract from the clear observation of no differences in survival between groups at similar body sizes. It remains to be determined, however, whether the effect of increases in small-solute clearances above the minimum obtained with a prescription of four exchanges of 2 L would also be neutral in a population with a dominant body size distribution significantly different than that evaluated in this study or with a protein intake greater than that observed.
An issue common to clinical trials in general is their relevance to the groups of patients who were excluded from the trial by design. In this trial, we excluded patients who could be broadly categorized as being at risk of imminent death (e.g., those with active malignancies, HIV, or evident cardiac failure). These exclusions, however, do not prevent our trial from being representative of the broader PD population. The outcomes for PD patients in the control group were comparable to those achieved in the rest of North America and were also similar to those achieved with hemodialysis (47,48). This suggests that reliance on PD as the dominant modality for the majority of patients in Mexico does not place these patients at any survival disadvantage.
The increase in peritoneal clearance in this trial had a neutral effect on technique survival for the population as a whole. However, some patients in the control group switched to hemodialysis because of uremic symptoms; for those patients, an increase in clearance might have prevented withdrawal. When a composite outcome measure of death and withdrawal because of uremia was used, however, no difference in survival between the control and intervention groups was detected.
The increase in peritoneal clearance had a neutral effect on a variety of secondary parameters examined during the study. It is important to note, however, that this neutral effect was observed under conditions in which no interventions related to these factors were performed. In the case of anemia, for example, the neutral effect on red cell indices was observed among patients not receiving exogenous erythropoietin treatment and does not preclude an effect of increased small-solute clearance on erythropoietin responses when exogenous hormone is administered. Similarly, no nutritional interventions were undertaken in this study, and it is reasonable to consider that increases in small-solute clearances would be required to satisfactorily control the metabolic consequences of increased protein intake (e.g., higher urea generation and increased acid and phosphate loads).
The dialytic needs of individual patients cannot be determined solely from modality outcome studies. Although our results indicate that survival is not altered by enhancement of peritoneal small-solute clearances, the welfare of individual patients may well be better served with higher clearances. Although not demonstrating a survival advantage, the results of this study should not be interpreted as indicating that no clearance enhancement is required for any patient. The neutral findings of our study should not encourage a sense of complacency regarding PD prescriptions but should focus attention on the adequacy of dialysis care, rather than on attaining a target level of small-solute clearance. It would be unfortunate if the results of our trial were unilaterally interpreted to eliminate the need to pay close attention to peritoneal clearances or, worse, to reduce prescriptions below the levels we studied, with potential underdialysis of patients. Our findings should elicit further debate and study regarding the overall definition of adequacy and whether reliance on small-solute clearances should be reexamined.
Finally, the neutral effect of small-solute clearances on survival was established within the range of clearances achievable with current dialysis technologies, beginning with a minimal standardized dose. Our results do not exclude the possibility that much higher clearances, achievable with future technologies, might elicit a survival advantage.
Our results demonstrated that variations in peritoneal small-solute clearances with current prescription patterns, applied in the absence of a discrete clinical goal, did not lead to improved survival. A corollary is that, when such variations in clearance are used to address specific clinical needs (such as uncontrolled metabolic consequences of renal failure), such interventions may result in clearly improved outcomes. With respect to this point, there were many fewer cases of death or withdrawal resulting from poor metabolic control in the intervention group, compared with the control group.
In summary, ADEMEX is a unique and provocative study in the field of PD. ADEMEX is the only large-scale, prospective, randomized, interventional study to examine the effects of enhancement of peritoneal clearances on patient outcomes. The findings of ADEMEX are consistent with the current body of knowledge regarding predictors of outcomes for large patient groups and are in agreement with the observations of other studies on the neutral role of peritoneal clearances in determining patient outcomes at the group level. These findings suggest that the survival benefit of PD is obtained within a range of clearances achievable in usual practice. The inability of a patient to achieve the target clearances defined by current clinical guidelines should not disqualify the patient from continuing to undergo PD if other aspects of patient care are satisfactorily addressed by PD (e.g., an absence of uremic symptoms and adequate fluid control). Finally, our results suggest that further research is required to assess factors other than small-solute clearances and to determine their effects on survival.
Appendix
The following institutions and investigators participated in the study: steering committee: Ramon Paniagua, M.D., Dante Amato, M.D., Ricardo Correa-Rotter, M.D., Alfonso Ramos, M.D., Edward Vonesh, Ph.D., John Moran, M.D., and Salim Mujais, M.D.; safety committee: Ramón Paniagua, M.D., Dante Amato, M.D., Aurora Maravilla, M.D., and Ma. de Jesús Ventura; coordination and data management: Ramon Paniagua, M.D., Dante Amato, M.D., Ma. de Jesús Ventura, and Alejandro B. Hinojosa-Rojas; clinical monitors: Ma. de Jesús Ventura, Olga Lozano, Clara Madonia, Silvia Garnica Salazar, and Inés Vaquera-Rodríguez; central laboratory: Ramón Paniagua, M.D., Ernesto Rodríguez, Marcela Ávila-Díaz, and Raquel Becerril-Becerril; advisory committee: Peter G. Blake, M.D., John M. Burkart, M.D., Bengt Lindholm, M.D., Karl D. Nolph, M.D., and Robert E. Wolfe, Ph.D.; managing committee: Laura Trujillo, Magdalena Nemer, and Angel Rodriguez; participating centers and investigators: Ricardo Correa-Rotter, M.D., Mario Cortés-Pérez, Eduardo Quintana-Piña, Josefina Loredo-Alonso, and Yolanda Martínez-Cerca (Department of Nephrology and Mineral Metabolism, Instituto Nacional de Ciencias Médicas y Nutrición Salvador Zubirán, Mexico City); Imelda Hernández-Reyes, M.D., and Ma. Estela Zepeda-Covarrubias (Hospital General 89, Instituto Mexicano del Seguro Social, Guadalajara, Jalisco); Enrique Hernández-Maldonado, M.D., and Magaly Cevallos-Fernández (Hospital de Especialidades, Centro Médico Nacional Ignacio García Téllez, Instituto Mexicano del Seguro Social, Mérida, Yucatán); Marcos Martínez-García, M.D., and Alicia Herrera-Molina (Hospital General 11, Instituto Mexicano del Seguro Social, Jalapa, Veracruz); Guillermo Valadez-Juvera, M.D., and Alma Leticia Valenzuela (Hospital de Especialidades 1, Centro Médico Nacional del Noroeste, Instituto Mexicano del Seguro Social, Ciudad Obregón, Sonora); José Alejandro García-Larumbe, M.D., and Bertha Espinoza-Pérez (Hospital General 30, Instituto Mexicano del Seguro Social, Mexicali, Baja California); José Luis Medina-Gómez, M.D., and Celestina Jiménez-Ronquillo (Hospital de Especialidades, Centro Médico Nacional Manuel Avila Camacho, Instituto Mexicano del Seguro Social, Puebla, Puebla); Eduardo Aguilar, M.D., and Javier Ortiz, Elizabeth Almanza-Medina (Hospital General 1, Instituto Mexicano del Seguro Social, Zacatecas, Zacatecas); José Martínez-Bárcenas, M.D., and Eloisa López-Pinales (Hospital General 33, Instituto Mexicano del Seguro Social, Monterrey, Nuevo León); Ma. Elena Hurtado-González, M.D., and Guadalupe Alcántara-Ortega (Hospital General 25, Instituto Mexicano del Seguro Social, Mexico City); Alejandra Cisneros-García, M.D., Juan Guillermo Oros, M.D., and Josefina Luna-Tapia (Hospital General 27, Instituto Mexicano del Seguro Social, Mexico City); Jorge Prieto-Fierro, M.D., and Leonor Abularach-Nevares (Hospital de Especialidades, Centro Médico Nacional, Instituto Mexicano del Seguro Social, Torreón, Coahuila); Abraham Sanjurjo-Gallardo, M.D., Dolores Gómez-Noriega, M.D., and Ma. de Lourdes Barrón-Andrade (Hospital General 32, Instituto Mexicano del Seguro Social, Mexico City); Mario Enríquez-Rivas, M.D., and Ma. Jesús Galaz-Valdez (Hospital General 2, Instituto Mexicano del Seguro Social, Hermosillo, Sonora); Cristina Avilés-Hernández, M.D., and Ma. de la Cruz Pérez-Hernández (Hospital de Especialidades, Centro Médico Nacional la Raza, Instituto Mexicano del Seguro Social, Mexico City); Gerardo Rodríguez del Villar, M.D., and Ma. 
Elena Ríos-Estrada (Hospital General 1, Instituto Mexicano del Seguro Social, Durango, Durango); Roberto Baca-Baca, M.D., and Ma. Graciela Pacheco-Hernández (Hospital General 1, Instituto Mexicano del Seguro Social, Querétaro, Querétaro); Ma. Juana Sil-Acosta, M.D., and Ma. Teresa Galindo-Salinas (Hospital General 1 Gabriel Mancera, Instituto Mexicano del Seguro Social, Mexico City); José Amado Chimalpopoca, M.D., and Ma. Isabel González-Yañez (Hospital General 8, Instituto Mexicano del Seguro Social, Mexico City); Francisco Belio-Caro, M.D., and Olivia Figueroa-Rosas (Hospital General 1, Instituto Mexicano del Seguro Social, Morelia, Michoacán); Julio Kaji-Kiyono, M.D., Lucía Isabel Aranda-Contreras, and Virginia Becerril-Morales (Hospital General Regional 1° de Octubre, Instituto de Seguridad y Servicios Sociales de los Trabajadores del Estado, Mexico City); Hugo Breien-Alcaraz, M.D., and Ana Gabriela Ruelas-Peralta (Hospital General Valentín Gómez Farías, Instituto de Seguridad y Servicios Sociales de los Trabajadores del Estado, Guadalajara, Jalisco); Juan Alberto Aguilar-Martínez, M.D., and Leticia Martínez-Guzmán (Hospital General Gaudencio González Garza, Centro Médico Nacional la Raza, Instituto Mexicano del Seguro Social, Mexico City); and Francisco Monteón, M.D., José Luis Camarena, M.D., and Alma Rosa Martínez (Hospital de Especialidades, Centro Médico Nacional de Occidente, Instituto Mexicano del Seguro Social, Guadalajara, Jalisco).
Acknowledgments
This study was supported by grants from Baxter Healthcare Corp. (Deerfield, IL) and Baxter S.A. de C.V. (Mexico City, Mexico). We are grateful to the following individuals for critical review of the manuscript and support during its preparation: Sally Benjamin Young, Jill Schaaf, Robert Holt, Amy Guo, Lisa Scheff, and Eric Noshay.
© 2002 American Society of Nephrology