Those who treat patients with myelodysplastic syndromes (MDS) have been forced to become comfortable with a rather uncomfortable truth. MDS is a bone marrow failure syndrome that represents the most commonly diagnosed myeloid malignancy and predominantly affects older adults, with a median age at diagnosis of 71 years.1,2 The only cure for MDS is hematopoietic stem-cell transplantation (HSCT). For a variety of reasons, including patient comorbidities, availability of related or matched donors, related donor comorbidities, physician and patient preference, and treatment-related adverse events, transplantation is considered in only approximately 5% of patients with MDS.2 Thus, even when we offer disease-modifying therapies such as azacitidine, decitabine, and lenalidomide, we are ultimately palliating 95% of our patients.3–6 Despite this, patients often perceive these drugs as having curative potential in this setting, although cure is unfortunately not possible with these agents.7
How do we change this paradigm? Although some factors, such as patient comorbidities and availability of donors, are largely immutable, other factors have improved, making HSCT more appealing. One such advance is reduced-intensity conditioning transplantation, which greatly reduces the toxicity of the preparative regimen without compromising efficacy, and in so doing has raised the age of potentially eligible transplantation candidates into the eighth decade.8 Another modifiable area is identifying patients for whom the risk-benefit analysis favors transplantation over managing the disease with palliative intent. This, in turn, could affect patient and physician preferences.
In the article that accompanies this editorial, Koreth et al9 report on a Markov decision analysis exploring the role of reduced-intensity allogeneic HSCT in older patients with MDS. This statistical technique relies on assumptions, themselves based on best estimates of outcome from previously published studies, to play out scenarios of what would happen in real life to a given patient if he or she decided to undergo HSCT early, at or near diagnosis, or instead to pursue supportive care, growth factor, or disease-modifying therapy. Although this approach is not perfect, it does allow for sensitivity analyses in which assumptions can be changed to see whether the same conclusion holds, and it is the best substitute available in the absence of prospective, randomized studies. This is also not the first time some of these investigators have tackled this question, or used this methodology. In 2004, Cutler et al10 published a decision analysis of patients with MDS treated with myeloablative conditioning transplantation. Given this conditioning regimen, patients were younger (with a median age of 40.4 years), and given the timing at which this analysis was conducted, a paucity of individual patient data was available to appropriately reflect nontransplantation treatment approaches. So, although the results of the study by Cutler et al make clinical sense, namely, that early transplantation provides maximal quality-adjusted survival in higher-risk patients with MDS (those falling into intermediate-2 and high-risk categories of the International Prognostic Scoring System [IPSS]), these conclusions have always given treating doctors pause because the participants did not reflect the full spectrum of patients with MDS who are seen in everyday clinical practice.
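For readers unfamiliar with the technique, the mechanics of a Markov cohort model can be sketched in a few lines of code. Everything below is an illustration only: the states, monthly transition probabilities, and quality-of-life utility weights are hypothetical and are not drawn from Koreth et al or any published study.

```python
# Hypothetical three-state Markov cohort model (illustration only).
# States: "stable" (alive on therapy), "relapse" (alive post-progression),
# and "dead" (absorbing). All numbers are invented for demonstration.
TRANSITIONS = {  # monthly transition probabilities; each row sums to 1
    "stable":  {"stable": 0.97, "relapse": 0.02, "dead": 0.01},
    "relapse": {"stable": 0.00, "relapse": 0.95, "dead": 0.05},
    "dead":    {"stable": 0.00, "relapse": 0.00, "dead": 1.00},
}
UTILITY = {"stable": 0.8, "relapse": 0.5, "dead": 0.0}  # quality weight per month

def run_model(transitions, utility, months=240):
    """Return (LE, QALE) in months over a fixed time horizon."""
    cohort = {"stable": 1.0, "relapse": 0.0, "dead": 0.0}
    le = qale = 0.0
    for _ in range(months):
        le += cohort["stable"] + cohort["relapse"]          # months alive
        qale += sum(cohort[s] * utility[s] for s in cohort)  # quality-weighted
        cohort = {  # advance the cohort one month
            dst: sum(cohort[src] * transitions[src][dst] for src in cohort)
            for dst in cohort
        }
    return le, qale

le, qale = run_model(TRANSITIONS, UTILITY)
print(f"LE ~ {le:.1f} months, QALE ~ {qale:.1f} quality-adjusted months")
```

Comparing two strategies means running one such model per strategy and comparing the resulting LE or QALE; re-running the model after perturbing a transition probability or utility weight is, in essence, the sensitivity analysis described above.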
The analysis by Koreth et al9 addresses these shortcomings. Now, given the nonmyeloablative preparative regimen, the median age of the 132 patients undergoing transplantation gleaned from the Center for International Blood and Marrow Transplant Research, Dana-Farber Cancer Institute, and Fred Hutchinson Cancer Research Center data sets is 64 years—closer to what we see in clinic. Patients who did not undergo transplantation included 132 with lower-risk disease (IPSS low and intermediate-1) receiving best supportive care; 91 anemic or transfusion-dependent patients receiving erythropoiesis-stimulating agents; and 164 higher-risk patients with MDS receiving azacitidine or decitabine. Patients being treated with lenalidomide, immunosuppressive approaches, or drug combinations were not included. Primary end points of the model were life expectancy (LE) and quality-adjusted life expectancy, in which survival is weighted by quality-of-life values derived from studies whose patients may not reflect those included in the current analysis. The authors tried to keep the assumptions in an already complicated model to a minimum, and in so doing ignored some real-life scenarios, such as a patient initially in the nontransplantation arm deciding at a later time to undergo transplantation. That being said, the results suggest that for lower-risk patients with MDS, median LE for those avoiding HSCT was approximately double that of those undergoing HSCT, at 77 versus 38 months. For higher-risk patients, a more modest advantage was seen for early HSCT, with a median LE of 36 months versus 28 months for nontransplantation approaches. Interestingly, in the Kaplan-Meier survival curve, that advantage becomes apparent only after 40 months of follow-up, when the therapy-related adverse effects of HSCT have been realized.
In a separate article accompanying this editorial, Voso et al11 report on a validation of the revised IPSS (IPSS-R) in a cohort of 380 patients with MDS who were registered in the Gruppo Romano Mielodisplasie and diagnosed over a 10-year period. The IPSS-R was developed to improve on what have been regarded as shortcomings of the classic IPSS, including an underrepresentation and relative discounting of the importance of cytogenetic abnormalities, limited sensitivity to degrees of cytopenias, and the weight given to blast percentage.12,13 The authors found that the IPSS-R predicted leukemia-free and overall survival in their population, and did so better than the classic IPSS and the WHO prognostic scoring system. This is not in itself novel—the initial publication of the IPSS-R included validation in a separate cohort from the Medical University of Vienna and demonstrated improved discriminatory capacity compared with the classic IPSS. However, this article does advance the field in showing that the IPSS-R retains its predictive abilities in a small cohort of patients treated with disease-modifying agents—a group not previously included in the development or validation of the IPSS-R. It remains to be seen whether the IPSS-R remains robust in larger cohorts of treated patients, or whether additional revisions may be required for treated patients as a group or for specific therapies. The International Working Group has already begun to address this task.
How can we apply these two publications to the next patient with MDS who walks into the clinic? In practice, the IPSS and IPSS-R are used both to predict survival and to help determine the therapeutic approach. A patient falling into lower-risk categories is much more likely to be treated with erythropoiesis-stimulating agents, lenalidomide, immunosuppressants, or supportive care, whereas a higher-risk patient should be considered for hypomethylating agents or HSCT. The article by Voso et al11 helps refine our definitions of lower and higher risk and starts to substantiate them in treated patients, whereas the article by Koreth et al9 adds further support to pursuing HSCT in higher-risk patients at presentation—as defined by the IPSS, not the IPSS-R. What remain are questions regarding the best approach for patients in the IPSS-R intermediate-risk category, who are neither lower nor higher risk, and the need to validate these approaches prospectively, given that our best data for most MDS management principles remain circumstantial. Unfortunately, there's the rub.