New Diagnostic Test for Blood Cancers Will Help Doctors


Pictured: physician-scientist Ross Levine

A new diagnostic test that identifies genetic alterations in blood cancers will enable physicians to match patients with the best treatments for leukemias, lymphomas, and myelomas. Co-developed by Memorial Sloan-Kettering and cancer genomics company Foundation Medicine, the test analyzes samples from patients with the blood diseases and provides information about hundreds of genes known to be associated with these disorders.

The genetic profile will help physicians make more-accurate prognoses and also guide them in treatment recommendations — from deciding whether to take an intensive approach with existing drugs such as chemotherapy to enrolling patients in clinical trials investigating novel therapies. The new test is produced commercially by Foundation Medicine and is expected to be available by the end of this year.

Medical oncologist Ross Levine, who led research at Memorial Sloan-Kettering contributing to the development of the test along with physician-scientists Marcel van den Brink, Ahmet Dogan, and Scott Armstrong, presented results demonstrating its accuracy today at the annual meeting of the American Society of Hematology in New Orleans.

A Tool with Broad Impact

The test will play an essential role in the clinical care of most patients with blood disorders at Memorial Sloan-Kettering and, it is expected, in the care of patients throughout the United States. According to the Leukemia and Lymphoma Society, an estimated 1.1 million people in the nation are currently living with, or in remission from, leukemia, lymphoma, and myeloma, and an estimated combined total of more than 148,000 will be diagnosed with one of these diseases in 2013.

“Our hope is that this test becomes available to all patients in the country with these malignancies,” Dr. Levine says. “We were particularly excited that we weren’t just developing a tool for the relatively small number of people who are treated at our institution, but providing access to state-of-the-art cancer genomics more broadly.”

The diagnostic test was developed and validated using more than 400 samples from Memorial Sloan-Kettering patients with the three blood disorders. Dr. Levine explains that it is far more comprehensive than existing tests, which focus on a small number of genetic mutations associated with specific blood cancer types. The new test analyzes more than 400 cancer-related genes, and unlike most standard tests, it looks for alterations in both DNA and RNA.

Sequencing RNA along with DNA is especially useful in the detection of certain kinds of genetic alterations that commonly occur in blood cancers. These include translocations (which occur when pieces of DNA are exchanged between two chromosomes) and fusion genes (new genes that include parts of two different genes). In addition to improving the treatment of patients, Memorial Sloan-Kettering will use information gleaned from the test to further advance research into blood cancers.

Clinically Relevant Mutations

Dr. Levine explains that Memorial Sloan-Kettering researchers worked with Massachusetts-based Foundation Medicine to annotate, or define, every gene in the panel to correlate it with clinical data and to provide insight into how this information can be used to guide clinical decision making.

“What’s vital about the test is that it’s not just reporting the presence of specific alterations but also indicating how a particular genetic event detected in a patient can guide either prognosis or therapy,” he says. “We identified clinically relevant mutations that were not found using standard tests. These mutations are ‘actionable,’ meaning that targeting them can change the course of the disease, including directing patients to innovative clinical trials.”
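To picture how such annotation might work in software, consider a minimal sketch in Python. Every gene name, alteration, and annotation below is a hypothetical placeholder, not actual content of the Memorial Sloan-Kettering/Foundation Medicine panel:

```python
# Hypothetical sketch of an annotated panel lookup. Every gene, alteration,
# and annotation here is a placeholder, not real panel content.
ANNOTATED_PANEL = {
    ("GENE_A", "V600E"): {
        "prognosis": "adverse",
        "action": "consider targeted-inhibitor clinical trial",
        "actionable": True,
    },
    ("GENE_B", "exon12_insertion"): {
        "prognosis": "favorable",
        "action": "standard induction chemotherapy",
        "actionable": False,
    },
}

def interpret(detected_variants):
    """Attach a clinical annotation to each detected (gene, alteration) pair."""
    return {
        variant: ANNOTATED_PANEL.get(
            variant, {"action": "variant of unknown significance"}
        )
        for variant in detected_variants
    }

print(interpret([("GENE_A", "V600E"), ("GENE_C", "R132H")]))
```

The point of the design is that the panel stores interpretation alongside detection, so the report can suggest a next step rather than just list mutations.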

Initially, the goal for the test is to produce the full genetic profile from a patient sample within three to four weeks. “With the exception of someone who has very acute leukemia that requires immediate treatment decisions, this test is going to be valuable in clinical care,” Dr. Levine says.


Ovarian Cancer Screen Shows Promise


Researchers from the University of Texas MD Anderson Cancer Center have developed a highly specific screening procedure for ovarian cancer, the fifth-most common form of cancer among women. The screen includes a blood test, followed by a vaginal ultrasound for high-risk individuals. The findings were published Monday (August 26) in Cancer.

The symptoms of early-stage ovarian cancer—bloating and abdominal pain—are common to many ailments. When the disease is caught early, women have a 90 percent survival rate; late detection lowers survival to 30 percent, making early detection critical.

For the current study, the researchers first assessed baseline levels of the carbohydrate antigen (CA)-125 protein. Women with low levels were retested yearly, whereas intermediate-risk women were tested again every three months. High-risk women were screened with a vaginal ultrasound and referred to a gynecologist. The team tested the procedure on 4,051 post-menopausal women between the ages of 50 and 74 during an 11-year period. Only 10 women were recommended for surgery. Of those, four were found to have invasive ovarian cancer. Overall, the new test showed 99.9 percent specificity, meaning there was a very low risk of false-positive results.
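The reported specificity follows directly from those numbers, and the triage logic is simple to sketch. In the outline below, the risk cutoffs are invented placeholders; as the study authors note, the actual strategy scored change in CA-125 over time rather than applying fixed thresholds:

```python
# Sketch of the two-stage triage described above. The risk cutoffs are
# invented placeholders; the actual study scored change in CA-125 over
# time rather than applying fixed thresholds.
def triage(risk_score):
    if risk_score < 0.3:        # low risk (placeholder cutoff)
        return "retest in 1 year"
    elif risk_score < 0.7:      # intermediate risk (placeholder cutoff)
        return "retest in 3 months"
    else:                       # high risk
        return "vaginal ultrasound + referral to gynecologist"

# Specificity arithmetic from the article's numbers: of 4,051 women,
# 10 were referred to surgery and 4 had invasive ovarian cancer.
women = 4051
referred = 10
true_positives = 4
false_positives = referred - true_positives      # 6
without_cancer = women - true_positives          # 4,047
specificity = 1 - false_positives / without_cancer
print(f"specificity: {specificity:.3%}")         # ~99.852%, i.e. roughly 99.9%
```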

“The results from our study are not practice-changing at this time,” Karen Lu, lead author of the study and an oncologist at MD Anderson, said in a statement. “However, our findings suggest that using a longitudinal (or change over time) screening strategy may be beneficial in post-menopausal women with an average risk of developing ovarian cancer.”

Sarah Blagden, an oncology consultant at Imperial College London, told The Guardian that there is a risk that such a test would give women a false sense of security and make them less likely to report symptoms to their doctor. “If you have a screening program that tests people once a year, it might have a dangerous effect,” she said.

Meanwhile, the UK Collaborative Trial of Ovarian Cancer Screening, a randomized trial studying 300,000 women using the same two-stage procedure, is slated for completion in 2015.

IBM’s Watson Now Tackles Clinical Trials at MD Anderson Cancer Center


IBM continues to expand the use of its Watson supercomputer, from winning Jeopardy! to handling incoming call-center questions to guiding cancer doctors at Memorial Sloan Kettering toward better diagnoses. Today it announced a new pilot program for Watson at Houston’s renowned MD Anderson Cancer Center. The institution has been trying out Watson for a little under a year in its leukemia practice, as an expert advisor to the doctors running clinical trials for new drugs.

According to the FDA, some $95 billion is spent on clinical trials each year, and only 6% are completed on time. Even at the best cancer centers, doctors running trials are feeling around in the dark: there are so many variables in play for each patient, and clinicians traditionally look at only the few dozen they consider most important. Watson, fed with terabytes of general knowledge, medical literature, and MD Anderson’s own electronic medical records, can riffle through thousands more variables to solve the arduous task of matching patients to the right trial and managing their progress on new cancer drugs.
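To make the matching task concrete, here is a toy eligibility filter in Python. It is our illustration of the general problem, not IBM’s actual Watson pipeline; every field and criterion is hypothetical:

```python
# Toy trial-matching sketch (hypothetical fields and criteria throughout).
patient = {"diagnosis": "AML", "age": 64, "prior_therapies": 1, "FLT3_mutated": True}

trials = [
    {"id": "TRIAL-001", "diagnosis": "AML", "max_age": 70, "requires": "FLT3_mutated"},
    {"id": "TRIAL-002", "diagnosis": "CML", "max_age": 80, "requires": None},
]

def eligible(patient, trial):
    # Hypothetical criteria: matching diagnosis, an age cap, and at most
    # one required biomarker. A real system weighs thousands of variables.
    return (patient["diagnosis"] == trial["diagnosis"]
            and patient["age"] <= trial["max_age"]
            and (trial["requires"] is None or patient.get(trial["requires"], False)))

print([t["id"] for t in trials if eligible(patient, t)])  # ['TRIAL-001']
```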

“It’s still in testing and not quite ready for the mainstream yet, but it has the infrastructure to potentially revolutionize oncology research,” says MD Anderson cancer doctor Courtney DiNardo. “Just having all of a patient’s data immediately on one screen is a huge time saver. It used to take hours sometimes just to organize it all.”

In a blog post published today, Dr. DiNardo gave an example of filling in for a colleague who was out of town. She had to meet with one of his patients who had a particularly complicated condition that needed a management decision. “Under normal circumstances, it may have taken me all afternoon to prepare for the meeting with enough insight to provide the most appropriate treatment decisions. With Watson, I am able to get a patient’s history, characteristics, and treatment recommendations based on my patient’s unique characteristics in seconds.”

Watson isn’t taking over the work from doctors. It’s more of an advisor, giving evidence-based treatment advice based on standards of care, while providing the scientific rationale for each choice it makes. Doctors can click on any option and drill down to the medical literature or patient data used to generate that option, along with the level of confidence Watson has in that source. “It’s tracking everything and alerts you when something’s wrong, like when you need to start a patient on prophylactic antibiotics if they have severe neutropenia (an abnormally low number of a certain type of white blood cell),” said Dr. DiNardo. “We might know that here at MD Anderson because we see hundreds of leukemia patients, but a less expert center might not. Now everyone in the world becomes a leukemia expert.”

So far only ten oncologists on the MD Anderson faculty have been involved in the pilot, and while Watson is using real patient data, its advice hasn’t been applied to real patients yet. That might come in early 2014. But for DiNardo, the big-data opportunity alone, using Watson to look across all of genomics and molecular research while assimilating every structured and unstructured byte of patient data, is tantalizing. “The potential is really amazing,” she says.

Crunching Numbers: What Cancer Screening Statistics Really Tell Us


Over the past several years, the conversation about cancer screening has started to change within the medical community. Be it breast, prostate, or ovarian cancer, the trend is to recommend less routine screening, not more. These recommendations are based on an emerging—if counterintuitive—understanding that more screening does not necessarily translate into fewer cancer deaths and that some screening may actually do more harm than good.

Much of the confusion surrounding the benefits of screening comes from interpreting the statistics that are often used to describe the results of screening studies. An improvement in survival—how long a person lives after a cancer diagnosis—among people who have undergone a cancer screening test is often taken to imply that the test saves lives.

But survival cannot be used accurately for this purpose because of several sources of bias.

Sources of Bias

A graphic illustrating lead-time bias. (Image from O. Wegwarth et al., Ann Intern Med, March 6, 2012:156)

Lead-time bias occurs when screening finds a cancer earlier than that cancer would have been diagnosed because of symptoms, but the earlier diagnosis does nothing to change the course of the disease. (See the graphic on the right for further explanation.)

Lead-time bias is inherent in any analysis comparing survival after detection. It makes 5-year survival after screen detection—and, by extension, earlier cancer diagnosis—an inherently inaccurate measure of whether screening saves lives. Unfortunately, the perception of longer life after detection can be very powerful for doctors, noted Dr. Donald Berry, professor of biostatistics at the University of Texas MD Anderson Cancer Center.

“I had a brilliant oncologist say to me, ‘Don, you have to understand: 20 years ago, before mammography, I’d see a patient with breast cancer, and 5 years later she was dead. Now, I see breast cancer patients, and 15 years later they’re still coming back, they haven’t recurred; it’s obvious that screening has done wonders,'” he recounted. “And I had to say no—that biases could completely explain the difference between the two [groups of patients].”
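A small worked example shows how the numbers can mislead. Suppose, purely for illustration, that screening advances the date of diagnosis by four years but treatment is no more effective, so the patient dies at the same age either way:

```python
# Worked example of lead-time bias, assuming screening moves diagnosis
# 4 years earlier but leaves the date of death unchanged.
age_at_symptoms = 67.0      # diagnosis age without screening
age_at_death = 70.0         # same in both scenarios
lead_time = 4.0             # how much earlier screening finds the cancer

survival_without_screening = age_at_death - age_at_symptoms               # 3 years
survival_with_screening = age_at_death - (age_at_symptoms - lead_time)    # 7 years

print(survival_without_screening, survival_with_screening)  # 3.0 7.0
# "Survival after diagnosis" jumps from 3 to 7 years, and 5-year survival
# flips from 0% to 100%, yet the patient dies at 70 either way.
```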

Another confounding phenomenon in screening studies is length-biased sampling (or “length bias”). Length bias refers to the fact that screening is more likely to pick up slower-growing, less aggressive cancers, which can exist in the body longer than fast-growing cancers before symptoms develop.

A graphic illustrating overdiagnosis bias. (Image from O. Wegwarth et al., Ann Intern Med, March 6, 2012:156)

Dr. Berry likens screening to reaching into a bag of potato chips—you’re more likely to pick a larger chip because it’s easier for your hand to find, he explained. Similarly, with a screening test “you’re going to pick up the slower-growing cancers disproportionately, because the preclinical period when they can be detected by screening—the so-called sojourn time—is longer.”
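The potato-chip effect is easy to reproduce in a toy simulation. Under the single assumption that a periodic screen is more likely to catch a tumor the longer its sojourn time, screen-detected tumors come out disproportionately slow-growing:

```python
import random

# Toy simulation of length-biased sampling. Assumption: slower-growing tumors
# have longer sojourn times (the preclinical window in which screening can
# find them), and a periodic screen is more likely to catch a tumor the
# longer that window is.
random.seed(1)
sojourn_times = [random.uniform(0.5, 10.0) for _ in range(100_000)]  # years

screen_detected, symptom_detected = [], []
for sojourn in sojourn_times:
    if random.random() < sojourn / 10.0:   # longer window -> more likely screened
        screen_detected.append(sojourn)
    else:
        symptom_detected.append(sojourn)

mean = lambda xs: sum(xs) / len(xs)
print(f"screen-detected mean sojourn:  {mean(screen_detected):.2f} years")
print(f"symptom-detected mean sojourn: {mean(symptom_detected):.2f} years")
# Screen-detected tumors skew slow-growing even though both groups come
# from the same underlying population.
```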

The extreme example of length bias is overdiagnosis, where a slow-growing cancer found by screening never would have caused harm or required treatment during a patient’s lifetime. Because of overdiagnosis, the number of cancers found at an earlier stage is also an inaccurate measure of whether a screening test can save lives. (See the graphic on the left for further explanation.)

The effects of overdiagnosis are usually not as extreme in real life as in the worst-case scenario shown in the graphic; many cancers detected by screening tests do need to be treated. But some do not. For example, recent studies have estimated that 15 to 25 percent of screen-detected breast cancers and 20 to 70 percent of screen-detected prostate cancers are overdiagnosed.

How to Measure Lives Saved

Because of these biases, the only reliable way to know if a screening test saves lives is through a randomized trial that shows a reduction in cancer deaths in people assigned to screening compared with people assigned to a control (usual care) group. In the NCI-sponsored randomized National Lung Screening Trial (NLST), for example, screening with low-dose spiral CT scans reduced lung cancer deaths by 20 percent relative to chest x-rays in heavy smokers. (Previous studies had shown that screening with chest x-rays does not reduce lung cancer mortality.)

However, improvements in mortality caused by screening often look small—and they are small—because the chance of a person dying from a given cancer is, fortunately, also small. “If the chance of dying from a cancer is small to begin with, there isn’t that much risk to reduce. So the effect of even a good screening test has to be small in absolute terms,” said Dr. Lisa Schwartz, professor of medicine at the Dartmouth Institute for Health Policy and Clinical Practice and co-director of the Veterans Affairs Outcomes Group in White River Junction, VT.

For example, in the case of NLST, a 20 percent decrease in the relative risk of dying of lung cancer translated to an approximately 0.4 percentage point reduction in lung cancer mortality (from 1.7 percent in the chest x-ray group to 1.3 percent in the CT scan group) after about 6.5 years of follow-up, explained Dr. Barry Kramer, director of NCI’s Division of Cancer Prevention.
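The arithmetic behind those figures is worth spelling out. (Because 1.7 and 1.3 percent are themselves rounded, they imply a relative reduction closer to 24 percent than the trial’s reported 20 percent.)

```python
# Arithmetic behind the NLST figures quoted above.
mortality_xray = 0.017   # ~1.7% lung cancer mortality, chest x-ray arm
mortality_ct = 0.013     # ~1.3% lung cancer mortality, low-dose CT arm

absolute_reduction = mortality_xray - mortality_ct        # 0.004 -> 0.4 points
relative_reduction = absolute_reduction / mortality_xray  # ~24% on rounded inputs

print(f"absolute risk reduction: {absolute_reduction:.1%}")     # 0.4%
print(f"relative risk reduction: {relative_reduction:.0%}")     # ~24% (trial reported 20%)
print(f"number needed to screen: {1 / absolute_reduction:.0f}") # ~250 per death averted
```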

A recent study published March 6 in the Annals of Internal Medicine by Dr. Schwartz and her colleagues showed how these relatively small—but real—reductions in mortality from screening can confuse even experienced doctors when pitted against large—but potentially misleading—improvements in survival.

Tricky Even for Experienced Doctors

To test community physicians’ understanding of screening statistics, Dr. Schwartz, Dr. Steven Woloshin (also of Dartmouth and co-director of the Veterans Affairs Outcomes Group), and their collaborators from the Max Planck Institute for Human Development in Germany developed an online questionnaire based on two hypothetical screening tests. They then administered the questionnaire to 412 doctors specializing in family medicine, internal medicine, or general medicine who had been recruited from the Harris Interactive Physician Panel.

The effects of the two hypothetical tests were described to the participants in two different ways: in terms of 5-year survival and in terms of mortality reduction. The participants also received additional information about the tests, such as the number of cancers detected and the proportion of cancer cases detected at an early stage.

The results of the survey showed widespread misunderstanding. Almost as many doctors believed, incorrectly, that an improvement in 5-year survival shows a test saves lives (76 percent of those surveyed) as believed, correctly, that mortality data provide that evidence (81 percent).


About half of the doctors erroneously thought that simply finding more cases of cancer in a group of people who underwent screening compared with an unscreened group showed that the test saved lives. (In fact, a screening test can only save lives if it advances the time of diagnosis and earlier treatment is more effective than later treatment.) And 68 percent of doctors surveyed said they were even more likely to recommend the test if evidence showed that it detected more cancers at an early stage.

Doctors were three times more likely to say they would recommend the test supported by irrelevant survival data than the test supported by relevant mortality data.

In short, “the majority of primary care physicians did not know which screening statistics provide reliable evidence on whether screening works,” Dr. Schwartz and her colleagues wrote. “They were more likely to recommend a screening test supported by irrelevant evidence…than one supported by the relevant evidence: reduction in cancer mortality with screening.”

Teaching the Testers

“In some ways these results weren’t surprising, because I don’t think [these statistics] are part of the standard medical school curriculum,” said Dr. Schwartz.

“When we were in medical school and in residency, this wasn’t part of the training,” Dr. Woloshin agreed.

“We should be teaching residents and medical students how to correctly interpret these statistics and how to see through exaggeration,” added Dr. Schwartz.

Some schools have begun to do this. The University of North Carolina (UNC) School of Medicine has introduced a course called the Science of Testing, explained Dr. Russell Harris, professor of medicine at UNC. The course includes modules on 5-year survival and mortality outcomes.

The UNC team also recently received a research grant to form a Research Center for Excellence in Clinical Preventive Services from the Agency for Healthcare Research and Quality. “Part of our mandate is to talk not only to medical students but also to community physicians, to help them begin to understand the pros and cons of screening,” said Dr. Harris.

Drs. Schwartz and Woloshin also think that better training for reporters, advocates, and anyone who disseminates the results of screening studies is essential. “A lot of people see those [news] stories and messages, so people writing them need to understand,” said Dr. Woloshin.

Patients also need to know the right questions to ask their doctors. “Always ask for the right numbers,” he recommended. “You see these ads with numbers like ‘5-year survival changes from 10 percent to 90 percent if you’re screened.’ But what you always want to ask is: ‘What’s my chance of dying [from the disease] if I’m screened or if I’m not screened?'”

Sharon Reynolds

Source: NCI


Researchers Look to Single Cells for Cancer Insights


When asked about the biggest challenges to better understanding cancer, one word practically leaps from the mouths of many researchers: heterogeneity.

A tumor, the researchers stress, is not a uniform mass of identical cells with identical behaviors. Cells can act quite differently in one part of a tumor than in another. Genes critical to cell proliferation, for instance, may be active in one area but not another, or a subpopulation of cancer cells may be dormant, practically hiding from any drug that may try to enter their lair.

This heterogeneity has been blamed, for example, for the limited success of targeted therapies and of efforts to identify better diagnostic and prognostic markers of disease.

Researchers are now discovering what many have long suspected: much of what makes tumors heterogeneous overall is the substantial heterogeneity among individual cancer cells.

Until recently, the meticulous scrutiny of individual cells has been nearly impossible, particularly because of the relative scarcity in each cell of the key components that need to be measured, such as DNA and RNA. But thanks to technological advances that can help overcome some of those limitations, a growing number of investigators are beginning to delve deeper into the biology of the single cell.

The studies conducted to date “show us how much diversity there is among cancer cells in a given tumor,” said Dr. Garry Nolan, an immunologist at Stanford University whose lab is focused on mapping communication networks in individual cancer cells.

Even with improved technology, however, conducting studies at the single-cell level is difficult and can be time-consuming and expensive. But with growing interest—and $90 million over 5 years from the NIH Common Fund initiative (see the sidebar)—there is cautious optimism that over the next decade single-cell research may begin to pay dividends for patients with cancer and other diseases.

Moving beyond the Average

Most research on the molecular biology of tumors requires the use of mixtures of tens or hundreds of thousands of cells. Those samples “have immune cells, endothelial cells, and other infiltrating cells that make up the milieu of what a tumor actually is,” explained Dr. Dan Gallahan of NCI’s Division of Cancer Biology. “That really makes it difficult to get a grasp on what defines or, more importantly, how to treat a tumor.”

Results from studies that involve a bulk population of cells, Dr. Gallahan continued, essentially represent an average measurement.

Studying single cells is a way to “defy the average,” Dr. Marc Unger, chief scientific officer of Fluidigm Corporation, said earlier this year at a stem cell conference in Japan. (Fluidigm, which develops tools for single-cell analysis, and the Broad Institute recently announced plans to establish a single-cell genomics research center.)

Single-cell analysis may be able to provide important clinical insights, said Dr. Nicholas Navin of the University of Texas MD Anderson Cancer Center, who has used next-generation sequencing to study variations in gene copy number (copy number variation) in single cancer cells.

Single-cell analysis might, for example, help identify “pre-existing [cell populations] that are resistant to chemotherapy or rare subpopulations that are capable of invasion and metastasis,” he said. “We may also be able to quantify the extent of heterogeneity in a patient’s tumor using single-cell data and use this index to predict how a patient will respond to treatment,” Dr. Navin continued.
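One plausible way to build such an index (our illustration, not necessarily the measure Dr. Navin’s group uses) is the Shannon diversity of clone frequencies estimated from single-cell profiles:

```python
import math
from collections import Counter

# One plausible heterogeneity index (our illustration, not necessarily the
# measure used at MD Anderson): Shannon diversity over the clonal
# subpopulations inferred from single-cell copy-number profiles.
def shannon_diversity(clone_labels):
    """Shannon entropy of clone frequencies; higher = more heterogeneous."""
    counts = Counter(clone_labels)
    n = len(clone_labels)
    h = 0.0
    for count in counts.values():
        p = count / n
        h -= p * math.log(p)
    return h

homogeneous = ["cloneA"] * 100
heterogeneous = ["cloneA"] * 40 + ["cloneB"] * 35 + ["cloneC"] * 25

print(shannon_diversity(homogeneous))    # 0.0
print(shannon_diversity(heterogeneous))  # ~1.08
```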

Results from several recent studies have highlighted the challenges posed by tumor heterogeneity.

For example, researchers at BGI (formerly Beijing Genomics Institute) sequenced the protein-coding regions of DNA (the exome) of 20 cancer cells and 5 normal cells from a man with metastatic kidney cancer. The researchers found a tremendous amount of genetic diversity across the cancer cells, with very few sharing any common genetic mutations.

Much of the work in the analysis of single cells is still quite preliminary, and any potential clinical impact is still some years away, researchers agree.

“The problem with the single-cell data is that we don’t really know yet what they mean,” Dr. Sangeeta Bhatia of the Massachusetts Institute of Technology commented recently in Nature Biotechnology.

And studies involving bulk populations of cells will not be going away any time soon, noted Dr. Betsy Wilder, director of the NIH Office of Strategic Coordination, which oversees the NIH Common Fund.

“Single-cell analysis isn’t warranted for every question that’s out there,” Dr. Wilder stressed. “Studies using populations of cells will continue to be done, because it makes a lot of sense to do them.”

Technology, a Driving Force

Beyond just an interest in learning more about single cells—what Dr. Gallahan called “the operational units in biology”—technology has been the driving force behind the growth of this field.

Dr. Stephen Quake, also of Stanford, has pioneered the use of microfluidics, which typically uses small chips with microscopic channels and valves—often called lab-on-a-chip devices—that allow researchers to single out and study individual cells. Dr. Quake, who co-founded Fluidigm, and others are increasingly using these devices for gene-expression profiling and for sequencing RNA and DNA of individual cells.

Dr. Nolan’s research involves a hybrid approach that combines two technologies: a souped-up method of flow cytometry, which has been used for several decades to sort cells and to perform limited analyses of single cells, and mass spectrometry, which is often used to identify and quantify proteins in biological samples.

Dr. Nolan’s lab is using this “mass cytometry” approach—developed by Dr. Scott Tanner of the University of Toronto—to characterize the response of individual cells to different stimuli, such as cytokines, growth factors, and a variety of drugs. Much of the group’s work has focused on analyzing normal blood-forming cells.

They published an influential study last year in Science that revealed some of the subtle biochemical changes that occur during cell differentiation. The study also described how dasatinib (Sprycel), a drug used to treat chronic myelogenous leukemia, affects certain intracellular activities. The research, Dr. Nolan said, is a prelude to studying individual cells from patients with blood cancers. The approach, he believes, may prove particularly useful for identifying new drugs and for testing them in the lab.


The Microscale Life Sciences Center (MLSC), an NIH Center of Excellence in Genomic Science that is housed at Arizona State University, develops and applies the latest technology to single-cell research.

The center—a collaboration of investigators from Arizona State, the University of Washington, Brandeis University, and the Fred Hutchinson Cancer Research Center—includes researchers from numerous disciplines, including microfluidics, computer science, physics, engineering, and biochemistry, explained principal investigator Dr. Deirdre Meldrum.

“All of these disciplines are needed to develop the new technologies we’re working on,” said Dr. Meldrum, an electrical engineer by training.

In its initial work, the MLSC has measured metabolic processes in single living cells, including cellular respiration—the process by which cells acquire energy—as it relates to an individual cell’s ability to resist or succumb to cell death. The workhorse of this effort is a platform called the Cellarium, developed by Dr. Meldrum’s team. Individual cells are isolated in controlled chambers, Dr. Meldrum explained, “where we perturb them and measure how they change over time.”

Investigators at the MLSC and elsewhere have also developed technologies to image single cells. MLSC scientists are using a device developed by VisionGate, called the Cell-CT, “that enables accurate measurement of cellular features in true 3D,” Dr. Meldrum said.

MLSC researchers have studied abnormal esophageal cells from people with Barrett esophagus, a condition that increases the risk of esophageal adenocarcinoma. In particular, they’ve looked at how these cells respond to very low oxygen levels, or hypoxia.

Acid reflux, which can cause Barrett esophagus, can damage the esophagus “and lead to transient hypoxia in the epithelial lining of the esophagus,” explained Dr. Thomas Paulson, an investigator at Fred Hutchinson. In effect, he continued, the Cellarium system provides a snapshot of how this hypoxic environment selects for variants of cells that are able to survive and grow in it, providing insights into the factors that influence the evolution of cells from normal to cancerous.

Although Dr. Paulson’s work at MLSC is focused on Barrett esophagus, he believes the approach represents an excellent model system for studying cancer risk in general.

“I think our understanding of what constitutes risk is probably going to change as we understand the types of changes that occur at the single-cell level” that can transform a healthy cell into a cancerous cell, he said.

Deeper Dives Ahead

There’s a general acknowledgement in the field that single-cell analysis still has important limitations. Technological improvements are needed that can allow for the same type of molecular and structural “deep dives” that can be achieved by studying batches of cells. And powerful computer programs will be needed to help interpret the data from single-cell studies.

In addition, the research will eventually have to move beyond the confines of the mostly artificial environments in which single cells are now being tested, Dr. Gallahan noted. “As the technology gets better, we should be able to do more of this work in an in vivo setting.”

Although much more work is needed, the potential for what can be learned from studying single cells is quite large, Dr. Nolan believes.

“The fact that we’ve been able to make good decisions and learn as much as we have, even at the level of resolution [of cell populations], means that there’s something of even greater value to mine when you get to the level of the single cell,” he said.

Transforming the Field of Single-Cell Research

This month, the National Institutes of Health will announce grant recipients for the NIH Common Fund’s single-cell analysis program.

The program, which includes three funding opportunities, “is largely a technology building program,” explained Dr. Wilder. The NIH Common Fund launched this program now because “there’s a sense that the technologies exist that can enable us to do the sort of analysis required to look at single cells in their native environment,” such as in a piece of excised tissue.

Although the focus is on technology, an important goal of the initiative is to support research that will “identify a few general principles of how single cells behave in a complex environment,” added Dr. Ravi Basavappa, the program director for the single-cell analysis program.

From the planning discussions, it was clear that the program should not limit the types of technology under consideration, Dr. Wilder commented. “Our analysis indicated that there are a lot of possibilities, so we left it up to the imaginations of the investigators to determine what technologies would be most transformative for the field as a whole.”

Source: NCI

Researchers Use Gene Deletions to Find Cancer Treatment Targets


Chromosomal damage that can transform healthy cells into cancer cells may also create weaknesses that can be exploited to kill the cancer cells, a new study suggests. The idea, called “collateral vulnerability,” could be used to identify new targets for drug therapy in multiple cancers, according to researchers from the Dana-Farber Cancer Institute and the University of Texas MD Anderson Cancer Center. The study was published August 16 in Nature.

Directly targeting genetic mutations that drive cancer with drugs is difficult, particularly in the case of mutations that delete tumor suppressor genes. Using data on the brain cancer glioblastoma multiforme (GBM) from The Cancer Genome Atlas (TCGA) initiative, the research team identified a number of “collateral” or “passenger” gene deletions that occurred during chromosomal damage that resulted in the loss of tumor suppressor genes.

The researchers next looked for passenger gene deletions that met two criteria: the deleted genes were involved in functions vital to cell survival, and the deleted genes were closely related to existing genes that perform similar functions. This loss of redundancy caused by passenger gene deletions can potentially be exploited to selectively kill tumor cells, the authors explained.
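The two-criteria filter lends itself to a simple sketch. Only the ENO1/ENO2 pair below comes from the study; the other genes and the data structures are hypothetical:

```python
# Sketch of the two-criteria filter for collateral vulnerabilities.
# Only the ENO1/ENO2 pair comes from the study; everything else here
# is a hypothetical placeholder.
passenger_deletions = {"ENO1", "GENE_X", "GENE_Y"}  # lost alongside a tumor suppressor
essential_genes = {"ENO1", "GENE_Y"}                # vital to cell survival
redundant_paralogs = {"ENO1": "ENO2"}               # close relatives with the same function

candidates = [
    (deleted, redundant_paralogs[deleted])
    for deleted in passenger_deletions
    if deleted in essential_genes         # criterion 1: vital function
    and deleted in redundant_paralogs     # criterion 2: a back-up gene exists to target
]
print(candidates)  # [('ENO1', 'ENO2')] -> inhibit ENO2 to hit only the tumor cells
```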

One gene that met these criteria is ENO1. ENO1 produces enolase 1, an enzyme that plays a central role in a process cells use to make energy. Human cells have a closely related gene (ENO2) that produces the enzyme enolase 2, which acts as a back-up for enolase 1 in brain tissue. Brain cells normally have a high level of enolase 1 activity and a small amount of enolase 2 activity. In some patients with GBM, however, the tumor cells lack enolase 1 activity because ENO1 was deleted when a tumor suppressor gene was deleted. This lack of enolase 1 activity could make these tumor cells more vulnerable to enolase inhibition.

This idea was tested using two targeting strategies. First, in GBM cell lines that lacked ENO1, the investigators showed that silencing ENO2 gene expression with a short hairpin RNA (a short RNA sequence that blocks the production of enolase 2 protein from ENO2 messenger RNA) sharply reduced cell growth, and tumors failed to form in mice injected with the treated cells.

The second approach involved a drug that targets the enolase 1 and enolase 2 proteins. Treatment of GBM cell lines lacking ENO1 with the drug caused the cancer cells to die because of the low overall enolase levels in these cells. But drug treatment had little effect on normal brain cells or GBM cells that had ENO1, since these cells have high levels of ENO1 gene expression and are, therefore, less sensitive to the drug.

The collateral vulnerability concept is similar in some respects to the idea of synthetic lethality, which uses genetic mutations in cancer-associated genes to identify other potential cellular vulnerabilities, explained the study’s co-lead author, Dr. Florian Muller of MD Anderson.

There are many more passenger gene deletions than tumor suppressor gene deletions, “and some of these passenger-deleted genes perform functions critical for cell survival,” Dr. Muller continued. “So, by expanding the concept to passenger genes, we vastly expand the possibility of finding these relationships, and, in the case of essential-redundant gene pairs like ENO1 and ENO2, we also provide a rational, knowledge-based method of drug-target discovery.”

The researchers are extending their work to other passenger gene deletions in GBM, Dr. Muller said.

This research was supported in part by the National Institutes of Health (CA95616-10 and CA009361).

Also in the Journals: Youth Tobacco Use Dropped between 2000 and 2011

Tobacco use and cigarette smoking fell among middle and high school students between 2000 and 2011, according to data from the National Youth Tobacco Survey, a school-based, self-administered questionnaire given to students in grades 6 through 12. Researchers from the Centers for Disease Control and Prevention published the findings last month in Morbidity and Mortality Weekly Report.

Percentage of U.S. Middle and High School Students Using Tobacco

                               Middle School Students      High School Students
                               2000        2011            2000        2011
Current Tobacco Use            14.9        7.1             34.4        23.2
Current Smoked Tobacco Use     14.0        6.3             33.1        21.0
Current Cigarette Use          10.7        4.3             27.9        15.8

Source: NCI


c-Met Inhibition to Radiosensitize NSCLC Tumors


Despite advances in diagnosis and treatment over the past several years, unresectable lung cancer remains a highly lethal disease, with a 5-year survival rate of only about 14% to 15% among patients selected for combined-modality treatment in clinical trials. However, a larger percentage of patients…

Abstract

Introduction: The radiation doses used to treat unresectable lung cancer are often limited by the proximity of normal tissues. Overexpression of c-Met, a receptor tyrosine kinase, occurs in about half of non–small-cell lung cancers (NSCLCs) and has been associated with resistance to radiation therapy and poor patient survival. We hypothesized that inhibiting c-Met would increase the sensitivity of NSCLC cells to radiation, enhancing the therapeutic ratio, which may potentially translate into improved local control.
Methods: We tested the radiosensitivity of two high-c-Met–expressing NSCLC lines, EBC-1 and H1993, and two low-c-Met–expressing lines, A549 and H460, with and without the small-molecule c-Met inhibitor MK-8033. Proliferation and protein expression were measured with clonogenic survival assays and Western blotting, respectively. γ-H2AX levels were evaluated by immunofluorescence staining.
Results: MK-8033 radiosensitized the high-c-Met–expressing EBC-1 and H1993 cells but not the low-c-Met–expressing cell lines A549 and H460. However, irradiation of A549 and H460 cells increased the expression of c-Met protein 30 minutes after irradiation. Subsequent targeting of this up-regulated c-Met with MK-8033 followed by a second radiation dose reduced the clonogenic survival of both A549 and H460 cells. MK-8033 reduced the levels of radiation-induced phosphorylated (activated) c-Met in A549 cells.
Conclusions: These results suggest that inhibition of c-Met could be an effective strategy to radiosensitize NSCLC tumors with high basal c-Met expression or tumors that acquired resistance to radiation because of up-regulation of c-Met.

Bhardwaj, Vikas, PhD*; Zhan, Yanai, MS; Cortez, Maria Angelica, PhD*; Ang, Kie Kian, MD, PhD*; Molkentine, David, BS; Munshi, Anupama, PhD§; Raju, Uma, PhD; Komaki, Ritsuko, MD*; Heymach, John V., MD, PhD‖; Welsh, James, PhD*

Departments of *Radiation Oncology, Investigational Cancer Therapeutics, and Experimental Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX; §Department of Radiation Oncology, College of Medicine, The University of Oklahoma, Oklahoma City, OK; and ‖Thoracic Head & Neck Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX.
Disclosure: The authors declare no conflict of interest.
Address for correspondence: James Welsh, MD, Department of Radiation Oncology, Unit 97, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd, Houston, TX 77030. E-mail: jwelsh@mdanderson.org

Source: www.getinsidehealth.com