On December 12th, Vadim Gladyshev, from Harvard Medical School in Boston, USA, gave a talk to a packed room at the PRBB, invited by Roderic Guigó from the CRG. Gladyshev investigates the molecular basis of natural changes in longevity and the biological mechanisms involved in aging.
The first part of the talk focused on the mechanisms of aging. Gladyshev’s main question was: why and how do things go wrong with age?
He began by introducing the aging theories that have contributed most significantly to the debate in the research community. Some were built in the 1950s on 19th-century insights, whereas others are very recent. According to him, these theories are very different, each touching on a particular aspect of the aging process; within that context each has its merit, but all are incomplete.
He continued with his own view of aging. He suggested that the imperfection of biological processes leads to an inevitable accumulation of damage – which he calls the deleteriome – that causes aging. His research group is now characterizing the properties of this cumulative damage and its impact on the aging process. They also study cancer as a disease of aging.
While the mechanisms of aging and the process of lifespan control may seem highly related topics, he maintained that they are different areas. To explain the difference, he used the metaphor of a river, where lifespan is equivalent to the time the water needs to flow from the mountain to the ocean. According to him, the route of the river can be changed to make the journey longer, just as the lifespan of humans can be extended. However, the fact that the river flows because of gravity cannot be changed, just as we cannot change the fact that aging occurs because of imperfection. So the cause of aging is different from the determinants of longevity.
The second part of the talk was about the mechanisms of lifespan control, trying to answer two questions: why do cells and organisms live as long as they do, and how does nature adjust lifespan?
Gladyshev’s research team uses multiple approaches to address these questions. One involves studying the genes of exceptionally long-lived mammals, such as the naked mole rat, Brandt’s bat and the bowhead whale. Brandt’s bat (Myotis brandtii) is found throughout most of Europe and parts of Asia, and it often lives more than 40 years.
The naked mole rat (Heterocephalus glaber) is a burrowing rodent found in East Africa, well adapted to its underground existence. It is characterized by small eyes, short thin legs, a hairless body (hence the common name) and wrinkled pink or yellowish skin, and its large front teeth are used to dig. It can live up to 31 years, the record for the longest-living rodent.
Gladyshev’s group recently sequenced and analyzed the genomes of these animals, and discovered some of the adaptations that contribute to their long lifespans. They also identified general gene expression and metabolic changes associated with longer life.
In addition to the evolutionary study of long-lived animals, Gladyshev’s lab focuses on cell types with different lifespans and on long-lived mouse models. They also carry out cross-species analyses and cell-culture-based profiling in order to find unique and common mechanisms of longevity. Longevity signatures (based on gene expression) identify candidate interventions for lifespan extension. Ultimately, the researchers would like to find treatments or other approaches to extend lifespan and diminish the consequences of age-related diseases.
At the end of the talk the audience showed great interest in Gladyshev’s research, posing many questions about aging in yeast, epigenetic drift in aging and the relationship between lifespan and maturity. In a fruitful and interesting conversation, some in the audience also suggested research approaches such as studying aging in single cells or focusing on the physics of aging. We’ll have to wait for Gladyshev’s next talk to see whether some of these suggestions bear fruit!
A report by Mari Carmen Cebrián
Guillaume Filion’s latest post is aimed at those wanting to understand the details of how the Burrows–Wheeler transform (an algorithm used in data compression) works. It may be of particular interest to genomics researchers working on alignments since, Filion says, Burrows–Wheeler indexing is used to perform the seeding step of the DNA alignment problem and is exceptionally well adapted to indexing the human genome.
For those of you who are not afraid of the mathematical details, you can read the post on his blog, “The Grand Locus”, here.
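For a flavour of what the post covers, the transform itself fits in a few lines of Python. This is a naive sketch for illustration only; real aligners build the transform via suffix arrays rather than sorting explicit rotations:

```python
def bwt(text, sentinel="$"):
    """Naive Burrows-Wheeler transform: sort all cyclic rotations of the
    string and return the last column of the sorted rotation matrix."""
    text += sentinel  # unique terminator, lexicographically smallest
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rotation[-1] for rotation in rotations)

print(bwt("banana"))  # -> "annb$aa"
```

The output is a permutation of the input that tends to group identical characters together, which is what makes it so useful both for compression and for indexing.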
A new collaboration among scientists at different centres of the PRBB, with Mar Albà from the IMIM as leading author, has come up with a new mechanism to explain the formation of de novo genes. Although new genes commonly arise by gene duplication and diversification of the copy, some genes appear in genomic regions which, compared with other species, did not previously contain any gene. How do these genes originate from nothing?
In a preprint submitted to arXiv.org the authors propose – based on transcriptomic comparisons between humans and three other mammals – that new regulatory motifs/promoters first appeared in those regions, which led to the activation of transcription and the origin of new, potentially functional genes. Albà’s group has in fact identified hundreds of putative de novo genes in the human genome.
Marc A. Marti-Renom is interested in three-dimensional structures. After eight years in the US dedicated to the world of proteins, the biophysicist returned to his native country, first Valencia and then Barcelona, to specialise in RNA and DNA folding. In 2006 he set up his own group, which today is divided between the CNAG, where there are ten people, and the CRG, where there are two. “We do the experimental part, the sample preparation, here in the CRG, and the sequencing and analysis happens in the CNAG”, he explains. For his research he requires a large sequencing and computing capacity, which he can get at the CNAG, the second-most important sequencing analysis centre in Europe. “We are fortunate to be in one of the best places in the world to do these studies,” he says proudly.
Proteins with clinical application
Proteins caught his attention while he was doing his PhD, and in 2004, when he was at the University of California (UCSF), he collaborated in the creation of the “Tropical Disease Initiative,” a drug-discovery initiative linking people from both academia and companies to try to reposition drugs in favour of neglected diseases such as malaria and tuberculosis. “The idea was to make it all open source so everything we found was published directly to the web and couldn’t be patented”, says Marti-Renom.
The Structural Genomics group was a major player in one of the first instances that genome sequencing was used at the clinical level. “There was a patient with tuberculosis and a high resistance to antibiotics. We sequenced samples from the patient and found out he was infected by two different strains, and one of them was mutated. When we made models of the protein structure resulting from this mutation we saw how it was affecting the function”, explains the scientist. According to Marti-Renom, in a few years not only will everyone have their genome sequenced, but it will happen several times. “When someone develops a disease like cancer we will sequence them again to see what has changed and why”, he predicts.
Beyond proteins: RNA and DNA
Proteins, the cell’s building blocks, are not the be-all and end-all of life. Since the 1960s we have known that RNA has essential functions beyond converting the information in DNA into proteins. But very little is known about its three-dimensional structure, and in the end, function occurs in 3D. For this reason the group is developing computational tools that incorporate experimental data to make structural predictions.
The most recent biological component to enter the ‘3D world’ was the genome. In this case, too, little is known about how it folds in space. Marti-Renom’s group, along with three other groups at the CRG (those of Miguel Beato, Guillaume Filion and Thomas Graf), is carrying out the 4DGenome project, which has a budget of 12.2 million euros, in order to understand the structure of the genome and how it changes over time. “We know the genome sequence very well, thanks to molecular biology and the big genome projects. We also understand the chromosomal macrostructure, thanks to advances in microscopy; but we can’t see the middle ground, the step between the tangled skein and the well-defined chromosome”, says the head of the group. In 2006 they began using Chromosome Conformation Capture (3C) data to develop software that allows the entire genome to be viewed at high resolution, a kind of ‘molecular microscope’. With this and other technologies, like Hi-C, and using computational algorithms, they have been able to observe how different regions of the same chromosome tend to interact with each other. They have also seen that the 3D ‘photo’ of a moment when, for example, gene expression is high may be very different from one where it is low. “Without this three-dimensional information it is much more difficult to characterise how the genome works”, concludes the researcher.
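As a toy illustration of the kind of data behind these analyses: 3C- and Hi-C-type experiments yield pairs of genomic positions captured in contact, and binning those pairs produces the symmetric ‘contact matrix’ from which 3D models are computed. The coordinates, bin size and counts below are invented, not from the 4DGenome project:

```python
import numpy as np

# Toy contact data: each pair is two genomic coordinates captured together.
# Coordinates, bin size and counts are invented for illustration only.
BIN_SIZE = 100_000  # hypothetical 100 kb resolution
N_BINS = 5

contacts = [(10_000, 120_000), (130_000, 140_000), (250_000, 460_000),
            (30_000, 90_000), (410_000, 480_000)]

matrix = np.zeros((N_BINS, N_BINS), dtype=int)
for a, b in contacts:
    i, j = a // BIN_SIZE, b // BIN_SIZE
    matrix[i, j] += 1
    if i != j:
        matrix[j, i] += 1  # a contact between bins i and j is symmetric

print(matrix)  # entry (i, j) counts contacts between bins i and j
```

Frequently interacting bins show up as high entries, which is how the tendency of regions of the same chromosome to touch each other becomes visible.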
Xavier Estivill and his Genomics and Disease research group at the CRG are trying to find the genetic causes of complex diseases using the latest genomic technologies. Focused on central nervous system diseases and on non-coding RNAs, he is also involved in international sequencing projects such as the International Cancer Genome Consortium (ICGC). Hear him explain his research in this short video!
Fátima Al-Shahrour, from the CNIO in Madrid, came last week to the PRBB to give a talk entitled “Bioinformatics challenges for personalized medicine”. She explained what they do at her Translational Bioinformatics Unit in the Clinical Research Programme. And what they do is both exciting and promising.
They start with a biopsy of a tumour from a cancer patient who has relapsed after some initial treatment – they concentrate mostly on pancreatic cancer, but in principle it would work with any. From this sample they derive cell lines, but also – and they are quite unique in this – they generate a personalised xenograft. That is, they implant the human tumour in an immunocompromised mouse, creating an ‘avatar’ of the patient. After passing it from one mouse to another (they use about 60 mice per patient), they extract the tumour to analyse it by exome sequencing (and sometimes gene expression data, etc.). They then have about 8 weeks to find, using bioinformatics, druggable targets that they then test on the avatar. Those drugs that work on the mouse are then given to the patient.
The advantages of this system are many and obvious: not only can the in vivo model be used to validate the hypotheses generated by the genetic analysis, but we basically have a personalised cancer model for the patient in which we can try as many drugs as we want. It can be cryopreserved, so we have unlimited access to the sample. And since cancer is not a disease we can cure yet – patients must instead keep being checked for possible relapses, metastases, or resistance to treatment – keeping the mouse in parallel with the patient can help predict how the patient will react: whether they will develop resistance to the drug, which other mutations might appear, and so on.
But there are several disadvantages, too. One is hinted at in Fátima’s talk title: the bioinformatic analysis of the tumours to find which mutations are important in the disease (the drivers) and which can be targeted by drugs is challenging, not least because an individual cancer genome can have hundreds to thousands of mutations.
Perhaps the biggest barrier is that, at the moment, making these avatars is inefficient, very expensive and slow. And since the patients who benefit from this technology are already in a very bad clinical condition, many of them don’t live long enough to enjoy those benefits. But there are some successful cases, and Fátima mentioned a couple. In one, a man with pancreatic cancer who was treated with mitomycin after all the tests in his avatar survived more than 5 years, when he had been given 1 year at most.
So there is hope in the field of personalised medicine, even though it is still not standard practice and probably won’t be in the near future. And, as someone in the audience mentioned, in an ideal future we might even have personalised prevention, according to our genetic makeup. Wouldn’t that be great?
A report by Maruxa Martinez, Scientific Editor at the PRBB
Complex genetic disorders often involve multiple proteins interacting with each other, and pinpointing which of them are actually important for the disease is still challenging. Many computational approaches exploiting interaction network topology have been successfully applied to prioritize which individual genes may be involved in diseases, based on their proximity to known disease genes in the network.
In a paper published in PLoS One, Baldo Oliva, head of the Structural Bioinformatics group at the GRIB (UPF–IMIM), and Emre Guney have presented GUILD (Genes Underlying Inheritance Linked Disorders), a new genome-wide network-based prioritization framework. GUILD includes four novel algorithms that use protein–protein interaction data to predict gene–phenotype associations at genome-wide scale, and the authors have shown that these are comparable to, or outperform, several state-of-the-art approaches.
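To give a flavour of the general idea – this is not one of GUILD’s actual algorithms, just a minimal sketch – network-based prioritization can be framed as scoring every gene by its distance in the interaction network to the nearest known disease gene. The network and gene names below are purely illustrative:

```python
from collections import deque

def prioritize(network, seeds):
    """Score each gene by breadth-first distance to the nearest seed
    (known disease gene): closer genes receive higher scores."""
    dist = {s: 0 for s in seeds}
    queue = deque(seeds)
    while queue:
        gene = queue.popleft()
        for neighbour in network.get(gene, ()):
            if neighbour not in dist:
                dist[neighbour] = dist[gene] + 1
                queue.append(neighbour)
    # turn distances into scores in (0, 1]; unreached genes get no score
    return {gene: 1.0 / (1 + d) for gene, d in dist.items()}

# Hypothetical toy interaction network; gene names are illustrative only
network = {"APP": ["PSEN1", "APOE"], "PSEN1": ["APP", "NCSTN"],
           "APOE": ["APP", "LRP1"], "NCSTN": ["PSEN1"], "LRP1": ["APOE"]}
scores = prioritize(network, seeds=["APP"])
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```

GUILD’s algorithms are more sophisticated (they propagate scores through the whole network rather than using raw distances), but the principle of ranking candidates by network proximity to known disease genes is the same.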
As a proof of principle, the authors have used GUILD to investigate top-ranking genes in Alzheimer’s disease (AD), diabetes and AIDS using disease-gene associations from various sources.
GUILD is freely available for download at http://sbi.imim.es/GUILD.php
Guney E, Oliva B. Exploiting Protein-Protein Interaction Networks for Genome-Wide Disease-Gene Prioritization. PLoS One. 2012;7(9):e43557
The symposium will focus on the latest and most important advances in genomics, but also in genetics, molecular and cell biology, and biotechnology. Several scientists of international standing, such as Angus LAMOND (Wellcome Trust Centre for Gene Regulation and Expression, Dundee, UK), Tom MANIATIS (Columbia University, New York, US) and Iain MATTAJ (EMBL Heidelberg, Germany), will showcase the achievements of the CRG in these fields over the last 10 years. You can check the full program here.
Registration is free of charge but closes on October 8, so hurry up!
A study led by ICREA researcher Mar Albà, head of the evolutionary genomics research group at IMIM/UPF, has clarified the evolution of insertion and deletion accumulation in DNA sequences across different primate and rodent branches. Using the Prank+F algorithm, they observed that, contrary to previous reports, the only branch with a marked deletion-to-insertion mutational bias, resulting in substantial sequence shortening, is the rodent ancestral branch. It also appears that protein sequences tolerate deletions better than insertions, resulting in an increased deletion-to-insertion ratio for coding sequences in all branches. These results were published in the journal Genome Research. Further research will aim to identify more precisely when rodents experienced their greatest DNA loss.
It has been known for some years that short DNA insertions and deletions account for a significant amount of the variation in mammalian genomes and are likely to make an important contribution to species-specific traits. Their importance for medical genetics is highlighted by the fact that they have been implicated in a wide range of human diseases, the archetypal example being the phenylalanine deletion at position 508 in the CFTR protein that results in cystic fibrosis.
Laurie S, Toll-Riera M, Radó-Trilla N, Albà MM. Sequence shortening in the rodent ancestor. Genome Res. 2011 Nov 29