On December 12th, Vadim Gladyshev from Harvard Medical School (Boston, USA) gave a talk to a packed room at the PRBB, invited by Roderic Guigó from the CRG. Gladyshev investigates the molecular basis of natural variation in longevity and the biological mechanisms involved in aging.
The first part of the talk focused on the mechanisms of aging. Gladyshev's main question was: why and how do things go wrong with age?
He began by introducing the aging theories that have contributed most significantly to the debate in the research community. Some were built in the 1950s on 19th-century insights, whereas others are very recent. According to him, these theories are very different, each touching on a particular aspect of the aging process; within that context each has its merit, but all are incomplete.
He continued with his own view of aging. He suggested that the imperfectness of biological processes leads to an inevitable accumulation of damage – which he calls the deleteriome – causing aging. His research group is now characterizing the properties of this cumulative damage and its impact on the aging process. They also study cancer as a disease of aging.
While the mechanisms of aging and the process of lifespan control may seem highly related topics, he maintained that they are different areas. To explain the difference, he used the metaphor of a river, where the lifespan would be equivalent to the time needed for the water to flow from the mountain to the ocean. According to him, the route of the river can be changed to make the journey longer, just as the human lifespan can be extended. However, the fact that the river flows because of gravity cannot be changed, just as we cannot change the fact that aging occurs because of imperfectness. So the cause of aging is different from the determinants of longevity.
The second part of the talk was about the mechanisms of lifespan control, trying to answer two questions: why do cells and organisms live as long as they do, and how does Nature adjust lifespan?
Gladyshev's research team uses multiple approaches to address these questions. One methodology involves studying the genes of exceptionally long-lived mammals, such as the naked mole rat, Brandt's bat and the bowhead whale. Brandt's bat (Myotis brandtii) is found throughout most of Europe and parts of Asia, and it often lives more than 40 years.
The naked mole rat (Heterocephalus glaber) is a burrowing rodent found in East Africa, well adapted to its underground existence. It is characterized by small eyes, short thin legs, a hairless body (hence the common name) and wrinkled pink or yellowish skin, and its large front teeth are used to dig. This animal can live up to 31 years, the record for the longest-lived rodent.
Gladyshev’s group recently sequenced and analyzed the genomes of these animals, and they discovered some of the adaptations that contribute to their long lifespans. They also identified general gene expression and metabolic changes that associate with longer life.
In addition to the evolutionary study of long-lived animals, Gladyshev's lab focuses on cell types with different lifespans and on long-lived mouse models. They also perform cross-species analyses and cell culture-based profiling in order to find unique and common mechanisms of longevity. Longevity signatures (based on gene expression) identify candidate interventions for lifespan extension. Ultimately, the researchers would like to find treatments or other approaches to extend lifespan and diminish the consequences of age-related diseases.
At the end of the talk the audience showed great interest in Gladyshev's research, posing many questions about aging in yeast, epigenetic drift in aging and the relationship between lifespan and maturity. In a fruitful and interesting conversation, some in the audience also suggested research approaches such as studying aging in single cells or focusing on the physics of aging. We'll have to wait for Gladyshev's next talk to see whether some of these suggestions bore fruit!
A report by Mari Carmen Cebrián
On March 11th, Venkatesh Murthy from the Department of Molecular and Cellular Biology of Harvard University, US, gave a talk at the PRBB invited by the CRG. He presented his study "An olfactory cocktail party: figure-ground segregation of odorants in rodents", which made the cover of Nature Neuroscience in September 2014. After a brief introduction to the anatomy of the rodent olfactory system, he explained that many odours are complex mixtures: different chemicals combine so that we smell a particular object. His main question was: how well can a mouse pick out an individual odorant from a mixture?
They wanted the mice to pick out a single ingredient within an odour cocktail. To do that, mice were trained to recognize target odorants embedded in unpredictable and variable background mixtures. They used 14 different chemicals, so there were more than 16,000 possible mixtures. It was impossible for the mice to memorize the combinations; they had to recognize the single odorant. The test used was the go/no-go task, in which stimuli – in this case smells – are presented in a continuous stream and mice make a binary decision on each one. One outcome (the correct smell) requires the mice to make a motor response (go) in order to receive a reward, whereas the other requires them to withhold the response (no-go). Accuracy and reaction time are measured for each event.
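The arithmetic behind "more than 16,000" is easy to verify: with 14 chemicals, each either present or absent in a mixture, there are 2^14 = 16,384 possible combinations. A quick check:

```python
from math import comb

n_odorants = 14

# Each of the 14 chemicals is either present or absent in a mixture,
# so the number of possible mixtures is 2^14.
n_mixtures = 2 ** n_odorants

# Equivalently, summing the ways to choose k odorants for every k:
n_mixtures_check = sum(comb(n_odorants, k) for k in range(n_odorants + 1))

print(n_mixtures)  # 16384
```

Far too many for a mouse to memorize, which is exactly why the animals had to learn the target odorant itself.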
Mice could learn this task in a few days and performed it well, but performance dropped as the number of background odours increased. To understand why, the researchers first had to overcome a problem particular to olfaction.
While the relationship among different visual stimuli is relatively simple – differences in colour can be described as differences in the wavelength of light – there is no simple way to describe how odours relate to each other. Instead, the researchers described scents according to how they activate neurons in the brain. They used optical imaging and computational models to relate behavioural performance to the combinatorial neural representation of odorants in odour receptors.
Using fluorescent proteins, they created images that showed how each of 14 different odours stimulated neurons in the olfactory bulb. Each odour gave rise to a particular spatial pattern of neural responses. When the spatial pattern of the background odours overlapped with the target odour, the ability of mice to identify the target was diminished. Therefore, the difficulty of picking out a particular smell among a cocktail of other odours depends on how much the background interferes with the target smell.
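One way to picture this "overlap" computationally is to treat each odour's spatial activation map as a vector and compare the target pattern with the combined background. The sketch below uses invented random patterns and a simple cosine similarity, not the study's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

def overlap(target, background):
    """Cosine similarity between a target odour's spatial activation
    pattern and the summed background pattern (illustrative only)."""
    combined = background.sum(axis=0)
    return float(np.dot(target, combined) /
                 (np.linalg.norm(target) * np.linalg.norm(combined)))

n_glomeruli = 100                       # hypothetical size of the map
target = rng.random(n_glomeruli)

# A background whose active regions coincide with the target's...
similar_bg = np.stack([target + 0.1 * rng.random(n_glomeruli)])
# ...versus one activating a mostly disjoint set of glomeruli.
dissimilar_bg = rng.random((1, n_glomeruli)) * (target < 0.2)

print(overlap(target, similar_bg), overlap(target, dissimilar_bg))
```

On this toy measure, the overlapping background scores far higher than the disjoint one, mirroring the finding that mice struggle precisely when background and target patterns interfere.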
All in all, it was a very interactive session, with the public discussing several issues all through the talk, especially about methodology, so both the speaker and the public got new ideas!
A report by Mari Carmen Cebrián
On January 21, Judit Vall Castelló from the Centre for Research in Health and Economics (Pompeu Fabra University) gave a talk at the PRBB invited by CREAL. She talked about her latest study on the effect of business cycle conditions on children's weight.
She explained that the majority of the research connecting recessions with body weight has so far focused on adults or babies. In adults, most of the literature finds a link between a better economy and weight increase, which would suggest that recessions are "good" for adults' health. But is it the same for children?
Spain is among the ten OECD countries with the highest prevalence of childhood overweight – about 25% of children aged 5 to 17. Children's obesity rates are an important public policy issue, as a number of short-term adverse effects and risks have been associated with obesity in the early stages of life. For example, obese children have a greater risk of being bullied, and they are more likely to stay obese into adulthood, and therefore have a higher probability of suffering certain chronic diseases later in life.
The relationship between business cycle conditions and children's weight is not on the agenda of Spanish politicians, it is an unexplored topic in the scientific literature on children, and it has relevant consequences in the short and long term. These were the main motivations for Vall Castelló's research.
Her team used data from 8 waves (1987-2012) of the Spanish National Health Survey. The pooled sample contained 37,562 observations of children between the ages of 2 and 15 years old.
She explained to an attentive audience how their strategy takes advantage of the variation in the unemployment rate across regions and survey years to look for potential effects. They used the regional unemployment rate as a proxy for the business cycle phase at the local level.
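For readers curious about the method: the strategy she described is a two-way fixed-effects regression, where the outcome is regressed on the regional unemployment rate while region and survey-year dummies absorb permanent regional differences and nationwide trends. A minimal sketch on synthetic data (all numbers invented; this is not the paper's actual specification):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the pooled survey: region, survey wave, and a
# regional unemployment rate; the true effect on an obesity indicator
# (kept continuous here for simplicity) is set to -0.5.
n = 5000
region = rng.integers(0, 17, n)          # 17 regions (illustrative)
wave = rng.integers(0, 8, n)             # 8 survey waves
unemp = 0.10 + 0.02 * region + 0.01 * wave + rng.normal(0, 0.03, n)
obese = 0.25 - 0.5 * unemp + rng.normal(0, 0.1, n)

# Design matrix: unemployment rate plus region and survey-year fixed
# effects, mirroring the identification strategy described in the talk.
X = np.column_stack([
    unemp,
    np.eye(17)[region],                  # region dummies
    np.eye(8)[wave],                     # survey-year dummies
])
beta, *_ = np.linalg.lstsq(X, obese, rcond=None)
print(round(beta[0], 2))                 # estimated effect of unemployment
```

The estimate recovers a coefficient close to the true -0.5: only the within-region, within-year variation in unemployment identifies the effect, which is the point of using the business cycle as a natural experiment.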
The researchers found that an increase in the unemployment rate is associated with a lower incidence of obesity, especially for children under 6 and over 12 years old – similar to what was known in adults. A decrease in obesity is actually good news, but what happens at the other end of the distribution, to the children who were already underweight when the economy was good?
They found that negative economic conditions increased the prevalence of infant underweight, particularly for those under 6. So, an increase in the unemployment rate shifts the entire weight distribution to the left, decreasing the probability of suffering obesity and overweight but at the same time increasing the probability of being underweight for children under 6 and children over 12.
Vall Castelló was also interested in the possible channels through which the economy could be impacting infant underweight and obesity, such as changes in the nutritional composition of the children's diet or in the frequency of exercise.
Their results suggest that an increase in the local unemployment rate may be linked to a decrease in the probability of following a Mediterranean diet, considered one of the healthiest dietary options. More worryingly, this negative correlation was strongest for children under 6 years old.
Since compulsory education starts at age 6 in Spain – and, for most children, it includes lunch at school – this research seems to point out just how important it is to ensure that all children, regardless of their parents’ economic situation, have at least one balanced meal a day, and the key role schools play in this.
A report by Mari Carmen Cebrián
Keeping detailed records of your research and making the right decisions when analysing your data is easier said than done. Yet, despite their importance, researchers often receive no formal training in these and other issues key to scientific integrity.
The PRBB Good Scientific Practice Working Group – formed by members of all the centres at the park, including myself – ran a survey at the PRBB last year in which improper record keeping was the most relevant (mis)behaviour identified by scientists at the park, with over 40% of the 521 respondents saying they had "sometimes or often" noticed it. Surveys from around the world (e.g. Martinson BC et al. Scientists behaving badly. Nature 2005; 435:737-8) show this is not unusual – so the group decided to tackle this seemingly general problem in its first action campaign since it was created at the end of 2014.
A series of activities were organized for the week starting on the 25th of January.
The BIG QUIZ was a series of questions about data recording and management that invited scientists to discuss amongst themselves in the restaurant or the lifts, and to record their opinions via the Good Scientific Practice website.
The questions were posted via Twitter as well as in posters around the building during the whole week. More than 285 people visited the website during that time, with between 70 and 120 replies to each of the questions.
Without really aiming at answering those questions – rather, in any case, at opening new ones – three special workshops were held during the week. These were aimed at slightly different audiences, as a way of trying to cater for the great variety of science that takes place at the PRBB, and the different needs of each field.
“Keeping the data record straight in the lab” – aimed at people working in wet labs – had Lola Mulero from the CMRB explaining to the audience her centre's system for keeping track of the more than 100 experiments they deal with in parallel. This was followed by an open discussion on the dos and don'ts of a good lab notebook, and the seminar ended with a look at the future, with the last talk focusing on the CRG's pilot experiment with electronic notebooks such as OneNote.
“In silico data tsunami: will you survive?” was the suggestive title of the second workshop. It was led by Cedric Notredame from the CRG, who set the stage for the following discussions on reproducibility, traceability and sharing of computational data with a statement (“Science is about being able to measure something in a reproducible way”), a question (what to do with the growing amount of unused – but potentially useful for others – data we are producing?) and a reference to the recent #dataparasites controversy. Three short talks followed, about the importance of metadata, how to ensure your experiments are reproducible, and the specific challenges of creating software for clinical applications. At the end of the workshop, group discussions took place on several open questions, and ideas were put together with Ivo Gut, director of the CNAG, as the host.
The last workshop, “Managing data in human research”, gave some tips on how to create and maintain reliable and secure databases with human data and tackled the issues of privacy, anonymisation and data protection, before going on to a second, interactive part. This consisted of three case studies that made the audience think twice about the issues at hand when designing a study, or the huge problem they could face if their data manager left without warning – and just the tip of the iceberg of problems would be finding the final version of a document/analysis/experiment amongst files called “final”, “final_v2”, “superfinal”, “final_draft”, “final_MM”…
All three workshops were well attended, with over 60 people in each, and the feedback from the attendees was positive. You can see the presentations for all the seminars here.
The aim was achieved: raising awareness about the intricacies and difficulties of proper record keeping and data management, and discussing possible solutions with colleagues.
And the next challenge was set for the PRBB Good Scientific Practice working group. Watch this space for more upcoming activities!
Different targeted strategies have recently emerged in the field of proteomics that enable the detection and quantification of a predetermined subset of proteins with a high degree of sensitivity and reproducibility across many samples. Major advances have been achieved in the targeted proteomics workflow, including improved instrumentation, the generation of thousands of publicly available targeted assays, and the development of multiple computational tools for convenient data analysis.
Yet although targeted proteomics has been successfully applied in several research projects in molecular biology, systems biology and translational medicine, there is still a wide gap between theory and real application.
To address this mismatch and boost the applications of targeted proteomics, the PRBB held a new edition of the 6-day EMBO Practical Course “Targeted proteomics: Experimental design and data analysis”, co-organised by Eduard Sabidó, head of the CRG/UPF Proteomics Unit, and Ruedi Aebersold from the ETH in Zurich.
Twenty-five participants from 20 countries across all continents attended the course, which offered a combination of keynotes, practical demonstrations and tutorials to provide them with the knowledge and skills required to design and analyse their own targeted proteomics experiments using state-of-the-art methods.
Each day started with a keynote, open to all PRBB residents, by a renowned proteomics researcher who reviewed the latest achievements in the field and introduced the “topic of the day”. The students also attended several practical sessions to master the complete workflow associated with targeted proteomics, thus filling the gap between theory and the actual implementation of targeted proteomics experiments. During these sessions the students generated, refined and optimized targeted proteomics methods for a set of selected proteins of interest, and automated manual data analysis, reviewing concepts such as peak picking, quality assessment, and statistics for accurate protein quantitation. The programme was complemented with poster sessions and several social events to foster informal scientific discussion and to adapt the technology to each participant's particular interests.
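To give a flavour of what automating steps like peak picking and quantitation involves, here is a toy sketch for a single chromatographic transition: find the elution peak's apex and integrate its area above the baseline. All values are invented, and real courses use dedicated software rather than a few lines of Python:

```python
import numpy as np

# Toy chromatographic trace for one transition: a Gaussian elution peak
# on a flat background (all values invented for illustration).
rt = np.linspace(0, 10, 501)                     # retention time (min)
signal = 1000 * np.exp(-0.5 * ((rt - 5.0) / 0.2) ** 2) + 50

# Peak picking: locate the apex, then integrate the area above the
# baseline inside a fixed window around it.
apex = rt[np.argmax(signal)]
window = (rt > apex - 1.0) & (rt < apex + 1.0)
baseline = np.median(signal[~window])            # background level
dt = rt[1] - rt[0]
area = np.sum(signal[window] - baseline) * dt    # peak area

print(round(apex, 1), round(area))
```

Peak areas like this one, computed per transition and per sample, are the raw material that the statistical methods taught in the course then turn into protein quantities.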
The participants, ranging from PhD students to postdocs and senior researchers, gave very positive feedback on the course, with an overall score of 4.75 on a scale from 0 to 5 and very encouraging comments:
“Very motivating meeting with very interesting researchers from very different fields but same interest in proteomics, and high quality seminars and tutorials that made me want to continue research in proteomics”
“I feel confident to start my own experiments now”
“Informative and engaging with strong relevance to my research”
“The most useful course so far and also great to share knowledge and tips between us!”
“Excellent speakers, plenty usufull, practical informations, nice atmosphere it is shortest description of the course”
“A wonderful course instructed by cutting edge and innovative field leading scientists whom together have passed on the necessary tools to enable me to design and implement successful targeted proteomic workflows”
This course disseminated the know-how present at the CRG/UPF Proteomics Unit to a broader scientific community and strengthened the interdisciplinary exchange of knowledge and ideas. Thus, by transferring the expertise on experimental design and data analysis for targeted proteomics, the CRG/UPF Proteomics Unit aims to facilitate a wider and more routine application of targeted proteomics among non-proteomic laboratories worldwide.
For those who could not attend this year, the EMBO Practical Course “Targeted proteomics: Experimental design and data analysis” will take place again at the PRBB in November 2016. You will soon be able to register for the 2016 edition on this website – do not miss this opportunity!
You can see the videos of the 2015 edition here.
Course instructors that participated in the 2015 edition
Aebersold, Ruedi; ETH Zürich, Switzerland
Altelaar, Maarten; Utrecht University, The Netherlands
Borràs, Eva; CRG-UPF Proteomics Unit, Spain
Bensimon, Ariel; ETH Zürich, Switzerland
Chiva, Cristina; CRG-UPF Proteomics Unit, Spain
Gillet, Ludovic; ETH Zürich, Switzerland
Ludwig, Christina; ETH Zürich, Switzerland
MacCoss, Michael; University of Washington, USA
MacLean, Brendan; University of Washington, USA
Reiter, Lukas; Biognosys GmbH, Switzerland
Sabidó, Eduard; CRG-UPF Proteomics Unit, Spain
Vitek, Olga; Northeastern University, USA
On 21-23 September 2015 the International Conference on System Level Approaches to Neural Engineering (ICSLANE) will take place at the PRBB. Organised by the Neural Engineering Transformative Technologies (NETT) Consortium, the conference presents an outstanding list of invited speakers.
Neural Engineering is a young discipline that brings together engineering, physics, neuroscience and mathematics to design and develop brain-computer interface systems, cognitive computers and neural prosthetics. Neural Engineering Transformative Technologies (NETT) is a Europe-wide consortium of 18 universities, research institutes and private companies. The consortium has announced that registration for the event is now open, and presents a remarkable list of prominent invited speakers and keynote lecturers:
- Eugene Izhikevich – co-founder, chairman and CEO of Brain Corporation, a cutting-edge technology company located in San Diego, USA. The company's mission is to design, produce and bring to everyday life intelligent machines equipped with BrainOS, the first operating system in the world based on learning. He is also well known for his rich contributions to the mathematical theory of spiking neuron dynamics.
- Nikos Logothetis – a pioneer in linking fMRI measurements to neuronal activity, and director of the Department of Physiology of Cognitive Processes at the Max Planck Institute for Biological Cybernetics in Germany. His current research focuses on the neural mechanisms of perception and object recognition, and involves a wide variety of brain imaging techniques that allow him to gather and consolidate data from different domains of neuronal activity.
The aim of this conference is to bring together theoretical and experimental neuroscientists and roboticists to discuss the state of the art in the field of Neural Engineering. This three-day long event will also provide young researchers with the opportunity to present their work.
The full list of confirmed speakers, divided into five theme panels, is:
Brain-on-chip – engineering of neuronal circuits in-vitro with emphasis on microfluidics
Albert Folch – Department of Bioengineering, University of Washington, Seattle, WA, USA
Thibault Honegger – Laboratoire des Technologies de la Microelectronique, CNRS-CEA, Grenoble, France
Yoonkey Nam – Department for Bio and Brain Engineering, KAIST, South Korea
Optical neurotechnology Methodology – imaging and engineering techniques that allow recording of neuronal activity
Amanda Foust – Neural Coding Laboratory, Imperial College London, London, UK
Fritjof Helmchen – Brain Research Institute, University of Zürich, Zürich, Switzerland
Adam Packer – Department of Neuroscience, Physiology and Pharmacology, University College London, London, UK
Eftychios Pnevmatikakis – Department of Statistics and Center for Theoretical Neuroscience, Columbia University, New York, NY, USA
Neural Dynamics – mathematical description of neuronal activity
Viktor Jirsa – Institut de Neurosciences des Systèmes, Marseille, France
David Liley – Swinburne University of Technology, Melbourne, Australia
Benjamin Lindner – Bernstein Center for Computational Neuroscience, Berlin, Germany
John Terry – College of Engineering, Mathematics and Physical Sciences, University of Exeter, UK
Neural learning and control – motion planning, controlling and learning neuro-inspired techniques for robotics
Dario Farina – Bernstein Center for Computational Neuroscience, Göttingen, Germany
Sami Haddadin – Institute of Automatic Control, Hannover, Germany
Alexandre Pouget – CMU, Geneva, Switzerland
Gregor Schöner – Institut für Neuroinformatik, Ruhr-Universität Bochum, Germany
Reza Shadmehr – Johns Hopkins University, Baltimore, MD, USA
Patrick van der Smagt – BRML labs, TUM, Germany
Neural Coding – investigation of neuronal strategies for encoding information
Andre Bastos – The Picower Institute for Learning & Memory at MIT, Boston, MA, USA
Romain Brette – Institut de la Vision, Paris, France
Sophie Deneve – Laboratoire de Neurosciences Cognitives, LNC, Paris, France
Kenneth Harris – Institute of Neurology and the Department of Physiology, Pharmacology and Neuroscience, UCL, London, UK
Stefano Panzeri – Neural Computation Lab, IIT, Rovereto, Italy
Jan Schnupp – Auditory Neuroscience Group, Oxford, UK
We invite you to submit poster abstracts and apply for contributed talks. We have introduced a one-day participation option: you can now attend a single day of the conference for 80 euros. The cost of participation in the whole event is 200 euros (plus 50 euros for the optional conference dinner).
There is a 50% fee reduction for students presenting posters. Registration is available via the event's registration form, and all the necessary information is on the event's website. The registration deadline is June 20th, so hurry up!
According to Gene Myers, (near) perfect genome assembly is within reach for any organism of your choice.
Time will tell if he's right, but as an influential bioinformatician who has made key contributions to sequence comparison algorithms such as BLAST, whole-genome shotgun sequencing and genome assembly, one would think he knows what he's talking about!
In a talk at the PRBB auditorium today, he explained to a mixed audience of biologists and computer scientists how, after a few years dedicated to other issues (mostly image analysis), he is now coming back to sequencing with great excitement. The reason: the PacBio RS II. This sequencing device produces very long reads (of more than 10,000 bp!) and has a couple of other characteristics that can potentially make full assembly possible: although error rates are high (10-15%), the errors are random, unlike those of other technologies, which tend to always make the same errors. Sampling is also random. This randomness, together with the length of the reads, means that with enough sequencing coverage you can always recover the right sequence.
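Why random errors are so much more forgiving than systematic ones can be shown with a toy simulation: pile up enough reads with 15% random substitution errors and a simple per-position majority vote recovers the true sequence. This is only a sketch of the principle, not a real consensus algorithm:

```python
import random

random.seed(42)
BASES = "ACGT"

# An invented "true" sequence to recover.
true_seq = "".join(random.choice(BASES) for _ in range(200))

def noisy_read(seq, error_rate=0.15):
    """Simulate a read with random substitution errors, the error model
    that makes high-error long reads correctable by consensus."""
    return "".join(
        random.choice([b for b in BASES if b != base])
        if random.random() < error_rate else base
        for base in seq
    )

reads = [noisy_read(true_seq) for _ in range(30)]   # ~30x coverage

# Per-position majority vote across the pile-up of reads.
consensus = "".join(
    max(BASES, key=lambda b: sum(r[i] == b for r in reads))
    for i in range(len(true_seq))
)

print(consensus == true_seq)
```

Because each error lands on a random base, no wrong base ever outvotes the true one; with systematic errors, every read would agree on the same mistake and no amount of coverage would fix it.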
So now all we need, says Myers – apart from waiting for the cost of PacBio sequencing to go down, which he promised will happen soon (4x in one year) – is an efficient assembler. He talked about what he and some colleagues have been doing in that direction. The main element is a 'scrubber' to clean and edit the reads while removing as little data as possible. His point was that even though people have been focusing on the assembly, the real problem is the data: the contaminants, the chimeras, the excessive error rates… So he presented his personal 'data cleaner', DAscrub, soon to be released.
You can read more details about his recent work on this in his blog.
In the meantime, his advice to the world: stop the 10,000 genomes project right away and wait a couple of years to get better sequences!
The director of the International Agency for Research on Cancer (IARC), Christopher Wild, celebrated his birthday in style this year. On that special day, February 21, he gave a talk to a full auditorium at the PRBB, in what was the 3rd Global Health session co-organised by ISGlobal, CRESIB and CREAL. This was his second visit to the park, the first one being six years ago, when the building was pretty much empty. “It’s great to see how now everything is thriving!”, he said.
Wild started by pointing out the three aims of the IARC, the cancer agency of the World Health Organization (WHO): describing the occurrence of cancers, evaluating prevention strategies, and supporting implementation in clinical settings. He particularly highlighted low-income countries, where cancer cases are increasing exponentially: 60% of cancers worldwide now occur in developing countries.
The role of the IARC is crucial if we take into account that 30% of non-communicable diseases in 30-70 year-olds are due to cancer. And especially so if we look at the predictions based on demographics: according to the agency's director, by 2030 there might be 21.7 million cases of cancer, compared to 14 million in 2012.
“Cancer patterns are not static; as countries develop, so do they. We need to think forward”
As Wild pointed out, we cannot treat our way out of cancer; what we need is prevention. Half of all cancers could be prevented with the knowledge we currently have. And taking into account that most cancers have environmental or lifestyle causes, the potential to act is even greater. We have known for years that tobacco, infections, alcohol, lack of physical activity and obesity are factors that can increase the risk of cancer. And we know prevention works, as proven by the decrease in lung cancer cases in countries such as Finland or the UK after tobacco bans were introduced. But it takes a long time. Take the example of cervical cancer: screening and vaccination against the papilloma virus can decrease its incidence, but at least 20 to 30 years have to pass before we can see an effect at the population level! So, as Wild stressed, political vision and leadership are essential for prevention to work.
But if prevention is proving difficult, there is an area that is even more neglected: implementation. The speaker explained some successful cases. One involved aflatoxin, a known carcinogen produced by a fungus that grows on peanuts and corn. In 2005, an intervention in some 20 villages in Africa, where simple resources were provided to reduce exposure to the fungus (such as mats to reduce humidity), led to a 60% reduction in exposure. In turn, only 2% of the villagers had the toxin in their blood, as opposed to 20% of people in villages where no intervention had taken place.
But despite the success of this proof of concept, eight years later nothing has been implemented at a general level.
It is clear that there is a lot of work to do in this area. When asked how far the IARC should go in terms of pushing for this kind of action, the director was cautious. “Once you become an advocate, your science is under suspicion”, he declared. Sadly, this is the reality faced by some scientists working in the health sector, whose research results can be seen as the outcome of hidden interests if they are too active in pursuing policy changes. Should scientists then just publish their results, perhaps act as advisors on some committees, and then sit back patiently and wait until politicians decide it is time to take action? Hearing some of Chris Wild's arguments and examples, I personally think not. But his point about the dangers of advocacy was a good one. The debate is open…
A report by Maruxa Martinez, Scientific Editor at the PRBB
Wouter de Laat was one of the developers of 4C, a technique widely used to detect DNA interactions between different regions within or between chromosomes. He came from the Hubrecht Institute in Utrecht, The Netherlands, to give a talk at the PRBB today, invited by Guillaume Filion from the CRG.
The room was packed, with more than 70 researchers ready to learn about how much function actually lies within genome structure. We learned about 'gene kissing' – how genes that are functionally related but far apart on a chromosome come close together during transcription. Interestingly, when de Laat and colleagues inhibited transcription, these interactions (kisses) did not change. The same happened when transcription was overexpressed; and even when they forced mono-allelic expression (silencing just one of the two alleles of a specific gene) and checked by allele-specific 4C, they saw that the contacts with the rest of the chromosome still had not changed.
He used a good metaphor to explain how this 3D localisation in the nucleus takes place: each gene in a chromosome is like a “dog on a leash” – the gene goes wherever the chromosome goes (in space), just as the dog does with its owner, although once in that location the gene is 'free' to interact with whoever it wants – to choose which tree it wants to pee on, so to speak. However, there are some genes (mostly largely repetitive regions, such as rRNA genes or centromeres) that are able to decide their preferred location and actually drag the rest of the chromosome along: these would be the Pit Bulls amongst the genes.
He talked about much more that his lab is studying, mostly comparing the 3D spatial organization of differentiated cells versus embryonic cells (both ESCs and iPSCs), and showed that differentiated cells are more spatially defined than pluripotent cells.
De Laat talked about other uses of 4C, and amongst others he mentioned, at the end of his talk, how he is taking this technique further and using it in diagnostics. Indeed, he has co-founded a company called Cergentis that uses 4C to identify DNA regions which are rearranged.
A report by Maruxa Martinez, Scientific Editor at the PRBB
Shigeru Kondo (Institute of Frontier Biosciences, Osaka University, Japan) gave one of the last talks at the “Computational approaches to networks, cells and tissues” meeting that took place this week at the PRBB Auditorium.
Co-organised by James Sharpe (CRG) and Hernán López-Schier (HZM), the meeting was supported by QuanTissue, a collaborative European network to bridge the gap between the traditional developmental cell biology, biophysics and systems biology. And so it did!
Most of the nearly 200 participants were physicists or mathematicians, as one could tell from their presentations and posters, full of complicated mathematical formulae. But the subjects they studied were all related to the development of tissues and organs within organisms.
Kondo, for example, talked about the pigmentation pattern of zebrafish and how the Turing model could explain it.
Although his lab found there is no actual diffusion of any molecule, they showed that the interaction between the two types of pigment cells that define the skin patterns of the fish can still be explained by the Turing reaction-diffusion model. Melanophores, one of the cell types, extend long projections towards xanthophores, the other cell type, and the effect of this is mathematically equivalent to the classical Turing model. Interestingly, he showed how, by changing a single gene, his lab was able to generate fish with skin patterns resembling most of those present in nature, from leopards and jaguars to zebras. Hence the title of this post, with which he finished his talk: “If you want horses with spots or giraffes with stripes, I can make it!”.
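The heart of Turing's argument – that a uniform state can be stable on its own yet destabilized by diffusion at one particular wavelength, which sets the spacing of spots or stripes – is easy to reproduce numerically. The sketch below uses an invented activator-inhibitor Jacobian and diffusion coefficients, not Kondo's measured parameters:

```python
import numpy as np

# Invented linearized kinetics (Jacobian at the steady state) for an
# activator-inhibitor pair; not Kondo's actual parameters.
J = np.array([[1.0, -1.0],
              [2.0, -1.5]])
D = np.diag([1.0, 20.0])      # the inhibitor diffuses much faster

def growth_rate(q):
    """Largest real part of the eigenvalues of J - q^2 D: the growth
    rate of a spatial perturbation with wavenumber q."""
    return np.linalg.eigvals(J - q**2 * D).real.max()

qs = np.linspace(0, 1.5, 301)
rates = np.array([growth_rate(q) for q in qs])
q_star = qs[rates.argmax()]   # fastest-growing wavenumber

# The uniform state is stable (growth < 0 at q = 0), yet perturbations
# at a finite wavelength grow: Turing's diffusion-driven instability.
print(growth_rate(0.0), q_star, rates.max())
```

The fastest-growing wavenumber q* sets the pattern's characteristic spacing; Kondo's insight was that cell projections of a fixed length can play the same mathematical role as the fast-diffusing inhibitor.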
The meeting is still going on – another two hours of good science if you rush!
A report by Maruxa Martinez, Scientific Editor at the PRBB