
“Personalised medicine and Big Pharma need bioinformatics”


David Searls retired three years ago from his position as Senior Vice President of Bioinformatics at GlaxoSmithKline. Since then, this computer scientist, who spent 16 years in academia and 19 in industry, has returned to his theoretical work on the linguistic analysis of biological sequences. He was invited to the PRBB and talked to us about drugs and computers.

This interview was published in Ellipse, the monthly magazine of the PRBB.


What part does bioinformatics have in drug development? 

It is an essential step along the way. Since the human genome project and the advent of high-throughput technologies, not only drug discovery but all of biology has become an information science. It is very data-intensive, and you need computers to analyse the data.

How is the industry crisis affecting the pharmaceutical companies? 

The industry is indeed in great difficulty at the moment: costs are rising while the number of new drugs is going down. One way the large pharmaceutical companies are adapting is by dropping some of their therapeutic areas. Fundamentally, R&D is becoming smaller, through company mergers and cost cutting. They are also relying more on in-licensing, i.e. buying drugs at different stages of development from smaller biotech companies or from universities. This way the ideas, the basic science and the early testing are done by smaller companies, while Big Pharma does only the last stage, the clinical trials, which is what it is best at. Basically, a more distributed economic model is being created.

Can bioinformatics help? 

Yes, it can. One of the reasons why the cost of developing drugs is so high is that many of the molecules studied as potential drugs turn out not to be effective, or to have undesired side effects. Making better use of the information that predicts interactions between molecules helps catch these failures early, since side effects are usually due to interactions between the drug and proteins other than the intended target.
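As a rough illustration of the kind of prediction Searls describes, one common in silico heuristic is to compare a candidate compound to the known ligands of other proteins: high chemical similarity to an off-target's ligands flags a possible unwanted interaction. The sketch below uses toy bit-set fingerprints and invented protein names; it is not GSK's method, just a minimal example of similarity-based off-target screening.

```python
# Minimal sketch: flag potential off-target interactions by comparing a
# candidate compound's fingerprint to fingerprints of known ligands of
# other proteins. Fingerprints here are toy bit-sets; in practice they
# would come from a cheminformatics toolkit.

def tanimoto(fp_a: set[int], fp_b: set[int]) -> float:
    """Tanimoto similarity between two bit-set fingerprints."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Hypothetical known ligands, grouped by the protein they bind.
known_ligands = {
    "target_protein":   [{1, 4, 9, 17, 23}],
    "offtarget_hERG":   [{1, 4, 9, 16, 23}, {2, 4, 9, 23}],
    "offtarget_CYP3A4": [{5, 7, 11, 30}],
}

def predicted_interactions(candidate_fp: set[int], threshold: float = 0.6):
    """Return proteins whose known ligands resemble the candidate compound."""
    hits = []
    for protein, ligands in known_ligands.items():
        best = max(tanimoto(candidate_fp, fp) for fp in ligands)
        if best >= threshold:
            hits.append((protein, round(best, 2)))
    return sorted(hits, key=lambda x: -x[1])

candidate = {1, 4, 9, 23, 31}
print(predicted_interactions(candidate))
# A hit on anything other than the intended target suggests a possible
# side effect worth checking before costly development stages.
```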

Another way bioinformatics can help is drug repositioning: taking a drug that has already been approved for one disease and looking for other uses for it. Bioinformatics helps us find other protein interactions of a specific drug target, predict which processes that target might be involved in, and anticipate potential effects. The advantage is that we already have safety data for the drug, and obtaining it is one of the most costly steps of development.
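The repositioning reasoning above can be pictured as a walk over a drug-target interaction network: start from the drug's known target, follow protein-protein interactions, and collect the biological processes the partners are annotated with. The toy sketch below assumes invented identifiers and a hand-made network, purely for illustration.

```python
# Toy sketch of repositioning reasoning: start from a drug's known target,
# follow protein-protein interactions, and collect the biological processes
# those partners are annotated with. All identifiers below are hypothetical.

drug_target = {"drug_A": "protein_X"}

protein_interactions = {
    "protein_X": ["protein_Y", "protein_Z"],
    "protein_Y": ["protein_W"],
}

process_annotations = {
    "protein_Y": ["inflammation"],
    "protein_Z": ["angiogenesis"],
    "protein_W": ["cell cycle"],
}

def candidate_processes(drug: str, depth: int = 2) -> set[str]:
    """Processes reachable from the drug's target within `depth` interaction steps."""
    frontier = {drug_target[drug]}
    seen, processes = set(frontier), set()
    for _ in range(depth):
        nxt = set()
        for protein in frontier:
            for partner in protein_interactions.get(protein, []):
                if partner not in seen:
                    seen.add(partner)
                    nxt.add(partner)
                    processes.update(process_annotations.get(partner, []))
        frontier = nxt
    return processes

print(candidate_processes("drug_A"))
# Processes outside the drug's original indication are leads for repositioning,
# to be validated experimentally and clinically.
```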

What will be the role of bioinformatics in personalised medicine? 

It is already helping to classify diseases through transcriptomic analysis, i.e. looking at which genes are activated in each tissue. This allows us to find subtypes of an apparently homogeneous tumour that are susceptible to different drugs; we can then check a patient's expression pattern to decide which treatment is best for them. Also, personalised medicine won't be one drug for one individual, but a combination of drugs for each individual. Again, bioinformatics will help predict which combinations will be most useful.
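A minimal sketch of what expression-based subtyping can look like, assuming synthetic data and a generic clustering step (scikit-learn's KMeans) rather than any specific published pipeline: profiles are clustered into candidate subtypes, and a new patient's profile is assigned to one of them.

```python
# Sketch: cluster tumour expression profiles to reveal candidate subtypes,
# then read off which cluster a new patient falls into. Data is synthetic;
# a real analysis would use normalised transcriptomics for thousands of genes.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# 20 "patients" x 5 "genes": two synthetic expression patterns.
subtype_a = rng.normal(loc=2.0, scale=0.3, size=(10, 5))
subtype_b = rng.normal(loc=5.0, scale=0.3, size=(10, 5))
expression = np.vstack([subtype_a, subtype_b])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(expression)
print("cluster labels:", model.labels_)

# Assign a new patient's profile to a subtype, which would then be mapped
# to the drug (or drug combination) that works best for that subtype.
new_patient = rng.normal(loc=5.0, scale=0.3, size=(1, 5))
print("predicted subtype:", model.predict(new_patient)[0])
```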

“Habitual competitors are now working together to get better toxicity predictions”

Ferran Sanz (IMIM-UPF) tells us about the eTOX project in a recent interview published in El·lipse, the monthly PRBB publication.


The electronic toxicology project (eTOX) started in January 2010 as one of the projects funded in the first call of the IMI (Innovative Medicines Initiative), a unique public-private partnership between the European Community and the European Federation of Pharmaceutical Industries and Associations (EFPIA). Ferran Sanz, director of the Research Programme on Biomedical Informatics (GRIB, IMIM-UPF) and academic coordinator of eTOX, evaluates the project’s achievements so far as very positive.

What exactly is eTOX about?
All IMI projects, including eTOX, bring together European pharmaceutical companies and academic groups to address scientific challenges that are a priority for the pharmaceutical industry. In the case of eTOX, the aim is to facilitate the early prediction of drug toxicity through computational models. It will last a total of five years.

How is that done?
The first step is an intensive data collection exercise in which structural and toxicological information on tested compounds is gathered from the archives of the participating pharmaceutical companies. This is the first time they have agreed to share such sensitive information, which originates from animal experiments. On the basis of this shared information, computer models can be built that allow better in silico prediction of the toxicity of newly designed drugs. The longer-term aim is to develop new drugs more efficiently, with less toxicity and in a shorter time. This will reduce not only the cost of drug development but also the amount of animal experimentation.
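In the simplest terms, such a computational model can be a classifier trained on numerical descriptors of previously tested compounds and their toxicity outcomes, then applied in silico to a new structure. The sketch below uses synthetic descriptors and a toy labelling rule; it is not eTOX data or the eTOX system, just an illustration of the general idea.

```python
# Minimal QSAR-style sketch: learn a toxicity label from simple numerical
# descriptors of compounds already tested in vivo, then score a new compound
# in silico. Descriptors and labels below are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Each row: e.g. [molecular weight, logP, polar surface area] (toy values).
descriptors = rng.uniform(low=[150, -1, 20], high=[600, 6, 140], size=(200, 3))
# Toy rule standing in for historical animal-study outcomes: large,
# lipophilic compounds are labelled "toxic" (1).
toxic = ((descriptors[:, 0] > 450) & (descriptors[:, 1] > 3)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(descriptors, toxic)

# Score a newly designed compound before any animal experiment is run.
new_compound = np.array([[520.0, 4.2, 60.0]])
print("predicted toxicity risk:", model.predict_proba(new_compound)[0, 1])
```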

Who are the participating partners?
Out of the 25 partners from different European countries, 13 are pharmaceutical companies, five are small and medium-sized enterprises, and seven are academic institutions. Originally there were 11 pharmaceutical companies, but two more asked to join and contribute after the project had started. The fact that the companies' participation entails a substantial financial and manpower contribution on their part, without receiving any public funding, shows their interest in the topic.

What have been the achievements in the first year and a half of the project?
The highlight has been the positive attitude of all the partners, which has resulted in very productive teamwork. Even though some partners are usually competitors in the search for new pharmacological targets and drugs, within the eTOX project they collaborate enthusiastically towards a common goal: finding ways to avoid toxicity and develop safer new drugs.

We have already created the database for the shared information, as well as procedures for semiautomatic data extraction from the toxicology reports. We have also defined the computational architecture of the predictive system and developed its first modules.
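"Semiautomatic data extraction" generally means pulling structured fields out of free-text study reports for an expert to verify before they enter the database. A toy illustration, with an invented report format and field names (not the eTOX procedures):

```python
# Toy illustration of semiautomatic data extraction: pull structured fields
# out of a free-text study report with simple patterns, for a curator to review.
# The report text and field names are invented, not eTOX report formats.
import re

report = """
Study: 28-day repeat-dose, rat
Compound: EX-0042
NOAEL: 30 mg/kg/day
Finding: hepatocellular hypertrophy at 100 mg/kg/day
"""

patterns = {
    "compound": r"Compound:\s*(\S+)",
    "noael_mg_per_kg": r"NOAEL:\s*([\d.]+)\s*mg/kg",
    "finding": r"Finding:\s*(.+)",
}

record = {
    field: (m.group(1).strip() if (m := re.search(pattern, report)) else None)
    for field, pattern in patterns.items()
}
print(record)
# Extracted records would be checked by an expert before entering the shared
# database ("semiautomatic"), then used to train the predictive models.
```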

What steps will be taken in the future?
Now that the database infrastructure has been set up, it is being fed with the information extracted from the archives of the pharmaceutical companies. The predictive system is being progressively trained on the data accumulated in the database at each stage. It will then be tested within each company on internal data not included in the eTOX database. As experience builds up and problems emerge, the system will be improved and further developed.
