Number of slides: 35
Machine Learning in Bioinformatics

Simon Colton
The Computational Bioinformatics Laboratory
Talk Overview

- Our research group
  – Aims, people, publications
- Machine learning
  – A balancing act
- Bioinformatics
  – Holy grails
- Our bioinformatics research projects
  – From small to large
- A future direction
  – Integration of reasoning techniques
Computational Bioinformatics Laboratory

- Our aim is to:
  – Study the theory, implementation and application of computational techniques for problems in biology and medicine
- Our emphasis is on:
  – Machine learning representations, algorithms and applications
- Our favourite techniques are:
  – ILP, SLPs, ATF, ATP, CSP, GAs, SVMs
  – Kernel methods, Bayes nets, Action Languages
- The (major) research tools we've produced are:
  – Progol, HR, MetaLog (in production)
The Research Group Members

- Hiroaki Watanabe (RA, BBSRC)
- Alireza Tamaddoni-Nezhad (RA, DTI)
- Stephen Muggleton (Professor)
- Ali Hafiz (PhD)
- Huma Lodhi (RA, DTI)
- Simon Colton (Lecturer)
- Jung-Wook Bang (RA, DTI)
- (Nicos Angeloupolos, now in York) (RA, BBSRC)

Room 407
Some External Collaborators

- Mike Sternberg (Biochemistry, Imperial)
- Jeremy Nicholson (Biomedical Sciences, Imperial)
- Steve Oliver (Biology, Manchester)
- Ross King (Computing, Aberystwyth)
- Doug Kell (Chemistry, Manchester)
- Chris Rawlings (Oxagen)
- Charlie Hodgman (GSK)
- Alan Bundy (Informatics, Edinburgh)
- Toby Walsh (Cork Constraint Computation Centre)
Some Departmental Collaborators

- Krysia Broda, Alessandra Russo, Oliver Ray
  – Aspects of ILP and ALP
- Marek Sergot
  – Action Languages
- Tony Kakas (Visiting Professor, Cyprus)
  – Abductive Logic Programming
Machine Learning Overview

- Ultimately about writing programs which improve with experience
  – Experience through data
  – Experience through knowledge
  – Experience through experimentation (active)
- Some common tasks:
  – Concept learning for prediction
  – Clustering
  – Association rule mining
Maintaining a Balance

- Predictive tasks (supervised learning): know what you're looking for
- Descriptive tasks (unsupervised learning): don't know what you're looking for
  – Or don't even know you're looking
A Partial Characterisation of Learning Tasks

- Concept learning
- Outlier/anomaly detection
- Clustering
- Concept formation
- Conjecture making
- Puzzle generation
- Theory formation
Maintaining a Balance in Predictive/Descriptive Tasks

- Predictive tasks
  – From accuracy to understanding
  – Need to show statistical significance
    • But the hypotheses generated often need to be understandable
  – The difference between the stock market and biology
- Descriptive tasks
  – From pebbles to pearls
  – Lots of rubbish produced
    • Cannot rely on statistical significance
  – Have to worry about notions of interestingness
    • And provide tools to extract useful information from the output
Maintaining a Balance in Scientific Discovery Tasks

- Machine learning researchers
  – Are generally not also domain scientists
- Extremely important to collaborate
  – To provide interesting projects
    • Remembering that we are scientists, not IT consultants
  – To gain materials
    • Data, background knowledge, heuristics, …
  – To assess the value of the output
Inductive Logic Programming

- Concept/rule learning technique (usually)
  – Hypotheses represented as logic programs
- Search for LPs
  – From general to specific, or vice versa
    • One method is inverse entailment
  – Use measures to guide the search
    • Predictive accuracy and compression (information theory)
  – Search performed within a language bias
- Produces good accuracy and understanding
  – Logic programs are easier to decipher than ANNs
- Our implementation: Progol (and others)
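The general-to-specific search on this slide can be sketched in miniature. The toy below is not Progol's inverse-entailment procedure: it is a greedy propositional rule learner in which attribute tests stand in for first-order literals and a crude coverage difference stands in for Progol's compression measure; all data and attribute names are invented for illustration.

```python
# A minimal sketch of top-down (general-to-specific) rule search, in the
# spirit of ILP but propositional: a rule is a set of attribute tests,
# and we greedily specialise from the empty (most general) rule.

def covers(rule, example):
    """A rule covers an example when every (attribute, value) test holds."""
    return all(example.get(attr) == val for attr, val in rule)

def score(rule, positives, negatives):
    """Positives covered minus negatives covered -- a crude stand-in
    for Progol's compression-based evaluation."""
    p = sum(covers(rule, e) for e in positives)
    n = sum(covers(rule, e) for e in negatives)
    return p - n

def learn_rule(positives, negatives, candidate_tests):
    """Greedy specialisation: repeatedly add the single test that
    most improves the score, stopping when nothing helps."""
    rule = frozenset()
    best = score(rule, positives, negatives)
    improved = True
    while improved:
        improved = False
        for test in candidate_tests:
            cand = rule | {test}
            s = score(cand, positives, negatives)
            if s > best:
                rule, best, improved = cand, s, True
    return rule
```

A real ILP system searches a lattice of first-order clauses bounded by a language bias; this sketch only shows the shape of the general-to-specific traversal.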
Example Learned LP

fold('Four-helical up-and-down bundle', P) :-
    helix(P, H1),
    length(H1, hi),
    position(P, H1, Pos),
    interval(1 =< Pos =< 3),
    adjacent(P, H1, H2),
    helix(P, H2).

- Predicting protein folds from helices
Stochastic Logic Programs

- Generalisation of HMMs
- Probabilistic logic programs
  – More expressive language than LPs
  – Quantitative rather than qualitative
    • Express arbitrary intervals over probability distributions
- Issues in learning SLPs
  – Structure estimation
  – Parameter estimation
- Applications
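The two learning issues above can be illustrated with a toy. Below, a single predicate has probability-labelled clauses (a degenerate SLP that generates strings of a's of geometric length), and the stop-clause parameter is re-estimated by counting clause uses across sampled derivations. This is an invented illustration, not the failure-adjusted maximisation algorithm used to fit real SLPs.

```python
import random

# A toy stochastic logic program: each predicate has clauses whose
# probabilities sum to one, and sampling follows a derivation by
# choosing clauses according to those probabilities.
SLP = {
    "s": [
        (0.7, "extend"),  # s -> a . s
        (0.3, "stop"),    # s -> []
    ]
}

def sample_s(program, rng):
    """Sample one derivation of s, returning the generated string."""
    out = []
    while True:
        r = rng.random()
        acc = 0.0
        choice = program["s"][-1][1]  # guard against float rounding
        for prob, clause in program["s"]:
            acc += prob
            if r < acc:
                choice = clause
                break
        if choice == "stop":
            return out
        out.append("a")

def estimate_stop_prob(samples):
    """Maximum-likelihood re-estimation of the stop-clause parameter:
    each sample uses 'stop' once and 'extend' len(sample) times."""
    stops = len(samples)
    extends = sum(len(s) for s in samples)
    return stops / (stops + extends)
```

Structure estimation (choosing the clauses themselves) is the harder problem; this sketch fixes the structure and only estimates a parameter.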
Automated Theory Formation

- Descriptive learning technique
  – Which can also be used for prediction tasks
- Cycle of activity
  – Form concepts, make hypotheses, explain hypotheses, evaluate concepts, start again, …
  – 15 production rules for concepts
  – 7 methods to discover and extract conjectures
  – Uses third-party software to prove/disprove (maths)
  – 25 heuristic measures of interestingness
- Project: see whether this works in bioinformatics
- Our implementation: HR
Other Machine Learning Methods Used in Our Group

- Genetic algorithms
  – To perform ILP search (Alireza)
- Bayes nets
  – Introduction of hidden nodes (Philip)
- Kernel methods
  – Relational kernels for SVMs and regression (Huma)
- Action Languages
  – Stochastic (re)actions (Hiroaki)
Bioinformatics Overview

- "Bioinformatics is the study of information content and information flow in biological systems and processes" (Michael Liebman)
  – Not just the storage and analysis of huge DNA sequences
- "Bioinformaticians have to be a jack of all trades and a master of one" (Charlie Hodgman, GSK)
- Highly collaborative
From Sequence to Structure

DNA:     attcgatcgatcaggcgcgctacgagcggcgaggacctcatcatcgatcag…
Protein: MRPQAPGSLVDPNEDELRMAPWYWGRISREEAKSILHGKPDGSFLVRDALSMKGEYTLTLMKDGCEKLIKICHMDRKYGFIETDLFNSVVEMINYYKENSLSMYNKTLDITLSNPIVRAREDEESQPHGDLCLLSNEFIRTCQLLQNLENKRNSFNAIREELQEKKLHQSVFGNTEKIFRNQIKLNESFMKAPADA…

- Is there a computer program…?
Holy Grail Number One

- From protein sequence to protein function
- HGP data needs to be interpreted
  – The genome is split into genes, each of which codes for a protein
  – The biological function of a protein is dictated by its structure
- The structure of many proteins has already been determined
  – By X-ray crystallography
- Best idea so far: given a new gene sequence
  – Find the most similar sequence with known structure
    • And look at the structure/function of that protein
- Other alternatives
  – Use ML techniques to predict where secondary structure occurs
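The "best idea so far" above, assigning a new sequence the structure of its most similar known sequence, can be sketched with a crude k-mer similarity standing in for a real alignment tool such as BLAST. The sequences and fold labels below are invented toy data.

```python
# Nearest-neighbour fold assignment via alignment-free k-mer overlap.

def kmers(seq, k=3):
    """The set of length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a, b, k=3):
    """Jaccard overlap of k-mer sets: a rough proxy for sequence
    similarity (a real pipeline would use an alignment score)."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)

def predict_fold(query, known):
    """known maps sequence -> fold label; return the fold of the
    most similar known sequence."""
    best = max(known, key=lambda s: similarity(query, s))
    return known[best]
```

Usage: with a database of solved structures, `predict_fold(new_sequence, database)` transfers the annotation of the nearest neighbour, which is the core of homology-based function prediction.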
Holy Grail Number Two

- Drug companies lose millions
  – Developing drugs which turn out to be toxic
- Predictive Toxicology
  – Determine in advance which compounds will be toxic
- Approach 1: mapping molecules to toxicity
  – Using ML and statistical techniques
- Approach 2:
  – Producing metabolic explanations of toxic effects
Other Aims of Bioinformatics

- Organisation of data
  – Cross-referencing
  – Data integration is a massive problem
- Analysing data from
  – High-throughput methods for gene expression
  – Ask Yike about this!
- Produce ontologies
  – And get everyone to use them?
Some Current Bioinformatics Projects

- SGC
  – The Substructure Server
- SGC and SHM
  – Discovery in medical ontologies
- SHM
  – Studying biochemical networks (£400k, BBSRC)
  – Closed loop learning (£200k, EPSRC)
  – The Metalog project (£1.1 million, DTI)
A Substructure Server

- Lesson from Automated Theorem Proving
  – The best (most complex) methods are not the most used
    • Other considerations: ease of use, stability, simplicity, e.g., Otter
- Aim: provide a simple predictive toxicology program
  – Via a server with a very simple interface
- Sub-projects
  – Find substructures in many positives, few negatives: Colton
    • Simple Prolog program, writing Java version, use ILP??
  – Put the program on a server: Anandathiyagar (MSc)
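The "find substructures in many positives, few negatives" sub-project can be sketched as follows, with substrings of SMILES-like strings standing in for real chemical substructure matching. The molecule strings are invented toy data, not the project's actual datasets.

```python
# Rank candidate substructures of the positive molecules by how many
# positives versus negatives contain them -- a simple stand-in for
# the Prolog substructure finder described on the slide.

def substructures(s, min_len=2, max_len=4):
    """All substrings of s between min_len and max_len characters."""
    return {s[i:i + n]
            for n in range(min_len, max_len + 1)
            for i in range(len(s) - n + 1)}

def rank_substructures(positives, negatives):
    """Score each candidate by (positives containing it) minus
    (negatives containing it), best first."""
    cands = set().union(*(substructures(p) for p in positives))
    scored = [(sum(c in p for p in positives)
               - sum(c in n for n in negatives), c)
              for c in cands]
    return sorted(scored, reverse=True)
```

A server interface would simply expose the top-ranked candidates as the substructures most associated with (for instance) toxicity.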
The Substructure Server
Using Medical Ontologies

- Use ontology and ML for database integration
  – Muggleton and Tamaddoni-Nezhad
  – Bridge between two disparate databases
    • LIGAND (biochemical reactions)
    • Enzyme classification system (EC) = ontology
- Automated ontology maintenance
  – Colton and Traganidas (MSc, last year)
  – Gene Ontology (big project)
  – Use data to find links between GO terms
    • Equivalence and implication finding using HR
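The data-driven link finding above can be sketched directly: if every gene annotated with term A is also annotated with term B, conjecture the implication A → B; if the annotated gene sets coincide, conjecture equivalence. The terms and gene identifiers below are invented toy data, not real GO annotations.

```python
# Conjecture equivalence and implication links between ontology terms
# from their annotation data, in the spirit of HR's conjecture making.

def find_links(annotations):
    """annotations maps term -> set of annotated genes; returns
    conjectured ('equiv', a, b) and ('implies', a, b) links."""
    links = []
    terms = sorted(annotations)
    for a in terms:
        for b in terms:
            if a == b:
                continue
            ga, gb = annotations[a], annotations[b]
            if ga == gb and a < b:
                links.append(("equiv", a, b))    # identical gene sets
            elif ga < gb:
                links.append(("implies", a, b))  # strict subset: a -> b
    return links
```

On real data such conjectures are only statistical hints, to be filtered by measures of interestingness and checked by a curator.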
Gene Ontology Discovery 55%
Studying Biochemical Networks

- Use SLPs to find mappings between genomes
  – Map the function of pairs of homologous proteins
    • E.g., mouse and human
  – Homology is probabilistic
- Developed SLP learning algorithms
- Initial results from applying them to biological networks
Closed Loop Machine Learning

- Active learning
  – An information-theoretic algorithm designs and chooses the most informative and lowest-cost experiments to carry out
    • Implemented in the ASE-Progol system
  – Learning generates hypotheses
  – Being studied by Ali Hafiz (PhD)
- Idea: use machine learning to guide experimentation
  – Using a real robot geneticist in a cyclic process
- Aims of current project: determine the function of genes
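The experiment-selection step above can be sketched as follows: given a pool of rival hypotheses, pick the experiment whose predicted outcomes best split the pool, per unit cost. This is a simplified stand-in for the information-theoretic criterion in ASE-Progol; the hypotheses, experiments and costs below are invented.

```python
import math

def entropy(n):
    """Entropy (in bits) of a uniform distribution over n hypotheses."""
    return math.log2(n) if n > 0 else 0.0

def expected_gain(hypotheses, experiment):
    """Group hypotheses by the outcome each predicts for the
    experiment, and compute the expected entropy reduction."""
    groups = {}
    for h in hypotheses:
        groups.setdefault(h(experiment), []).append(h)
    before = entropy(len(hypotheses))
    after = sum(len(g) / len(hypotheses) * entropy(len(g))
                for g in groups.values())
    return before - after

def choose_experiment(hypotheses, experiments, cost):
    """Return the experiment maximising expected gain per unit cost."""
    return max(experiments,
               key=lambda e: expected_gain(hypotheses, e) / cost[e])
```

In the closed loop, the chosen experiment is run (by the robot), hypotheses inconsistent with the observed outcome are discarded, and the cycle repeats.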
APRIL 2

- Applications of Probabilistic Relational Induction in Logic
- Aim: develop representations and learning algorithms for probabilistic logics
- Applications: bioinformatics
  – Metabolic networks
  – Phylogenetics
- 2 RAs at Imperial (with Mike Sternberg)
  – Starting in January
The Metalog Project Overview

- Aim:
  – Modelling disease pathways and predicting toxicity
  – Gap filling: existing representations are correct but incomplete
  – Predict where the toxin is acting (focus)
- Multi-layered problem representation
  – Meta-network level (Bayes nets): Philip
  – Network level (SLPs): Huma
  – Biochemical reaction level (LPs): Alireza
  – Problog lingua franca developed
    • To represent learned knowledge
- NMR data from metabonomics, from Jeremy Nicholson
The Metalog Project Progress

- Year 1 achievements (all objectives achieved)
- Function predictions from LIGAND
- Mapping between KEGG and metabolic networks
- Initial Bayes-net model
  – Has drawn much interest from experts
    • Agrees with KEGG, and disagrees in interesting ways
    • Interactions between metabolites which are not …
Future Directions for Machine Learning in Bioinformatics

- In-silico modelling of complete organisms
- Representation and reasoning at all levels
  – From the patient to the molecule
- Probabilistic models
  – For more complex biological processes
Biochemical Pathways

- 1/120th of a biochemical network
Future Directions for My Research

- Descriptive induction meets biology data
- Most ML bioinformatics projects are predictive
  – Very carefully compressed notions of interestingness
    • Into a single measure: predictive accuracy
    • Domain scientist not bombarded with a lot of information
    • A correctly answered question can be highly revealing
- Can we push this envelope slightly?
  – Use descriptive induction (WARMR, CLAUDIEN, HR)
    • To tell biologists something they weren't expecting about the data they have collated
More Future Directions

- Put "Automated Reasoning" back together again
  – Essential for scientific discovery
- ML, ATP, CSP, etc., all work well individually
  – Surely they work better in combination…
- Improve ATP to prove a different theorem?
  – Make it flexible using CSP and ATP
- Improve ML by rationalising input