Anita C. Faul, Author of A Concise Introduction to Machine Learning
FEATURED AUTHOR

Anita C Faul

Data Scientist
British Antarctic Survey

I did my undergraduate studies in Germany and at Cambridge, UK, followed by a PhD on the Faul-Powell Algorithm for Radial Basis Function Interpolation under the supervision of Mike Powell. I then worked on the Relevance Vector Machine with Mike Tipping at Microsoft Research Cambridge. Ten years in industry followed. In my books "A Concise Introduction to Numerical Analysis" and "A Concise Introduction to Machine Learning" I bring out the underlying mathematics from first principles.

Biography

Anita Faul came to Cambridge after two years of study in Germany. She did Part II and Part III Mathematics at Churchill College, Cambridge. Since these amount to only two years, and three years are necessary for a first degree, she does not hold one. She went on to a PhD on the Faul-Powell Algorithm for Radial Basis Function Interpolation under the supervision of Professor Mike Powell, and then worked on the Relevance Vector Machine with Mike Tipping at Microsoft Research Cambridge. Ten years in industry followed, during which she worked on algorithms for mobile phone networks, image processing and data visualization. After six years as a Teaching Associate at the University of Cambridge, which included being a Fellow, Director of Studies for Mathematics and Graduate Tutor at Selwyn College, she now works as a Data Scientist at the British Antarctic Survey. In her books "A Concise Introduction to Numerical Analysis" and "A Concise Introduction to Machine Learning" she brings out the underlying mathematics from first principles. A Moodle site accompanying her books is at acfaul.gnomio.com; please contact her if you require access.

Education

    PhD, University of Cambridge, UK, 2000

Areas of Research / Professional Expertise

    Numerical Analysis, Machine Learning

Books

Featured Title

A Concise Introduction to Machine Learning, 1st Edition

Articles

15th IEEE International Conference on Machine Learning and Applications (ICMLA)

Relevance Vector Machines with Uncertainty Measure for Seismic Bayesian Compressive Sensing and Survey Design

Published: Dec 18, 2016 by 15th IEEE International Conference on Machine Learning and Applications (ICMLA)
Authors: Georgios Pilikos, A.C. Faul

Data for Policy 2016, Frontiers of Data Science for Government

The model is simple, until proven otherwise - how to cope in an ever changing world

Published: Sep 01, 2016 by Data for Policy 2016, Frontiers of Data Science for Government
Authors: A.C. Faul, Georgios Pilikos

IMA Journal of Numerical Analysis

A Krylov subspace algorithm for multiquadric interpolation in many dimensions

Published: Jan 01, 2005 by IMA Journal of Numerical Analysis
Authors: A.C. Faul, G. Goodsell, M.J.D. Powell
Subjects: Mathematics

Convergence is guaranteed by the inclusion of a Krylov subspace technique that employs the native semi-norm of multiquadric functions. An algorithm is specified, its convergence is proven, and careful attention is given to the choice of the operator that defines the Krylov subspace, which is analogous to pre-conditioning in the conjugate gradient method.
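
For readers who want to experiment, here is a minimal Python sketch of the interpolation problem the paper addresses. The point set, shape parameter `c` and direct solve are illustrative assumptions; the paper's contribution is precisely to replace the dense direct solve with a Krylov subspace iteration driven by the native semi-norm.

```python
# A minimal sketch of multiquadric interpolation with illustrative data.
# This is NOT the Faul-Goodsell-Powell algorithm; it only sets up the
# problem that their Krylov method solves iteratively.
import numpy as np

rng = np.random.default_rng(0)
centres = rng.uniform(size=(200, 3))      # interpolation points in 3-D
values = np.sin(centres.sum(axis=1))      # function values to interpolate
c = 0.5                                   # multiquadric shape parameter

# Interpolation matrix A_ij = sqrt(||x_i - x_j||^2 + c^2). It is dense,
# which is why iterative Krylov methods matter for large point sets.
diff = centres[:, None, :] - centres[None, :, :]
A = np.sqrt((diff**2).sum(axis=-1) + c**2)

coeffs = np.linalg.solve(A, values)       # O(n^3) direct solve, for contrast
# The interpolant is s(x) = sum_j coeffs[j] * sqrt(||x - x_j||^2 + c^2).
```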

Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics

Fast Marginal Likelihood Maximisation for Sparse Bayesian Models

Published: Sep 01, 2002 by Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics
Authors: M.E. Tipping, A.C. Faul
Subjects: Mathematics

The `sparse Bayesian' modelling approach, as exemplified by the `relevance vector machine', enables sparse classification and regression functions to be obtained by linearly-weighting a small number of fixed basis functions from a large dictionary of potential candidates. We describe a new and highly accelerated algorithm which exploits recently-elucidated properties of the marginal likelihood function to enable maximisation via sequential addition and deletion of candidate basis functions.
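
The add/delete/re-estimate logic can be sketched in a few lines of Python. The dictionary, the known noise variance and the one-candidate-per-sweep schedule below are assumptions for illustration; the paper supplies the efficient incremental updates that make the algorithm fast, which this naive version recomputes from scratch.

```python
# A toy, non-optimised sketch of sequential sparse Bayesian learning.
import numpy as np

rng = np.random.default_rng(1)
N, M = 100, 50
Phi = rng.normal(size=(N, M))                 # dictionary of candidate basis functions
w_true = np.zeros(M)
w_true[[3, 17, 42]] = [2.0, -1.5, 1.0]        # a genuinely sparse generator
t = Phi @ w_true + 0.1 * rng.normal(size=N)   # targets
sigma2 = 0.01                                 # noise variance, assumed known

alpha = np.full(M, np.inf)                    # all basis functions start excluded
for _ in range(200):
    active = np.isfinite(alpha)
    # Marginal likelihood covariance C = sigma^2 I + Phi_a A^-1 Phi_a^T.
    C = sigma2 * np.eye(N)
    if active.any():
        Pa = Phi[:, active]
        C += Pa @ np.diag(1.0 / alpha[active]) @ Pa.T
    Cinv = np.linalg.inv(C)
    S = np.einsum('nm,nk,km->m', Phi, Cinv, Phi)   # S_m = phi_m^T C^-1 phi_m
    Q = Phi.T @ Cinv @ t                           # Q_m = phi_m^T C^-1 t
    # Convert to the "leave-one-out" sparsity and quality factors s, q.
    s, q = S.copy(), Q.copy()
    s[active] = alpha[active] * S[active] / (alpha[active] - S[active])
    q[active] = alpha[active] * Q[active] / (alpha[active] - S[active])
    theta = q**2 - s
    m = rng.integers(M)                            # visit one candidate per sweep
    if theta[m] > 0:
        alpha[m] = s[m]**2 / theta[m]              # add or re-estimate
    else:
        alpha[m] = np.inf                          # delete (prune)
print('selected basis functions:', np.where(np.isfinite(alpha))[0])
```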

Advances in Neural Information Processing Systems

Analysis of Sparse Bayesian Learning

Published: Dec 01, 2001 by Advances in Neural Information Processing Systems
Authors: A.C. Faul, M.E. Tipping

Using a particular form of Gaussian parameter prior, `learning' is the maximisation, with respect to hyperparameters, of the marginal likelihood of the data. This paper studies the properties of that objective function, and demonstrates that conditioned on an individual hyperparameter, the marginal likelihood has a unique maximum which is computable in closed form.
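
A quick numerical check of this closed-form result, using arbitrary illustrative values of the sparsity and quality factors s_i and q_i (in the paper's notation):

```python
# Verify that the alpha_i-dependent part of the log marginal likelihood is
# maximised at alpha_i = s_i^2 / (q_i^2 - s_i) when q_i^2 > s_i.
import numpy as np

s_i, q_i = 2.0, 3.0                            # arbitrary values with q_i**2 > s_i

def ell(alpha):
    # Terms of the log marginal likelihood that depend on alpha_i.
    return 0.5 * (np.log(alpha) - np.log(alpha + s_i) + q_i**2 / (alpha + s_i))

alpha_star = s_i**2 / (q_i**2 - s_i)           # claimed unique maximum
grid = np.linspace(0.01, 100.0, 100_000)
print(np.isclose(grid[np.argmax(ell(grid))], alpha_star, atol=0.01))  # True
```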

Artificial Neural Networks - ICANN 2001

A Variational Approach to Robust Regression

Published: Jan 01, 2001 by Artificial Neural Networks - ICANN 2001
Authors: A.C. Faul, M.E. Tipping
Subjects: Mathematics

We consider the problem of regression estimation within a Bayesian framework for models linear in the parameters and where the target variables are contaminated by 'outliers'. We introduce an explicit distribution to explain outlying observations, and utilise a variational approximation to realise a practical inference strategy.
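
The idea of explaining outliers with their own distribution can be conveyed with a much simpler stand-in: a two-component Gaussian noise mixture fitted by EM. All settings below are assumptions, and the paper's actual method is a variational Bayesian treatment, not this EM scheme; the sketch only shows how an explicit outlier component downweights contaminated observations.

```python
# Robust linear regression via an explicit outlier noise component (EM).
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 80)
y = 2.0 * x + 0.5 + 0.05 * rng.normal(size=80)
y[::10] += rng.choice([-3, 3], size=8)        # contaminate with outliers
X = np.column_stack([x, np.ones_like(x)])     # model linear in the parameters

sig_in, sig_out, pi_out = 0.05, 3.0, 0.1      # fixed noise scales, outlier rate
w = np.linalg.lstsq(X, y, rcond=None)[0]      # ordinary least squares start
for _ in range(50):
    r = y - X @ w
    # E-step: responsibility of the inlier component for each point.
    p_in = (1 - pi_out) * np.exp(-0.5 * (r / sig_in)**2) / sig_in
    p_out = pi_out * np.exp(-0.5 * (r / sig_out)**2) / sig_out
    gamma = p_in / (p_in + p_out)
    # M-step: weighted least squares; outliers get gamma close to 0.
    W = np.sqrt(gamma)
    w = np.linalg.lstsq(X * W[:, None], y * W, rcond=None)[0]
print('fitted slope, intercept:', w)
```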

Advances in Computational Mathematics

Proof of convergence of an iterative technique for thin plate spline interpolation in two dimensions

Published: Oct 01, 1999 by Advances in Computational Mathematics
Authors: A.C. Faul, M.J.D. Powell
Subjects: Mathematics

Thin plate spline methods provide an interpolant to values of a real function. The need for iterative procedures arises since hardly any sparsity occurs in the linear system of interpolation equations. A proof of convergence of the method is given. All the changes to the thin plate spline coefficients reduce a semi-norm of the difference between the required interpolant and the current approximation.
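
As a point of reference, the dense linear system in question can be set up and solved directly for a small 2-D example. The data are illustrative assumptions; the paper's iterative method avoids this O(n^3) solve and reduces the semi-norm of the error at every step.

```python
# Direct solve of the 2-D thin plate spline interpolation system, with the
# usual linear polynomial term. Illustrative data; the paper's contribution
# is an iterative alternative to this dense solve.
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(size=(150, 2))
f = np.cos(2 * np.pi * pts[:, 0]) * pts[:, 1]

# Kernel phi(r) = r^2 log r, written as (1/2) r^2 log(r^2) to avoid sqrt.
r2 = ((pts[:, None, :] - pts[None, :, :])**2).sum(-1)
with np.errstate(divide='ignore', invalid='ignore'):
    K = np.where(r2 > 0, 0.5 * r2 * np.log(r2), 0.0)
P = np.column_stack([np.ones(len(pts)), pts])   # linear polynomial tail

# Saddle-point system [K P; P^T 0] [lam; c] = [f; 0]; K is fully dense,
# hence the interest in iterative schemes for large point sets.
A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
sol = np.linalg.solve(A, np.concatenate([f, np.zeros(3)]))
lam, c = sol[:150], sol[150:]
```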

18th Biennial Conference on Numerical Analysis

Krylov subspace methods for radial function interpolation

Published: Aug 01, 1999 by 18th Biennial Conference on Numerical Analysis
Authors: A.C. Faul, M.J.D. Powell
Subjects: Mathematics

The kth iteration calculates the element in a k-dimensional linear subspace of radial functions that is closest to the required interpolant, the subspaces being generated by a Krylov construction that employs a self-adjoint operator A. Distances between functions are measured by the semi-norm that is induced by the well-known conditional positive or negative definite properties of the matrix of the interpolation problem.
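
A small numerical illustration of that semi-norm, with points and shape parameter as illustrative assumptions: for the multiquadric, the interpolation matrix is conditionally negative definite, so -lam^T A lam is positive whenever the coefficients lam sum to zero.

```python
# The multiquadric matrix A is conditionally negative definite, so the
# semi-norm squared -lam^T A lam is positive on coefficient vectors that
# sum to zero. Illustrative random points and shape parameter.
import numpy as np

rng = np.random.default_rng(4)
pts = rng.uniform(size=(50, 2))
c = 0.3
A = np.sqrt(((pts[:, None, :] - pts[None, :, :])**2).sum(-1) + c**2)

for _ in range(5):
    lam = rng.normal(size=50)
    lam -= lam.mean()                 # enforce sum(lam) = 0
    print(-lam @ A @ lam > 0)         # prints True each time
```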

Videos

Unsupervised Iceberg Detection in Copernicus Sentinel 1 Satellite Images

Published: Feb 16, 2021

Contribution to Phi-Week 2020

How Can AI Tackle Climate Change?

Published: Jun 03, 2020

Re-work expert seminar

Towards Explainable AI

Published: May 22, 2019

Talk given at the London Machine Learning MeetUp