AMMP Day 2015

Wednesday, 16th December 2015, 12-6 pm, Huxley 340.

As the autumn term winds down, why not come and join us for a get-together just before the seasonal break? We have organised seven talks from Imperial College PhD students within the AMMP section of the Department of Mathematics on a variety of topics. We are also happy to announce that this year’s plenary speaker will be Prof. Pierre Degond. There will be a prize for the best talk by a PhD student, as well as some Christmas treats for all. Lunch and refreshments will be provided for those who attend.

The schedule can be found here: AMMPDay_Schedule

We hope to see many of you there.

Abstracts

Alexis Arnaudon: Shaking geometric mechanics

Geometric mechanics was originally developed to understand the geometry behind physical problems as simple as a rigid body or as hard as magnetohydrodynamics. As we all know, geometry in its pure mathematical sense is not present in nature, at least at a human scale. In order to model poorly understood physical processes at smaller scales, one usually introduces randomness into the model. We will show how to include a particular type of noise into systems that are well described by geometric mechanics, in such a way that the fundamental structures of the theory, namely its symmetries, are preserved. Simple examples such as the free rigid body will be explored as an illustration.
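
As a hedged sketch of what "symmetry-preserving noise" can mean here (the precise construction presented in the talk is the speaker's own, so treat the following only as an illustration), recall the free rigid body in body angular momentum variables,

    \dot{\Pi} = \Pi \times \Omega, \qquad \Omega = \mathbb{I}^{-1}\Pi .

A Stratonovich perturbation of the form

    \mathrm{d}\Pi = \Pi \times \Omega \,\mathrm{d}t + \sum_i \Pi \times \sigma_i \circ \mathrm{d}W^i_t

leaves the Casimir |\Pi|^2 unchanged, since the drift and every noise term are orthogonal to \Pi; the motion therefore remains on the momentum sphere, which is one concrete way in which the underlying symmetry can survive the addition of noise.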

Eszter Lakatos: Investigating a tumour protein (using Approximate Bayesian Computation)

P53 might be one of the most studied gene-protein pairs in cancer research, known for its important role in cell death upon DNA damage. There are numerous studies investigating the p53 response in cells put under stress, but what contributes to its baseline activity is less well understood.

Our hypothesis is that the regulation is dominated by protein degradation, in contrast to transcription, which usually controls baseline levels. My collaborators have measured p53 abundance in two cancer cell lines, and I use an Approximate Bayesian Computation (ABC) framework to compare models based on our competing hypotheses and to estimate the rate parameters behind the system. I will give an overview of the experimental data, ABC and the methods behind the study, and summarise the struggles, joys and results of pairing up single-cell measurements and parameter inference.
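
As a rough illustration of the ABC idea (this is not the model or data of the study; the one-parameter degradation model, the summary statistic and the tolerance below are all hypothetical), rejection ABC simply keeps the parameter draws whose simulated output lands close to the observation:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical "observed" summary statistic: mean steady-state p53 abundance.
    observed = 100.0

    def simulate(k_prod, k_deg):
        """Toy birth-death model of protein abundance: production at rate k_prod,
        degradation at rate k_deg, observed with multiplicative measurement noise."""
        mean = k_prod / k_deg
        return mean * rng.lognormal(sigma=0.1)

    def abc_rejection(n_draws=100_000, tolerance=5.0, k_prod=50.0):
        """Rejection ABC for the degradation rate k_deg under a uniform prior."""
        accepted = []
        for _ in range(n_draws):
            k_deg = rng.uniform(0.1, 2.0)            # draw from the prior
            summary = simulate(k_prod, k_deg)        # simulate data under the draw
            if abs(summary - observed) < tolerance:  # keep draws close to the data
                accepted.append(k_deg)
        return np.array(accepted)

    posterior = abc_rejection()
    print(f"accepted {posterior.size} draws, posterior mean k_deg = {posterior.mean():.3f}")

Model comparison works along the same lines, with a model indicator drawn from a prior alongside the parameters and the acceptance rates compared across models.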

Alex Rush: Semiclassical quantisation for bosonic atom-molecule conversion systems

In this talk I will consider a simple two-state quantum model of atom-molecule conversion in cold-atom systems, where bosonic atoms can combine into diatomic molecules and vice versa. The many-particle system can be expressed in terms of the generators of a deformed SU(2) algebra, and the mean-field dynamics take place on a deformed version of the Bloch sphere resembling a teardrop, with a cusp singularity. I will demonstrate the mean-field and many-particle correspondence, showing how semiclassical methods can be used to recover features such as the many-particle spectrum from the mean-field description.
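
For orientation, one standard two-mode Hamiltonian for bosonic atom-molecule conversion (the exact model treated in the talk may differ) couples an atomic mode a and a molecular mode b through a conversion term,

    H = \epsilon_a\, a^\dagger a + \epsilon_b\, b^\dagger b + \frac{\lambda}{2}\left( b^\dagger a a + a^\dagger a^\dagger b \right),

which conserves the total atom number N = a^\dagger a + 2\, b^\dagger b. The ladder operators a^\dagger a^\dagger b and b^\dagger a a, together with the atom-molecule population imbalance, close under commutation into a polynomially deformed analogue of su(2), which is the kind of deformed algebra referred to above.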

Deborah Schneider-Luftman: Network modeling of neurological data

The analysis of neurological data is probably one of the most fascinating yet complex areas of scientific research. While neurology as a field and modern imaging techniques, such as fMRI and MEG, have been around for 30-40 years, there is still much to understand about how the brain works. Arguably, one of the best ways to approach this problem is through network modelling: whenever the brain receives and processes stimuli, information is passed between neurons and through various regions of the brain, and we have a network. Yet a quantitative understanding of these networks is in its infancy. How do networks vary depending on the tasks performed or on health conditions? What are the different imaging techniques available to measure this, and can we analyse the data they output?

The goal of this talk is to provide some insight into the methodologies used to address these questions from the perspective of nonparametric statistics, and to show some illustrative examples of how tangible conclusions can be achieved.
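
As a minimal, hedged illustration of the kind of network construction these questions involve (this is not the methodology of the talk; the random channel data, frequency band and threshold below are entirely made up), one can estimate pairwise spectral coherence between recorded channels and threshold it into an adjacency matrix:

    import numpy as np
    from scipy.signal import coherence

    rng = np.random.default_rng(1)
    fs = 200.0                                            # sampling frequency in Hz (illustrative)
    n_channels, n_samples = 8, 4000
    data = rng.standard_normal((n_channels, n_samples))   # stand-in for MEG/fMRI channel recordings

    def coherence_network(data, fs, band=(8.0, 12.0), threshold=0.3):
        """Build a binary connectivity network from band-averaged coherence."""
        n = data.shape[0]
        adjacency = np.zeros((n, n), dtype=int)
        for i in range(n):
            for j in range(i + 1, n):
                freqs, cxy = coherence(data[i], data[j], fs=fs, nperseg=256)
                in_band = (freqs >= band[0]) & (freqs <= band[1])
                if cxy[in_band].mean() > threshold:
                    adjacency[i, j] = adjacency[j, i] = 1
        return adjacency

    print(coherence_network(data, fs))

The arbitrary threshold here is exactly the kind of choice one would want to replace with a proper statistical test, which is where nonparametric statistics enters.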

Abhishek Deshpande: Cost of Computing

What are the physical limits on computation? Landauer [1] showed that erasing a bit requires at least kT log 2 units of energy, and that if the process is not reversible it requires strictly more than kT log 2 units of energy. In other words, there are energy-time trade-offs in performing computation. Recently, Gopalkrishnan [2] argued for taking the reliability of operation into account, obtaining a fundamental three-way trade-off between reliability, speed and cost in a two-state Markov chain. Taking this further, we explore this trade-off for a bit represented by a particle in a double well obeying Langevin dynamics.

[1] Landauer, R. Irreversibility and heat generation in the computing process. IBM Journal of Research and Development 5(3), 183–191 (1961).

[2] Gopalkrishnan, M. A cost/speed/reliability trade-off in erasing a bit. arXiv preprint arXiv:1410.1710 (2014).
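
To make the double-well picture concrete, here is a hedged sketch of an overdamped Langevin simulation of a bit stored as a particle in V(x) = a (x^2 - 1)^2, with the wells at x = ±1 encoding the two logical states (the parameters are illustrative and not taken from the work discussed):

    import numpy as np

    rng = np.random.default_rng(2)

    # Overdamped Langevin dynamics: gamma dx = -V'(x) dt + sqrt(2 gamma kT) dW.
    gamma, kT, a = 1.0, 0.2, 1.0            # friction, temperature, barrier scale (illustrative)
    dt, n_steps = 1e-3, 200_000

    def dV(x):
        """Gradient of the double-well potential V(x) = a * (x**2 - 1)**2."""
        return 4.0 * a * x * (x**2 - 1.0)

    x = -1.0                                # start the bit in the left well ("0")
    noise_scale = np.sqrt(2.0 * kT * dt / gamma)
    trajectory = np.empty(n_steps)
    for step in range(n_steps):             # Euler-Maruyama integration
        x += -dV(x) * dt / gamma + noise_scale * rng.standard_normal()
        trajectory[step] = x

    # Fraction of time spent in the "wrong" well: a crude proxy for unreliability.
    print("fraction of time with x > 0:", (trajectory > 0).mean())

Lowering the barrier or raising the temperature makes the bit faster to manipulate but less reliable, which is the flavour of trade-off at stake here.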

Francesco Ferrulli: Graphene: from nanostructure to spectral analysis. An introduction and open problems

Recently, the increased quality of graphene production, as well as the thriving applications of this new material, from nano-structures to macroscopic compounds, has attracted physicists, mathematicians and engineers to the study of its astonishing electronic properties. In the first part of the talk I will briefly recover the Dirac-like structure of the Hamiltonian operator of graphene, for energies close to the Fermi level, via the tight-binding approach. In the second part I will present the spectral properties of the operators derived in the first part, giving an insight into the main known bounds for the complex eigenvalues in the one-dimensional single-layer case, and introducing some new results for the two-dimensional bilayer case.

[1] Abramov, A. A., Aslanyan, A. and Davies, E. B. Bounds on complex eigenvalues and resonances. J. Phys. A, 57–72 (1999).

[2] Cuenin, J.-C., Laptev, A. and Tretter, C. Eigenvalue estimates for non-selfadjoint Dirac operators on the real line. Annales Henri Poincaré 15, 707–736 (2014).

[3] Novoselov, K. S. and Geim, A. K. The rise of graphene. Nature Materials 6, 183–191 (2007).

[4] Raza, H. Graphene Nanoelectronics: Metrology, Synthesis, Properties and Applications. NanoScience and Technology, Springer (2012).
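
For context, the tight-binding calculation referred to in the first part (see [3], [4]) yields, near a Dirac point and to leading order in the quasi-momentum k measured from that point, the effective single-layer Hamiltonian

    H(\mathbf{k}) \approx \hbar v_F\, \boldsymbol{\sigma} \cdot \mathbf{k} = \hbar v_F \begin{pmatrix} 0 & k_x - i k_y \\ k_x + i k_y & 0 \end{pmatrix}, \qquad E(\mathbf{k}) = \pm \hbar v_F |\mathbf{k}|,

so the dispersion is conical and the relevant operators are of Dirac type; the spectral questions in the second part concern bounds on the complex eigenvalues of such operators perturbed by complex potentials, in the spirit of [1] and [2].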

Alex Bolton: Malware static trace analysis through bigrams and graph edit distance calculation

Malicious software, or malware, is a growing problem. Gaining information about malware samples is crucial in order to ensure cyber security at institutions such as Los Alamos National Laboratory (LANL). Most new malware is generated by modifying existing malware, and malware samples that are versions of an original malicious program form a malware family. Reverse engineers, who seek to detect the origin and purpose of malware, can work more efficiently if they know the family that generated a piece of malware.

We propose a two-part method for comparing a new malware sample to a database of malware samples, where each database sample has a malware family label. The first part is a fast filtering method that disregards database samples dissimilar to the new sample. Once the filter has been applied, the static traces of the new sample and of each shortlisted database sample are converted to graphs, with nodes representing subroutines and edges representing calls from one subroutine to another. The shortlisted static trace graphs are then compared to the new sample's graph using an approximation of the graph edit distance. The combination of the two methods produces a classifier that can accurately classify malware samples into families.
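
As a hedged sketch of the graph-comparison step (the bigram filter and the particular graph edit distance approximation used in the work are not reproduced here, and the subroutine names are hypothetical), the static trace graphs and an off-the-shelf GED approximation might look like this:

    import networkx as nx

    def call_graph(edges):
        """Build a directed call graph from (caller, callee) subroutine pairs."""
        g = nx.DiGraph()
        g.add_edges_from(edges)
        return g

    # Hypothetical static traces: subroutine call pairs extracted from two samples.
    new_sample = call_graph([("entry", "decrypt"), ("decrypt", "connect"), ("connect", "send")])
    db_sample = call_graph([("entry", "decrypt"), ("decrypt", "connect"), ("connect", "recv")])

    # networkx's anytime approximation of graph edit distance: each yielded value
    # is an upper bound that improves the longer the search is allowed to run.
    approx = nx.optimize_graph_edit_distance(new_sample, db_sample)
    print("first GED upper bound:", next(approx))

A simple way to turn such distances into a classifier is to let the closest database samples vote for their family label, though the classifier described in the talk may be constructed differently.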