Competing risks: estimation based on the subdistribution hazard
Friday, 21.10.16, 09:30-10:30, SR 126 (room change!)
Contrary to the cause-specific hazard, the subdistribution hazard uniquely determines the cumulative incidence for that cause. Its estimate forms the basis for a nonparametric product-limit type estimate of the cause-specific cumulative incidence. We derive a version that uses inverse probability weights to correct for right-censored and left-truncated data and that is algebraically equivalent to the classical Aalen-Johansen estimator. Fine and Gray formulated a regression model that assumes proportionality of effects on the subdistribution hazard. When estimating the subdistribution hazard, individuals that experience a competing event remain in the risk set. It has therefore been debated whether a time-varying covariable can be included, especially when it is internal: we do not know its value after an individual has died. In the classical survival setting with a single event type, the changing value of a covariable can be represented by creating pseudo-individuals: each row represents a period during which the value remains constant. The start of such an interval can be seen as a form of late entry; it has been called internal left truncation. We can take two different approaches when estimating the subdistribution hazard with time-varying covariables. If we interpret these rows as coming from different pseudo-individuals, we use weights to correct for the late entry. In the other approach, we consider the rows as continuing follow-up from the same individuals, and therefore no such weights are used. Using a simple example of a dichotomous time-varying covariable, we contrast the interpretation of the estimates obtained via both approaches.
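The start-stop data layout described above can be sketched as follows; the function name and the tuple layout are illustrative only, not taken from any particular survival package.

```python
# Minimal sketch: splitting one individual's follow-up into start-stop rows
# for a dichotomous time-varying covariable. The second row starts at the
# switch time, which acts as a late entry ("internal left truncation").

def expand_rows(individual_id, event_time, status, switch_time=None):
    """Return (id, start, stop, covariable, status) rows for one individual."""
    if switch_time is None or switch_time >= event_time:
        # Covariable never switches during follow-up: a single row suffices.
        return [(individual_id, 0.0, event_time, 0, status)]
    return [
        (individual_id, 0.0, switch_time, 0, 0),              # value still 0
        (individual_id, switch_time, event_time, 1, status),  # late entry, value 1
    ]

rows = expand_rows("A", event_time=5.0, status=1, switch_time=2.0)
```

Whether inverse probability weights for late entry are then applied to the second row is exactly what distinguishes the two approaches contrasted in the talk.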
Reconstructing branching lineages in single cell genomics
Friday, 11.11.16, 12:00-13:00, Raum 404, Eckerstr. 1
Single-cell technologies have recently gained popularity in developmental biology because they allow resolving potential heterogeneities due to the asynchronicity of differentiating cells. Popular multivariate approaches for analyzing such data are based on data normalization, followed by dimension reduction and clustering to identify subgroups. However, in the case of cellular differentiation we cannot expect clear clusters to be present; instead, cells tend to follow continuous branching lineages.

We show that modeling the high-dimensional state space as a diffusion process, in which cells move to nearby cells with a distance-dependent probability, reflects the differentiating characteristics well. Based on the underlying diffusion map transition kernel, we then propose to order cells according to a diffusion pseudotime, which measures transitions between cells using random walks of arbitrary length. This allows for a robust identification of branching decisions and corresponding trajectories of single cells. We demonstrate the method on single-cell qPCR data of differentiating mouse haematopoietic stem cells as well as on RNA sequencing profiles of embryonic stem cells.

As an outlook, if time permits, I will outline how to use this pseudotime in combination with dynamic models to construct a mechanistic understanding of the regulatory process, based on recent work on ODE-constrained mixture modeling.
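A minimal numerical sketch of such a pseudotime, assuming a Gaussian kernel on pairwise distances and the closed-form sum over random walks of all lengths; the toy data, kernel bandwidth and choice of root cell are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "cells": 60 points along a one-dimensional trajectory, embedded in 2-D.
t = np.sort(rng.uniform(0, 1, 60))
X = np.column_stack([t, t ** 2]) + 0.01 * rng.normal(size=(60, 2))

# Gaussian kernel on pairwise distances, row-normalised into a transition matrix.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
K = np.exp(-D ** 2 / (2 * 0.1 ** 2))
P = K / K.sum(axis=1, keepdims=True)

# Deflate the stationary component, then sum random walks of all lengths:
# M = sum_{k >= 1} (P - 1 pi^T)^k = (I - (P - 1 pi^T))^{-1} - I.
pi = K.sum(axis=1) / K.sum()
n = len(P)
M = np.linalg.inv(np.eye(n) - (P - np.outer(np.ones(n), pi))) - np.eye(n)

root = 0                                   # pick an early cell as the root
dpt = np.linalg.norm(M - M[root], axis=1)  # pseudotime of every cell
```

On this toy trajectory the resulting ordering tracks the latent parameter `t`; branching would show up as groups of cells whose accumulated-transition profiles diverge.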
Shape Analysis: Infinite-Dimensional Geometry, Statistics on Manifolds, and Applications
Friday, 2.12.16, 12:00-13:00, Raum 404, Eckerstr. 1
Shape analysis aims at describing the variability of certain classes of geometric shapes in a statistical manner. This is of interest in many diverse applications such as computational anatomy, computer vision, geology, optics, etc. I will give an overview of the theory, which involves infinite-dimensional differential geometry and statistics on manifolds, and present some recent results in Riemannian shape analysis together with some biomedical applications.
Strong Gaussian approximation of the Rasch mixture model with applications
Friday, 9.12.16, 12:00-13:00, Raum 404, Eckerstr. 1
The Rasch model is a famous model from psychometrics that is used to evaluate surveys in which n individuals must answer m questions. The outcome can be expressed as a binary matrix whose (j,k)-th entry equals 1 if and only if the answer of the j-th individual to the k-th question is correct. In the Rasch mixture model we assume that the individuals were drawn purely at random from a large population. We show that the Rasch mixture model, viewed as a statistical experiment, is asymptotically equivalent in the sense of Le Cam to a Gaussian observation model as n tends to infinity, where m may grow at a certain order in n. As a first application we construct a uniform asymptotic confidence ellipsoid for the difficulty parameters of the questions. This talk is based on joint work with Johanna Kappus and Friedrich Liese (both University of Rostock).
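A small simulation of the binary response matrix described above; the sample sizes and the standard-normal choices for abilities and difficulties are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 500, 20                        # individuals, questions
theta = rng.normal(size=n)            # random abilities (the mixture assumption)
beta = rng.normal(size=m)             # fixed question difficulty parameters

# Rasch model: P(X[j, k] = 1) = logistic(theta[j] - beta[k]).
prob = 1.0 / (1.0 + np.exp(-(theta[:, None] - beta[None, :])))
X = (rng.uniform(size=(n, m)) < prob).astype(int)   # binary response matrix
```

Columns belonging to harder questions (larger `beta`) show smaller empirical success rates, which is the information that estimators and confidence sets for the difficulty parameters exploit.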
Statistical learning and patient trajectories in healthcare analytics
Friday, 16.12.16, 12:00-13:00, Raum 404, Eckerstr. 1
Healthcare analytics helps improve the quality of treatment for patients suffering from various illnesses. In this regard, one commonly collects patient-related information, often about demography and prior illnesses, in order to predict the outcome of treatments. We demonstrate this by showing how patient characteristics can forecast the severity of low back pain. In a next step, we follow an innovative approach and exploit the prognostic potential of patient trajectories. These stem from weekly surveys collected throughout a year. By employing a Markov model, we can then gain a detailed understanding of how pain intensity evolves over time. This immediately leads to our vision of helping patients choose tailored treatments and the optimal timing thereof.
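The Markov-model step can be sketched as a maximum-likelihood transition matrix estimated from observed weekly state sequences; the three-state coding and the toy sequences are illustrative:

```python
import numpy as np

# Toy weekly pain states (0 = low, 1 = medium, 2 = high); in the talk's setting
# these would come from the weekly surveys collected throughout a year.
sequences = [
    [0, 0, 1, 1, 2, 1, 0],
    [2, 2, 1, 0, 0, 0, 0],
    [1, 1, 1, 2, 2, 1, 1],
]

n_states = 3
counts = np.zeros((n_states, n_states))
for seq in sequences:
    for a, b in zip(seq[:-1], seq[1:]):   # consecutive weekly transitions
        counts[a, b] += 1

# Row-normalising the counts gives the maximum-likelihood transition matrix.
P = counts / counts.sum(axis=1, keepdims=True)
```

Each row of `P` describes how likely a patient in that pain state is to stay or move next week, which is what enables statements about the evolution of pain intensity over time.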
Noise in autoregulated gene expression
Friday, 23.12.16, 12:00-13:00, Raum 404, Eckerstr. 1
Gene expression is the foundation of molecular biology. Genes can be active or inactive; active genes are transcribed into RNA; RNA is translated into functional protein. Since the chemical reaction network for these processes is linear, it can be solved explicitly. In contrast, we deal with genes regulating their own expression: feedback arises when protein binds to the gene and activates (deactivates) it, leading to a positive (negative) feedback. Under the assumption of fast activation and deactivation of genes, we are interested in gene expression noise under feedback. Using an approach of Kang, Kurtz and Popovic, we can quantify the reduction of noise under negative feedback and the increase of noise under positive feedback.
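A toy Gillespie simulation contrasting the two regimes can illustrate the noise comparison; the reduced model (no explicit RNA step) and all rate constants are simplifying assumptions, not the talk's actual analysis:

```python
import random

def simulate(feedback, t_end=5000.0, seed=7):
    """Gene switches on/off, an active gene produces protein, protein decays.
    With negative feedback the switch-off rate grows with protein copy number.
    Returns the time-averaged mean and Fano factor (variance / mean)."""
    rng = random.Random(seed)
    gene_on, protein, t = 1, 0, 0.0
    k_prod, k_deg, k_on, k_off0 = 10.0, 0.1, 1.0, 1.0
    w = m1 = m2 = 0.0
    while t < t_end:
        k_off = k_off0 * (0.02 * protein if feedback else 1.0)
        rates = [k_prod * gene_on, k_deg * protein,
                 k_on * (1 - gene_on), k_off * gene_on]
        total = sum(rates)
        dt = rng.expovariate(total)
        if t > 100.0:                      # discard burn-in, weight by dt
            w += dt
            m1 += dt * protein
            m2 += dt * protein ** 2
        t += dt
        r = rng.uniform(0.0, total)        # pick the next reaction
        if r < rates[0]:
            protein += 1
        elif r < rates[0] + rates[1]:
            protein -= 1
        elif r < rates[0] + rates[1] + rates[2]:
            gene_on = 1
        else:
            gene_on = 0
    mean = m1 / w
    return mean, (m2 / w - mean ** 2) / mean

mean_fb, fano_fb = simulate(feedback=True)
mean_nofb, fano_nofb = simulate(feedback=False)
```

In this sketch the protein-dependent switch-off rate plays the role of negative feedback, and the Fano factor it produces is smaller than in the unregulated case.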
Quantum statistical models and inference
Friday, 13.1.17, 12:00-13:00, Raum 404, Eckerstr. 1
Quantum statistics is concerned with inference for systems described by quantum mechanics. After an introduction to the main mathematical notions of quantum statistics (quantum states, measurements, channels), we describe nonparametric quantum models. We prove the local asymptotic equivalence (LAE), in the sense of Le Cam theory, of i.i.d. quantum pure states and a quantum Gaussian state. We show nonparametric rates for the estimation of quantum states and of some quadratic functionals, and for the testing of pure states. The LAE allows transferring proofs to a different model. Surprisingly, a sharp testing rate of order n^{-1/2} is obtained in a nonparametric quantum setup. This is joint work with M. Guta and M. Nussbaum.
Asymptotic equivalence between density estimation and Gaussian white noise revisited
Friday, 20.1.17, 12:00-13:00, Raum 404, Eckerstr. 1
Asymptotic equivalence between two statistical models means that they have the same asymptotic properties with respect to all decision problems with bounded loss. A key result by Nussbaum states that nonparametric density estimation is asymptotically equivalent to a suitable Gaussian shift model, provided that the densities are smooth enough and uniformly bounded away from zero.

We study the case when the latter assumption does not hold and the density is possibly small. We further derive the optimal Le Cam distance between these models, which quantifies how close they are. As an application, we also consider Poisson intensity estimation with low count data. This is joint work with Johannes Schmidt-Hieber.
Nonlinear Optimization Methods for Model Predictive Control of Mechatronic Systems
Thursday, 2.2.17, 10:00-11:00, Raum 125, Eckerstr. 1
Model Predictive Control (MPC) for mechatronic systems is based on the online solution of medium-scale constrained nonlinear optimal control problems, with sampling times in the milli- and microsecond range. This poses specific challenges for the problem formulation and the numerical solution methods. This talk presents and discusses algorithms and open-source software implementations that are designed to address these challenges, and reports on experimental tests with mechatronic, aerospace and automotive applications. The focus is on recent progress on numerical integration and derivative generation, as well as embedded quadratic programming methods.
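The receding-horizon idea can be sketched on a toy double integrator; unlike the constrained nonlinear real-time solvers the talk covers, this sketch is linear and unconstrained, solved by least squares, and all weights and horizon lengths are illustrative:

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])    # discrete double integrator, dt = 0.1
B = np.array([[0.005], [0.1]])
N = 20                                     # prediction horizon

# Stacked prediction: x_{k+1} = A^{k+1} x0 + sum_{j<=k} A^{k-j} B u_j.
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Gamma = np.zeros((2 * N, N))
for k in range(N):
    for j in range(k + 1):
        Gamma[2 * k:2 * k + 2, j] = (np.linalg.matrix_power(A, k - j) @ B).ravel()

Q = np.diag([10.0, 1.0])                   # state weights (position, velocity)
R = 0.01                                   # input weight
Qbar = np.kron(np.eye(N), np.sqrt(Q))
# Minimise ||Qbar (Phi x + Gamma u)||^2 + R ||u||^2 as one least-squares system.
Asys = np.vstack([Qbar @ Gamma, np.sqrt(R) * np.eye(N)])

x = np.array([1.0, 0.0])                   # start 1 m away from the set-point
for _ in range(100):
    bsys = np.concatenate([-Qbar @ Phi @ x, np.zeros(N)])
    u = np.linalg.lstsq(Asys, bsys, rcond=None)[0]
    x = A @ x + B.ravel() * u[0]           # apply only the first input, re-plan
```

Only the first input of each optimal plan is applied before re-solving with the new state, which is the defining feature of MPC; embedded QP methods make exactly this per-sample solve fast enough for millisecond sampling times.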
Value, Size, Momentum and the Average Correlation of Stock Returns
Friday, 10.2.17, 12:00-13:00, Raum 404, Eckerstr. 1
Dynamic average correlations of stock returns are predicted by the volatility of the market excess return and by moving-average returns of value, size and momentum portfolios. While the influence of market volatility on average correlation is well known, the role of value, size and momentum appears to be underappreciated. Correlations of stock returns and the stock returns themselves share sources of risk, such as market volatility, but there are other sources that are distinct. In particular, correlations are increased when value or momentum returns are roughly zero, while strongly negative returns of value or momentum are associated with lower correlations. Using the market volatility and a moving-average return of the value portfolio as predictors of average correlation, we obtain a global minimum variance portfolio with a Sharpe ratio that is 1.5% higher than that of a portfolio based on a Dynamic Equicorrelation GARCH model, and the difference in portfolio volatility is statistically significant.
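The dependent variable can be sketched as a rolling equal-weighted average of pairwise return correlations, here on a simulated one-factor panel; the factor loading, volatilities and window length are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy return panel: 5 stocks over 500 days, one common factor driving comovement.
factor = rng.normal(0, 0.01, size=500)
returns = 0.8 * factor[:, None] + rng.normal(0, 0.01, size=(500, 5))

def average_correlation(window):
    """Equal-weighted average of the off-diagonal entries of the correlation matrix."""
    C = np.corrcoef(window.T)
    n = C.shape[0]
    return (C.sum() - n) / (n * (n - 1))

# Rolling 60-day average correlation, one value per day after the initial window.
avg_corr = np.array([average_correlation(returns[t - 60:t])
                     for t in range(60, 500)])
```

Regressing a series like `avg_corr` on market volatility and moving-average factor returns is the kind of predictive relationship the abstract describes; the simulated common factor guarantees a positive average correlation here.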