Agent-Based Modelling of Single Cell Variability of CRISPR-Cas Interference and Adaptation
Friday, 27.5.22, 12:00-13:00, online: Zoom
Reporting in Radiology - the challenge of structuring reports
Friday, 1.7.22, 12:00-13:00, online: Zoom
Summary:

Traditionally, radiologists produce reports using a microphone and speech recognition while browsing through a very large number of images using a mouse. The result is a plain-text report that is neither structured nor linked to the content of the images. Most radiologists agree that producing structured reports that are semantically linked to the images would have many advantages over traditional reporting. Systems to produce structured reports have been developed over the past 10-15 years, but these systems are all built around graphical, mouse-controlled interfaces. They are not well accepted by radiologists, since they distract visual attention from the images to the reporting interface, and producing reports this way is more time-consuming than using a microphone. So-called report templates have been developed; these describe the elements radiologists should cover in a given clinical setting (e.g., pancreatic carcinoma). It has been shown that structured reporting using report templates results in more complete radiological reports. Producing structured reports would also mean that the data are stored in databases and can be used in many different ways: structured reports are easier to search, data elements can be used to trigger actions (actionable reports, e.g., the presence of pulmonary embolism triggering further work-up), and the data elements can also serve as labels for training AI algorithms. Conversely, report templates can be pre-populated by AI algorithms before the radiologist begins reporting. Also, since a report template constitutes a set of "questions" to be answered by the radiologist, it could support the extraction of data from the radiologist's spoken words. A combination of both could facilitate the production of radiology reports.
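To make the idea of a machine-readable report template more concrete, here is a minimal Python sketch of one possible representation. All class and field names (TemplateItem, triggers_action, tumor_size_mm, ...) and the two pancreatic-carcinoma items are invented for illustration; they are not taken from any existing reporting standard or from the speaker's system.

# Illustrative sketch only: a structured report template represented as plain
# Python data. Field names and the pancreatic-carcinoma items are hypothetical.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class TemplateItem:
    """One 'question' the radiologist answers during reporting."""
    name: str                       # machine-readable label (searchable, usable as an AI training label)
    prompt: str                     # question presented to the radiologist
    answer: Optional[str] = None    # filled by dictation, GUI input, or AI pre-population
    triggers_action: bool = False   # marks findings that should start further work-up


@dataclass
class StructuredReport:
    clinical_setting: str
    items: List[TemplateItem] = field(default_factory=list)

    def actionable_findings(self):
        """Return answered items flagged to trigger downstream actions."""
        return [i for i in self.items if i.triggers_action and i.answer not in (None, "absent")]


# Example: a hypothetical two-item template for a pancreatic carcinoma exam.
report = StructuredReport(
    clinical_setting="pancreatic carcinoma",
    items=[
        TemplateItem("tumor_size_mm", "Largest tumor diameter in mm?"),
        TemplateItem("vascular_involvement", "Vascular involvement present?", triggers_action=True),
    ],
)
report.items[1].answer = "present"
print([i.name for i in report.actionable_findings()])   # -> ['vascular_involvement']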
Roll-Over Risk in Benchmark Rates
Thursday, 7.7.22, 12:00-13:00, lecture hall Weismann-Haus, Albertstr. 21a, and online (Zoom)
Modelling the risk that a financial institution may not be able to roll over its debt at the market reference rate, the so-called "roll-over risk", we construct a model framework for the dynamics of reference term rates (e.g., LIBOR) and their spread vis-à-vis benchmarks based on overnight reference rates, e.g., rates implied by overnight index swaps (OIS). In this framework, different interest rate term structures are endogenously generated for each tenor, that is, a different term structure for each choice of the length of the interest rate accrual period, be it overnight (e.g., OIS), three-month LIBOR, six-month LIBOR, etc. A concrete model instance in this framework can be calibrated simultaneously to available market instruments at a particular point in time, but more importantly, we explicitly obtain dynamics of term rates such as LIBOR. Thus models in our framework are amenable to econometric estimation. For a model class based on affine dynamics, we conduct an empirical analysis on EUR data for OIS, interest-rate swaps, basis swaps and credit default swaps.
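Purely as a schematic illustration of the kind of objects involved, and not the concrete specification presented in the talk, one can think of the tenor-delta term rate as the corresponding OIS-implied rate plus a tenor-dependent roll-over risk spread, with the spread driven by an affine (here square-root) factor; in LaTeX notation:

% Illustrative sketch only, not the model of the talk: term rate for the
% accrual period [t, t+\delta] decomposed into an OIS-implied rate plus a
% tenor-dependent roll-over risk spread.
\[
  L(t, t+\delta) \;=\; L^{\mathrm{OIS}}(t, t+\delta) \;+\; S_{\delta}(t),
\]
% with the spread driven by a square-root (CIR-type) factor, a standard
% example of affine dynamics:
\[
  \mathrm{d}X_t \;=\; \kappa\,(\theta - X_t)\,\mathrm{d}t
                 \;+\; \sigma \sqrt{X_t}\,\mathrm{d}W_t,
  \qquad
  S_{\delta}(t) \;=\; a_{\delta} + b_{\delta}\, X_t .
\]

In such an affine class, bond and swap quantities stay in exponentially affine form, which is what makes simultaneous calibration to market instruments and econometric estimation tractable.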
Evolutionary Algorithms: what can they do for you?
Friday, 8.7.22, 12:00-13:00, online: Zoom
Evolutionary Algorithms (EAs) are model-free, population-based methods that generally include mechanisms inspired by nature (e.g., concepts from Darwinian evolution) and solve problems through processes that emulate the behaviors of living organisms. EAs consist of a method of initializing a population, mutation, crossover and selection operations, and a notion of fitness. The population of candidate solutions to a problem is first initialized randomly. Then the population is tested for fitness: how well and how quickly each individual solves the problem. The fittest individuals are selected for reproduction through mutation and crossover operations. The cycle begins again as the fitness of the new population is evaluated and the least fit individuals are eliminated. EAs are excellent at optimizing solutions to problems that cannot be solved easily using other techniques, and a seemingly simple EA can often solve complex problems. It is important to note, though, that while EAs optimize effectively, they do not necessarily find the optimal solution. EAs are well known for black-box optimization and have been used successfully in many real-world applications in engineering, economics, bioinformatics, robotics and many other fields.
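As an illustration of the generic loop described above, here is a minimal, self-contained Python sketch. The toy objective (minimizing the sphere function), the tournament selection, the uniform crossover and all rates are arbitrary choices for demonstration, not recommendations for any particular problem.

# Minimal sketch of a generic evolutionary algorithm; all operators and
# constants are illustrative choices.
import random

DIM, POP_SIZE, GENERATIONS = 5, 40, 200
MUTATION_STD, TOURNAMENT = 0.3, 3


def fitness(x):
    # Higher is fitter: negate the sphere function sum(x_i^2), whose optimum is 0.
    return -sum(v * v for v in x)


def tournament_select(pop):
    # Pick the fittest of a few randomly chosen individuals.
    return max(random.sample(pop, TOURNAMENT), key=fitness)


def crossover(a, b):
    # Uniform crossover: each gene comes from either parent with equal probability.
    return [random.choice(pair) for pair in zip(a, b)]


def mutate(x):
    # Gaussian perturbation of every gene.
    return [v + random.gauss(0.0, MUTATION_STD) for v in x]


# 1. Random initialization of the population.
population = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # 2. Evaluate fitness and select parents; 3. create offspring by crossover
    # and mutation. The least fit individuals are eliminated implicitly because
    # the offspring replace the whole population.
    population = [
        mutate(crossover(tournament_select(population), tournament_select(population)))
        for _ in range(POP_SIZE)
    ]

best = max(population, key=fitness)
print("best fitness found:", fitness(best))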
Inferring 3 Parameters from 2 Data Points
Friday, 15.7.22, 12:00-13:00, online: Zoom
Inferring model parameters from measured data, i.e. the “inverse problem”, is a necessary step in evaluating virtually any quantitative model. For linear models, many theoretical results on parameter inference are already established. Specifically, uniquely estimating the maximum-likelihood parameters of a linear model is known to be impossible if fewer data points are available than there are parameters in the model. This is conventionally thought to hold for non-linear models as well, setting a threshold for the minimum number of data points necessary to uniquely estimate all the model parameters. However, it is possible to construct examples in which more parameters than data points can be uniquely estimated, i.e. with a unique best estimate and finite 95%-confidence intervals. This is demonstrated on a model with three unknown parameters which can be estimated from just two data points. This talk introduces the basic problem and discusses the two-data-points-three-parameters example, providing background and intuition as well as possible explanations of why this seems to work.
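As a purely hypothetical illustration of the setting (not the example discussed in the talk), the following Python sketch fits an invented three-parameter model y(t) = a*exp(-b*t) + c to two made-up data points with SciPy's least_squares and restarts the optimizer from random initial guesses as a crude check of whether the best estimate is unique.

# Hypothetical illustration (not the example from the talk): fit a 3-parameter
# model to 2 noisy data points and check, by multi-start optimization, whether
# the best fit looks unique or forms a continuum of equally good solutions.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

t_data = np.array([1.0, 2.0])   # two measurement times (invented)
y_data = np.array([1.3, 0.7])   # two measured values (invented)


def residuals(params):
    a, b, c = params
    return a * np.exp(-b * t_data) + c - y_data


fits = []
for _ in range(20):
    x0 = rng.uniform(0.1, 3.0, size=3)                # random starting point
    fit = least_squares(residuals, x0, method="trf")  # 'trf' handles fewer residuals than parameters
    fits.append((fit.cost, fit.x))

# If the model were structurally identifiable from these data, all restarts
# with (near-)zero cost would land on essentially the same parameter vector;
# otherwise the zero-cost fits differ, revealing non-uniqueness.
for cost, params in sorted(fits, key=lambda f: f[0])[:5]:
    print(f"cost={cost:.2e}  a,b,c={np.round(params, 3)}")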
Inferring 3 Parameters from 2 Data Points
Friday, 29.7.22, 12:00-13:00, online: Zoom