The Mathematics Colloquium is a joint scientific event of the entire Mathematical Institute. It is open to everyone interested and is aimed at students as well as the members and staff of the institute. The colloquium takes place three times per semester on Thursdays at 15:00 s.t. in Hörsaal II, Albertstr. 23b. Afterwards (around 16:15) there is coffee and cookies, to which the invited speaker and all attendees are welcome.
Tree-Forcing Notions
Thursday, 24.10.19, 17:00-18:00, Hörsaal II, Albertstr. 23b
During the 1960s, Cohen and Solovay introduced and developed the method of forcing, which soon became a key technique for building various models of set theory. In particular, this method was crucial for answering questions concerning the use of the axiom of choice to construct non-regular objects (such as non-Lebesgue-measurable sets, non-Baire sets, and ultrafilters) and for analysing the possible sizes of several types of subsets of the reals (such as dominating and unbounded families, and other so-called cardinal characteristics).

One of the key ideas in both cases is the notion of a tree forcing, i.e. a partial order consisting of a specific kind of perfect trees. In this talk, after a brief historical background, we will focus on some results on Silver, Miller and Mathias trees. We will also see applications of infinitary combinatorics and tree forcing in the context of generalized descriptive set theory and the study of social welfare relations on infinite utility streams.
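To make the notion of a tree forcing concrete, here is one standard example touching on the Silver trees mentioned in the abstract (a sketch for orientation, not part of the announcement itself): Silver forcing, whose conditions can be presented either as partial functions or, equivalently, as uniform perfect subtrees of $2^{<\omega}$.

```latex
% Silver forcing: conditions are partial functions p : \omega \to 2
% whose domain has an infinite complement in \omega,
% ordered by reverse inclusion (stronger = more information).
\mathbb{V} = \bigl\{\, p \;:\; p \text{ is a partial function } \omega \to 2,
\ \omega \setminus \operatorname{dom}(p) \text{ infinite} \,\bigr\},
\qquad q \le p \iff q \supseteq p .
```

Identifying a condition $p$ with the tree of all finite binary sequences compatible with $p$ yields a perfect tree that branches uniformly at every level outside $\operatorname{dom}(p)$, which is the "specific kind of perfect trees" picture used in the talk's framing.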
Thursday, 31.10.19, 17:00-18:00, Hörsaal II, Albertstr. 23b
Dynamic learning based on random recurrent neural networks and reservoir computing systems
Thursday, 7.11.19, 18:00-19:00, Hörsaal II, Albertstr. 23b
In this talk we present our recent results on a mathematical explanation for the empirical success of dynamic learning based on reservoir computing. Motivated by their performance in applications ranging from realized volatility forecasting to chaotic dynamical systems, we study approximation and learning based on random recurrent neural networks and more general reservoir computing systems. For different types of echo state networks we obtain high-probability bounds on the approximation error in terms of the network parameters. For a more general class of reservoir computing systems and weakly dependent (possibly non-i.i.d.) input data, we then also derive generalization error bounds based on a Rademacher-type complexity.

The talk is based on joint work with Lyudmila Grigoryeva and Juan-Pablo Ortega.
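For readers unfamiliar with the setup, a minimal echo state network works as follows: the recurrent and input weights are drawn at random and left fixed, the reservoir is driven by the input stream, and only a linear readout is trained (here by ridge regression). The sketch below is an illustrative toy, not the networks analysed in the talk; the task (reproducing a one-step-delayed sine wave), all sizes, and the scaling constants are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 100 reservoir units, 1-dimensional input, 500 time steps.
n_res, n_in, T = 100, 1, 500

# Random, fixed reservoir and input weights (only the readout is trained).
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius below 1
W_in = rng.normal(size=(n_res, n_in))

u = np.sin(0.1 * np.arange(T)).reshape(-1, 1)   # input stream
y = np.roll(u, 1, axis=0)                        # target: input delayed one step

# Drive the reservoir and collect the state trajectory.
X = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])
    X[t] = x

# Linear readout via ridge regression, discarding an initial washout period.
washout, lam = 50, 1e-6
Xw, yw = X[washout:], y[washout:]
W_out = np.linalg.solve(Xw.T @ Xw + lam * np.eye(n_res), Xw.T @ yw)

pred = Xw @ W_out
mse = float(np.mean((pred - yw) ** 2))
```

Keeping the spectral radius of `W` below 1 is one common heuristic for the echo state property (fading memory of past inputs); the talk's results quantify how well such randomly generated systems can approximate and learn, which this toy does not attempt to show.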
Thursday, 21.11.19, 17:00-18:00, Hörsaal II, Albertstr. 23b
Thursday, 19.12.19, 17:00-18:00, Hörsaal II, Albertstr. 23b
Thursday, 23.1.20, 17:00-18:00, Hörsaal II, Albertstr. 23b