Department of Mathematics,
University of California San Diego
****************************
AI seminar
Michael Mahoney
Continuous Network Models for Sequential Predictions
Abstract:
Data-driven machine learning methods such as those based on deep learning are playing a growing role in many areas of science and engineering for modeling time series, including fluid flows and climate data. However, deep neural networks are known to be sensitive to various adversarial environments, and thus out-of-the-box models and methods are often not suitable for mission-critical applications. Hence, robustness and trustworthiness are increasingly important aspects of engineering new neural network architectures and models. In this talk, I will view neural networks for time series prediction through the lens of dynamical systems. First, I will discuss deep dynamic autoencoders and argue that integrating physics-informed energy terms into the learning process can help to improve generalization performance as well as robustness with respect to input perturbations. Second, I will discuss novel continuous-time recurrent neural networks that are more robust and accurate than traditional recurrent units. I will show that leveraging classical numerical methods, such as the higher-order explicit midpoint time integrator, improves the predictive accuracy of continuous-time recurrent units compared to the simpler one-step forward Euler scheme. Finally, I will discuss extensions such as multiscale ordinary differential equations for learning long-term sequential dependencies and a connection between recurrent neural networks and stochastic differential equations.
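As a rough illustration of the integrator comparison, here is a minimal NumPy sketch of a generic continuous-time recurrent cell $\dot h = \tanh(Wh + Ux + b)$ unrolled with the one-step forward Euler scheme versus the explicit midpoint scheme; the cell, weights, and step size are hypothetical illustrations, not the speaker's architecture.

```python
import numpy as np

def cell_rhs(h, x, W, U, b):
    """RHS of a generic continuous-time recurrent cell: dh/dt = tanh(W h + U x + b)."""
    return np.tanh(W @ h + U @ x + b)

def step_euler(h, x, dt, W, U, b):
    """One-step forward Euler update (first-order accurate)."""
    return h + dt * cell_rhs(h, x, W, U, b)

def step_midpoint(h, x, dt, W, U, b):
    """Explicit midpoint update (second-order accurate): evaluate the RHS at a half step."""
    h_half = h + 0.5 * dt * cell_rhs(h, x, W, U, b)
    return h + dt * cell_rhs(h_half, x, W, U, b)

# Unroll both integrators over the same input sequence and compare hidden states.
rng = np.random.default_rng(0)
d, k, T, dt = 8, 4, 50, 0.1
W = 0.1 * rng.standard_normal((d, d))
U = 0.1 * rng.standard_normal((d, k))
b = np.zeros(d)
xs = rng.standard_normal((T, k))

h_euler = np.zeros(d)
h_mid = np.zeros(d)
for x in xs:
    h_euler = step_euler(h_euler, x, dt, W, U, b)
    h_mid = step_midpoint(h_mid, x, dt, W, U, b)

# The gap reflects Euler's O(dt) local bias relative to the O(dt^2) midpoint rule;
# it shrinks as dt decreases.
print(np.linalg.norm(h_euler - h_mid))
```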
Speaker’s Bio:
Michael W. Mahoney is at the University of California at Berkeley in the Department of Statistics and at the International Computer Science Institute (ICSI). He is also an Amazon Scholar as well as head of the Machine Learning and Analytics Group at the Lawrence Berkeley National Laboratory. He works on algorithmic and statistical aspects of modern large-scale data analysis. Much of his recent research has focused on large-scale machine learning, including randomized matrix algorithms and randomized numerical linear algebra, scalable stochastic optimization, geometric network analysis tools for structure extraction in large informatics graphs, scalable implicit regularization methods, computational methods for neural network analysis, physics-informed machine learning, and applications in genetics, astronomy, medical imaging, social network analysis, and internet data analysis. He received his PhD from Yale University with a dissertation in computational statistical mechanics, and he has worked and taught at Yale University in the mathematics department, at Yahoo Research, and at Stanford University in the mathematics department. Among other things, he was on the national advisory committee of the Statistical and Applied Mathematical Sciences Institute (SAMSI), he was on the National Research Council's Committee on the Analysis of Massive Data, he co-organized the Simons Institute's fall 2013 and 2018 programs on the foundations of data science, he ran the Park City Mathematics Institute's 2016 Summer Session on The Mathematics of Data, he ran the biennial MMDS Workshops on Algorithms for Modern Massive Data Sets, and he was the Director of the NSF/TRIPODS-funded FODA (Foundations of Data Analysis) Institute at UC Berkeley. More information is available at https://www.stat.berkeley.
-
CSE 4140 and Zoom
https://ucsd.zoom.us/j/94762135992
****************************
Department of Mathematics,
University of California San Diego
****************************
Center for Computational Mathematics Seminar
Dmitriy Drusvyatskiy
University of Washington
Optimization Algorithms Beyond Smoothness and Convexity
Abstract:
Stochastic iterative methods lie at the core of large-scale optimization and its modern applications to data science. Though such algorithms are routinely and successfully used in practice on highly irregular problems (e.g., deep learning), few performance guarantees are available outside of smooth or convex settings. In this talk, I will describe a framework for designing and analyzing stochastic gradient-type methods on a large class of nonsmooth and nonconvex problems. The problem class subsumes such important tasks as matrix completion, robust PCA, and minimization of risk measures, while the methods include stochastic subgradient, Gauss-Newton, and proximal point iterations. I will describe a number of results, including finite-time efficiency estimates, avoidance of extraneous saddle points, and asymptotic normality of averaged iterates.
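As a rough illustration of the basic iteration studied in the talk, here is a minimal sketch of a stochastic subgradient method on a simple nonsmooth problem (robust regression with absolute loss); the problem instance and step-size rule are illustrative choices, not the framework's general setting.

```python
import numpy as np

# Minimize f(x) = E|a^T x - b| (nonsmooth) with a stochastic subgradient method.
rng = np.random.default_rng(1)
d = 20
x_true = rng.standard_normal(d)
x = np.zeros(d)

for t in range(1, 5001):
    a = rng.standard_normal(d)                    # sample a data point
    b = a @ x_true + 0.1 * rng.standard_normal()  # noisy measurement
    g = np.sign(a @ x - b) * a                    # stochastic subgradient of |a^T x - b|
    x -= g / np.sqrt(t)                           # diminishing step size ~ 1/sqrt(t)

print(np.linalg.norm(x - x_true))  # iterate approaches the signal despite nonsmoothness
```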
-
Zoom ID 954 6624 3503
****************************
Department of Mathematics,
University of California San Diego
****************************
Math 243 - Functional Analysis Seminar
Todd Kemp
UCSD
The Bifree Segal--Bargmann Transform
Abstract:
The classical Segal--Bargmann transform (SBT) is an isomorphism between a real Gaussian Hilbert space and a reproducing kernel Hilbert space of holomorphic functions. It arises in quantum field theory, as a concrete witness of wave-particle duality. Introduced originally in the 1960s, it has been generalized and extended to many contexts: Lie groups (Hall, Driver, late 1980s and early 1990s), free probability (Biane, early 2000s), and more recently $q$-Gaussian factors (Cébron, Ho, 2018).
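For reference, one standard formulation of the classical transform (Hall's heat-kernel form; normalizations vary across the literature) is
$$(B_t f)(z) = (2\pi t)^{-n/2} \int_{\mathbb{R}^n} e^{-(z-x)^2/2t}\, f(x)\, dx, \qquad z \in \mathbb{C}^n,$$
where $(z-x)^2 = \sum_j (z_j - x_j)^2$; for each $t > 0$, $B_t$ maps $L^2(\mathbb{R}^n)$ unitarily onto a reproducing kernel Hilbert space of holomorphic functions on $\mathbb{C}^n$ that are square-integrable against a suitable Gaussian measure.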
In this talk, I will discuss current work with Charlesworth and Ho on a version of the SBT in bifree probability, a "two-faced" version of free probability introduced by Voiculescu in 2014. Our work leads to some interesting new combinatorial structures ("stargazing partitions"), as well as a detailed analysis of the resultant family of reproducing kernels. In the end, the bifree SBT has a surprising connection with the $q$-Gaussian version for some $q \ne 0$.
-
In-person location TBD and on Zoom
Email djekel@ucsd.edu for Zoom info
****************************
Department of Mathematics,
University of California San Diego
****************************
Math 278C - Optimization and Data Science
Uday Shanbhag
Pennsylvania State University
Probability Maximization via Minkowski Functionals: Convex Representations and Tractable Resolution
Abstract:
In this talk, we consider the maximization of a probability $\mathbb{P}\{ \zeta \mid \zeta \in K(x)\}$ over a closed and convex set $\mathcal X$, a special case of the chance-constrained optimization problem. We define $K(x) \triangleq \{ \zeta \in \mathcal{K} \mid c(x,\zeta) \geq 0 \}$, where $\zeta$ is uniformly distributed on a convex and compact set $\mathcal{K}$ and $c(x,\zeta)$ is defined as either $c(x,\zeta) \triangleq 1-|\zeta^T x|^m$ with $m \geq 0$ (Setting A) or $c(x,\zeta) \triangleq Tx - \zeta$ (Setting B). We show that in either setting, by leveraging recent findings in the context of non-Gaussian integrals of positively homogeneous functions, $\mathbb{P}\{ \zeta \mid \zeta \in K(x)\}$ can be expressed as the expectation of a suitably defined continuous function $F(\bullet,\xi)$ with respect to an appropriately defined Gaussian density (or its variant), i.e., $\mathbb{E}_{\tilde p} [F(x,\xi)]$. Aided by a recent observation in convex analysis, we then develop a convex representation of the original problem requiring the minimization of $g(\mathbb{E} [F(x,\xi)])$ over $\mathcal X$, where $g$ is an appropriately defined smooth convex function. Traditional stochastic approximation schemes cannot contend with the minimization of $g(\mathbb{E} [F(\bullet,\xi)])$ over $\mathcal X$; to address this, we develop a new scheme.
To the best of our knowledge, this may be the first such scheme for probability maximization problems with convergence and rate guarantees. Preliminary numerics on a portfolio selection problem (Setting A) and a vehicle routing problem (Setting B) suggest that the scheme competes well with naive mini-batch SA schemes as well as integer programming approximation methods. This is joint work with Ibrahim Bardakci, Afrooz Jalilzadeh, and Constantino Lagoa. Time permitting, a brief summary of ongoing research in hierarchical optimization and games under uncertainty will be provided.
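As a rough illustration of the objective in Setting A, here is a minimal Monte Carlo sketch that estimates $\mathbb{P}\{\zeta \mid \zeta \in K(x)\}$ by direct sampling, taking $\mathcal{K}$ to be the unit Euclidean ball; the set $\mathcal{K}$, the dimension, and $m$ are illustrative assumptions, and this naive sample-average estimator is precisely what the talk's convex representation is designed to circumvent.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_unit_ball(n, d):
    """Draw n points uniformly from the unit Euclidean ball in R^d."""
    g = rng.standard_normal((n, d))
    g /= np.linalg.norm(g, axis=1, keepdims=True)  # uniform directions
    r = rng.random(n) ** (1.0 / d)                 # radii for uniform volume
    return g * r[:, None]

def prob_estimate(x, m=2, n=100_000):
    """Sample-average estimate of P{zeta in K(x)}, K(x) = {zeta : 1 - |zeta^T x|^m >= 0}."""
    zeta = sample_unit_ball(n, x.size)
    return np.mean(1.0 - np.abs(zeta @ x) ** m >= 0.0)

x = np.array([1.5, 1.2])  # a point with norm > 1, so the probability is nontrivial
print(prob_estimate(x))
```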
-
https://ucsd.zoom.us/j/93696624146
Meeting ID: 936 9662 4146
Password: OPT2022SP
****************************
Department of Mathematics,
University of California San Diego
****************************
Final Defense
Jason O'Neill
UCSD
Combinatorics of intersecting set systems
-
Email Jason O'Neill for Zoom link
****************************
Department of Mathematics,
University of California San Diego
****************************
Math 211B - Group Actions Seminar
Yair Hartman
Ben-Gurion University
Tight inclusions
Abstract:
We discuss the notion of "tight inclusions" of dynamical systems, which is meant to capture a certain tension between topological and measurable rigidity of boundary actions, and its relevance to Zimmer-amenable actions. This is joint work with Mehrdad Kalantar.
-
Zoom ID 967 4109 3409
Email an organizer for the password
****************************
Department of Mathematics,
University of California San Diego
****************************
Math 209 - Number Theory Seminar
Gyujin Oh
Princeton University
A cohomological approach to harmonic Maass forms
Abstract:
We interpret a harmonic Maass form as a variant of a local cohomology class of the modular curve. This is not only amenable to algebraic interpretation, but also generalizes nicely to other Shimura varieties, avoiding the barrier of Koecher's principle, which could be useful for developing a generalization of Borcherds lifts. In this talk, we will exhibit what the theory looks like in the case of Hilbert modular varieties.
-
Pre-talk at 1:20 PM
APM 6402 and Zoom
See https://www.math.ucsd.edu/~nts/
****************************
Department of Mathematics,
University of California San Diego
****************************
Postdoc Seminar
Yuming Zhang
UCSD
McKean-Vlasov equations involving hitting times: blow-ups and global solvability
Abstract:
We study two McKean-Vlasov equations involving hitting times. Let $(B(t); t \geq 0)$ be a standard Brownian motion, and let $\tau := \inf\{t \geq 0: X(t) \leq 0\}$ be the hitting time to zero of a given process $X$. The first equation is $X(t) = X(0) + B(t) - \alpha \mathbb{P}(\tau \leq t)$.
We provide a simple condition on $\alpha$ and the distribution of $X(0)$ such that the corresponding Fokker-Planck equation has no blow-up, and thus the McKean-Vlasov dynamics is well-defined for all time $t \geq 0$. We take the PDE approach and develop a new comparison principle.
The second equation is $X(t) = X(0) + \beta t + B(t) + \alpha \log \mathbb{P}(\tau \leq t)$, $t \geq 0$, whose Fokker-Planck equation is non-local. We prove that if $\beta,1/\alpha > 0$ are sufficiently large, the McKean-Vlasov dynamics is well-defined for all time $t \geq 0$. The argument is based on a relative entropy analysis. This is joint work with Erhan Bayraktar, Gaoyue Guo and Wenpin Tang.
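As a rough illustration of the first equation, here is a minimal interacting-particle sketch that replaces $\mathbb{P}(\tau \leq t)$ by the empirical fraction of absorbed particles; the initial law, parameters, and the ordering of absorption and feedback within a time step are illustrative modeling choices, not taken from the paper.

```python
import numpy as np

# Particle approximation of X(t) = X(0) + B(t) - alpha * P(tau <= t), tau = hitting time of 0:
# P(tau <= t) is replaced by the empirical fraction of absorbed particles.
rng = np.random.default_rng(3)
N, T, dt, alpha = 100_000, 1.0, 1e-3, 0.5

X = 1.0 + 0.1 * rng.standard_normal(N)  # hypothetical initial law: X(0) ~ N(1, 0.01)
alive = X > 0.0
loss_prev = 0.0
for _ in range(int(T / dt)):
    X[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())  # Brownian increments
    alive &= X > 0.0                        # absorb particles that crossed zero
    loss = 1.0 - alive.mean()               # empirical P(tau <= t)
    X[alive] -= alpha * (loss - loss_prev)  # mean-field feedback pushes survivors down
    alive &= X > 0.0                        # the feedback jump may absorb more particles
    loss_prev = loss

print(loss_prev)  # estimate of P(tau <= T); a blow-up would appear as a jump in t -> loss
```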
-
AP&M B402A
****************************
Department of Mathematics,
University of California San Diego
****************************
AWM Colloquium
Si Tang
Lehigh University
On convergence of the cavity and Bolthausen’s TAP iterations to the local magnetization
Abstract:
The cavity and TAP equations are high-dimensional systems of nonlinear equations of the local magnetization in the Sherrington-Kirkpatrick model. In seminal work, Bolthausen introduced an iterative scheme that produces an asymptotic solution to the TAP equations if the model lies inside the Almeida-Thouless transition line. However, it was unclear if this asymptotic solution coincides with the true local magnetization. In this work, motivated by the cavity equations, we introduce a new iterative scheme and establish a weak law of large numbers. We show that our new scheme is asymptotically the same as the so-called Approximate Message Passing (AMP) algorithm that has been widely adopted in compressed sensing, Bayesian inference, etc. Based on this, we confirm that our cavity iteration and Bolthausen's scheme both converge to the local magnetization as long as the overlap is locally uniformly concentrated. This is a joint work with Wei-Kuo Chen (University of Minnesota).
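As a rough illustration, here is a minimal sketch of an AMP-type iteration of the kind discussed, in one common convention: $m^{k+1} = \tanh(h + \beta G m^k - \beta^2(1-q) m^{k-1})$, where the last term is the Onsager correction and $q$ solves $q = \mathbb{E}\tanh^2(h + \beta\sqrt{q} Z)$; the parameters, initialization, and exact form of the correction are illustrative assumptions, not a transcription of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta, h = 2000, 0.5, 0.3  # high temperature and external field (inside the AT region)

# Symmetric Gaussian couplings with variance 1/n.
G = rng.standard_normal((n, n)) / np.sqrt(n)
G = (G + G.T) / np.sqrt(2.0)

# Solve the replica-symmetric fixed point q = E tanh^2(h + beta sqrt(q) Z) by iteration.
Z = rng.standard_normal(200_000)
q = 0.5
for _ in range(200):
    q = np.mean(np.tanh(h + beta * np.sqrt(q) * Z) ** 2)

# AMP-type iteration with Onsager correction.
m_prev = np.zeros(n)
m = np.full(n, np.sqrt(q))  # one common initialization
for _ in range(50):
    m, m_prev = np.tanh(h + beta * (G @ m) - beta**2 * (1.0 - q) * m_prev), m

print(np.mean(m**2), q)  # the empirical overlap of the iterate should be close to q
```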
-
https://ucsd.zoom.us/j/97738771432
Zoom ID: 977 3877 1432
****************************
Department of Mathematics,
University of California San Diego
****************************
Advancement to Candidacy
Patrick Girardet
UCSD
On the cohomology of Quot schemes
-
https://ucsd.zoom.us/j/6593549582
Zoom meeting ID: 659 354 9582
****************************