Fridays 3:30pm in KAP 414; tea is usually provided at 3:00pm
Organizer: Stanislav Minsker
For seminars/colloquia on other topics, see the Department of Mathematics webpage.
Older seminars: Fall 2016, Spring 2016, Fall 2015, Spring 2015, Fall 2014, 2013-2014
Spring 2017 seminars
January 13: Søren Asmussen (Aarhus University)
Lévy processes, phase-type distributions, and martingales
Lévy processes are defined as processes with stationary, independent increments. In finance, they generalize the Black-Scholes model by accommodating jumps and non-Gaussian marginals; popular examples include the variance gamma, CGMY, and NIG processes. We survey how a number of relevant quantities can be computed explicitly by restricting to the dense class of compound Poisson processes with phase-type jumps in both directions and an added Brownian component (phase-type distributions are certain generalizations of the exponential distribution). The solutions are expressed in terms of roots of polynomials, and the basic equations are derived by purely probabilistic arguments using martingale optional stopping; the method also applies to regime switching. The approach is illustrated via a worked-out numerical example dealing with equity default swaps.
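As a minimal illustration of the class of processes in the abstract, the sketch below simulates a path of a Lévy process consisting of a Brownian component plus a compound Poisson part whose jumps have an Erlang distribution, the simplest phase-type law (a sum of independent exponentials). All parameter values and names here are illustrative, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_levy_path(T=1.0, n_steps=1000, lam=5.0, sigma=0.4,
                       jump_shape=2, jump_rate=3.0):
    """Simulate X_t = sigma*B_t + compound Poisson sum with Erlang jumps.

    Erlang(jump_shape, jump_rate) is a simple phase-type distribution;
    parameters are illustrative only.
    """
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    # Brownian component increments
    dB = rng.normal(0.0, np.sqrt(dt), n_steps)
    # Number of Poisson jumps falling in each small interval
    n_jumps = rng.poisson(lam * dt, n_steps)
    # Sum of Erlang-distributed jump sizes in each interval
    jumps = np.array([rng.gamma(jump_shape, 1.0 / jump_rate, k).sum()
                      for k in n_jumps])
    increments = sigma * dB + jumps
    X = np.concatenate([[0.0], np.cumsum(increments)])
    return t, X

t, X = simulate_levy_path()
```

The explicit computations discussed in the talk (e.g. first-passage quantities) exploit the rational structure that phase-type jumps induce; the simulation above only shows what a sample path of such a process looks like.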
January 20: Adel Javanmard (USC Marshall School of Business)
Online Rules for Control of False Discovery Rate
Multiple hypothesis testing is a core problem in statistical inference and arises in almost every scientific field. For a given set of null hypotheses, Benjamini and Hochberg introduced the notion of false discovery rate (FDR), which is the expected proportion of false positives among rejected null hypotheses, and further proposed a testing procedure that controls FDR below a pre-assigned significance level. Nowadays FDR is the criterion of choice for large-scale multiple hypothesis testing. In this talk, we consider the problem of controlling FDR in an “online manner”. Concretely, we consider an ordered, possibly infinite, sequence of null hypotheses where at each step the statistician must decide whether to reject the current null hypothesis, having access only to the previous decisions. We introduce a class of generalized alpha-investing procedures and prove that any rule in this class controls FDR in an online manner.
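To make the online setting concrete, here is a toy rule in the spirit of the generalized alpha-investing procedures mentioned above: a wealth parameter distributes significance levels over time via a summable sequence, and the levels are refreshed after each rejection. This is a simplified illustration, not the exact procedure analyzed in the talk; the parameter names and the specific sequence are assumptions.

```python
import numpy as np

def online_testing(pvals, w0=0.05, b0=0.045):
    """Toy online rule in the spirit of generalized alpha-investing.

    gamma is a summable sequence spreading the available "wealth"
    over future tests; after a rejection the sequence restarts with
    payout b0.  Illustrative only, not the talk's exact procedure.
    """
    m = len(pvals)
    j = np.arange(1, m + 1)
    gamma = 1.0 / (j * (j + 1))
    gamma /= gamma.sum()          # normalize to sum to one
    decisions = []
    last_rej = 0                  # time of most recent rejection (0 = none)
    for t, p in enumerate(pvals, start=1):
        alpha_t = gamma[t - last_rej - 1] * (w0 if last_rej == 0 else b0)
        reject = p <= alpha_t
        decisions.append(reject)
        if reject:
            last_rej = t
    return decisions
```

Note how each decision at time t uses only the p-value p_t and the history of earlier decisions, matching the online constraint described in the abstract.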
January 27: Venkat Chandrasekaran (Caltech)
Learning Regularizers from Data
Regularization techniques are widely employed in the solution of inverse problems in data analysis and scientific computing due to their effectiveness in addressing difficulties due to ill-posedness. In their most common manifestation, these methods take the form of penalty functions added to the objective in optimization-based approaches for solving inverse problems. The purpose of the penalty function is to induce a desired structure in the solution, and these functions are specified based on prior domain-specific expertise. For example, regularization is useful for promoting smoothness, sparsity, low energy, and large entropy in solutions to inverse problems in image analysis, statistical model selection, and the geosciences.
We consider the problem of learning suitable regularization functions from data in settings in which precise domain knowledge is not directly available; the objective is to identify a regularizer to promote the type of structure contained in the data. The regularizers obtained using our framework are specified as convex functions that can be computed efficiently via semidefinite programming, and they can be employed in tractable convex optimization approaches for solving inverse problems. Our approach for learning such semidefinite regularizers is based on computing certain structured factorizations of data matrices. We propose a method for this task that combines recent techniques for rank minimization problems along with the Operator Sinkhorn iteration. We discuss some of the theoretical properties of our algorithm as well as its utility in practice.
(Joint work with Yong Sheng Soh)
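The Operator Sinkhorn iteration mentioned above acts on completely positive maps; its classical matrix special case, alternately normalizing rows and columns of a positive matrix until it becomes doubly stochastic, gives the flavor of the method. A minimal sketch of that classical version:

```python
import numpy as np

def sinkhorn(A, n_iter=200):
    """Classical Sinkhorn iteration: alternately normalize rows and
    columns of an entrywise-positive matrix A.  The iterates converge
    to a doubly stochastic scaling D1 @ A @ D2.  (The Operator Sinkhorn
    iteration used in the talk is the analogue for completely positive
    maps; this is only the matrix special case.)"""
    S = np.array(A, dtype=float)
    for _ in range(n_iter):
        S /= S.sum(axis=1, keepdims=True)  # row normalization
        S /= S.sum(axis=0, keepdims=True)  # column normalization
    return S

S = sinkhorn(np.random.default_rng(1).uniform(0.1, 1.0, (4, 4)))
```

After convergence both the row sums and column sums are (numerically) all one.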
February 3: Edmond Jonckheere (USC, Department of Electrical Engineering)
Jonckheere-Terpstra test for nonclassical error versus log-sensitivity relationship of quantum spin network controllers
Selective information transfer in spin ring networks by landscape-shaping control has the property that the error $1-p$, where $p$ is the transfer success probability, and the sensitivity of that probability to spin coupling errors are “positively correlated”, meaning that both are statistically increasing across a family of controllers of increasing error. Here, we examine the rank correlation between the error and another measure of performance, the logarithmic sensitivity, used in robust control to formulate fundamental limitations. In contrast to the error-versus-sensitivity case, the correlation between the error and the logarithmic sensitivity is less obvious: its weaker trend is hard to detect because of the noisy behavior of the logarithmic sensitivity across controllers of increasing error, numerically optimized in a challenging landscape. As a result, the Kendall tau test for rank correlation between the error and the log sensitivity is pessimistic, with poor confidence. It is shown here that the Jonckheere-Terpstra test, because it tests the alternative hypothesis of an ordering of the medians of some groups of log-sensitivity data, alleviates this problem and hence singles out cases of anti-classical behavior of “positive correlation” between the error and the logarithmic sensitivity.
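For readers unfamiliar with the Jonckheere-Terpstra test, the sketch below computes its statistic for k samples under a hypothesized ordering of the group medians, together with the standard no-ties normal approximation. This is a generic textbook implementation, not code from the talk.

```python
import numpy as np

def jonckheere_terpstra(groups):
    """Jonckheere-Terpstra statistic for samples with hypothesized
    ordering groups[0] <= groups[1] <= ... (in terms of medians).

    J is the sum of pairwise Mann-Whitney counts over ordered group
    pairs; large J supports the ordered alternative.  The normal
    approximation (mean, variance) assumes no ties."""
    J = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            gj = np.asarray(groups[j])
            for x in groups[i]:
                J += np.sum(gj > x) + 0.5 * np.sum(gj == x)
    n = np.array([len(g) for g in groups])
    N = n.sum()
    mean = (N**2 - np.sum(n**2)) / 4.0
    var = (N**2 * (2 * N + 3) - np.sum(n**2 * (2 * n + 3))) / 72.0
    z = (J - mean) / np.sqrt(var)
    return J, z
```

For three perfectly ordered groups such as [1,2,3], [4,5,6], [7,8,9], every cross-group pair is concordant with the ordering, giving the maximal statistic.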
February 10: Denis Chetverikov (UCLA, Department of Economics)
On cross-validated Lasso
We derive a rate of convergence of the Lasso estimator when the penalty parameter $\lambda$ for the estimator is chosen using $K$-fold cross-validation; in particular, we show that in the model with Gaussian noise and under fairly general assumptions on the candidate set of values of $\lambda$, the prediction norm of the estimation error of the cross-validated Lasso estimator is with high probability bounded from above up to a constant by $(s\log p /n)^{1/2}\cdot(\log^{7/8}(p n))$, where $n$ is the sample size of available data, $p$ is the number of covariates, and $s$ is the number of non-zero coefficients in the model. Thus, the cross-validated Lasso estimator achieves the fastest possible rate of convergence up to a small logarithmic factor $\log^{7/8}(p n)$. In addition, we derive a sparsity bound for the cross-validated Lasso estimator; in particular, we show that under the same conditions as above, the number of non-zero coefficients of the estimator is with high probability bounded from above up to a constant by $s\log^5(p n)$. Finally, we show that our proof technique generates non-trivial bounds on the prediction norm of the estimation error of the cross-validated Lasso estimator even if the assumption of Gaussian noise fails; in particular, the prediction norm of the estimation error is with high probability bounded from above up to a constant by $(s\log^2(p n)/n)^{1/4}$ under mild regularity conditions.
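The estimator under study can be sketched in a few lines: Lasso solved by cyclic coordinate descent, with $\lambda$ selected by $K$-fold cross-validation over a candidate grid. This is a generic implementation of the standard procedure the abstract analyzes, with illustrative grid and parameter choices.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent:
    minimize (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X**2).sum(axis=0) / n       # per-coordinate curvature
    r = y - X @ b                         # current residual
    for _ in range(n_iter):
        for j in range(p):
            rho = X[:, j] @ (r + X[:, j] * b[j]) / n
            bj_new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r += X[:, j] * (b[j] - bj_new)  # keep residual in sync
            b[j] = bj_new
    return b

def cv_lasso(X, y, lams, K=5, seed=0):
    """Choose lambda by K-fold cross-validated prediction error,
    then refit on the full data."""
    n = len(y)
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, K)
    errs = np.zeros(len(lams))
    for k in range(K):
        test = folds[k]
        train = np.concatenate([folds[i] for i in range(K) if i != k])
        for m, lam in enumerate(lams):
            b = lasso_cd(X[train], y[train], lam)
            errs[m] += np.mean((y[test] - X[test] @ b) ** 2)
    best = lams[int(np.argmin(errs))]
    return best, lasso_cd(X, y, best)
```

The theoretical results above bound the prediction error and the sparsity of exactly this kind of cross-validated fit, without requiring an oracle choice of $\lambda$.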
February 24: Julian Gold (UCLA, Department of Mathematics)
Isoperimetric shapes in supercritical bond percolation
We study the isoperimetric subgraphs of the infinite cluster $\textbf{C}_\infty$ of supercritical bond percolation on $\mathbb{Z}^d$, $d \geq 3$. We prove a shape theorem for these random graphs, showing that upon rescaling they tend almost surely to a deterministic shape. This limit shape is itself an isoperimetric set for a norm we construct. In addition, we obtain sharp asymptotics for a modification of the Cheeger constant of $\textbf{C}_\infty \cap [-n,n]^d$, settling a conjecture of Benjamini for this modified Cheeger constant. Analogous results are shown for the giant component in dimension two, with the caveat that we use the original definition of the Cheeger constant here, and a more complicated continuum isoperimetric problem emerges as a result.
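To make the central quantity concrete, the sketch below computes the (original, unmodified) Cheeger constant of a small finite graph by brute force over vertex subsets; the abstract's results concern asymptotics of this kind of quantity on large percolation clusters, which of course cannot be brute-forced.

```python
from itertools import combinations

def cheeger_constant(adj):
    """Brute-force Cheeger (isoperimetric) constant of a finite graph:
    min over vertex sets S with 0 < |S| <= |V|/2 of
    (# edges leaving S) / |S|.  Exponential in |V|; toy sizes only."""
    n = len(adj)
    best = float("inf")
    for size in range(1, n // 2 + 1):
        for subset in combinations(range(n), size):
            S = set(subset)
            boundary = sum(1 for u in S for v in range(n)
                           if v not in S and adj[u][v])
            best = min(best, boundary / len(S))
    return best
```

For the 6-cycle, the optimal set is a path of three consecutive vertices, with two boundary edges, so the constant is 2/3.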
March 3: Arash Amini (UCLA, Department of Statistics)
Semidefinite relaxations of the block model
Community detection has emerged in recent years as one of the fundamental problems of network analysis. Informally, one seeks to partition the network into cohesive groups of nodes, or communities, that reveal its large-scale connective structure. There is now a well-established mathematical model, namely, the Stochastic Block Model (SBM), which allows for a precise notion of community. The model has a number of attractive features, among them, a deep connection to the general class of exchangeable random graphs and the ability to approximate complex nonparametric models while being analytically tractable to study theoretical limitations of community detection. The flexibility and expressive power of SBM, however, come at the expense of computational intractability. The optimal way of fitting SBM, via maximum likelihood estimation (MLE), requires combinatorial search in the worst case. In this talk, we consider some of the proposed semidefinite programming (SDP) relaxations of the problem and clarify their relations by viewing them within a unified framework, namely, as relaxations of the MLE over various sub-classes of SBM. We also derive a tighter relaxation for the balanced case where the communities are of equal size. We show that this new relaxation is consistent over a broader class of SBMs which we call the weakly assortative class. Previous consistency results for SDP relaxations require a stronger notion of assortativity, and we show that this stronger notion is indeed necessary in those cases.
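For concreteness, the sketch below samples a two-community SBM and recovers the communities. Note the recovery step here uses a simple spectral method (sign of the second eigenvector of the adjacency matrix) as a stand-in for the SDP relaxations discussed in the talk, which require a semidefinite solver; all parameters are illustrative.

```python
import numpy as np

def sbm_two_blocks(n, p, q, seed=0):
    """Sample a symmetric 2-block SBM adjacency matrix: within-block
    edge probability p, between-block probability q; blocks of size n."""
    rng = np.random.default_rng(seed)
    labels = np.repeat([0, 1], n)
    P = np.where(labels[:, None] == labels[None, :], p, q)
    U = rng.uniform(size=(2 * n, 2 * n))
    A = np.triu((U < P).astype(float), k=1)   # upper triangle, no loops
    return A + A.T, labels

def spectral_partition(A):
    """Split nodes by the sign pattern of the eigenvector for the
    second-largest eigenvalue of A (a spectral stand-in for the SDP)."""
    vals, vecs = np.linalg.eigh(A)           # ascending eigenvalues
    v = vecs[:, -2]
    return (v > 0).astype(int)
```

In the assortative regime (p > q) with a large enough gap, this recovers the planted communities up to a global label swap.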
March 10: Tomoyuki Ichiba (UC Santa Barbara, Department of Statistics and Applied Probability)
Walsh semimartingales and diffusion on metric graphs
In this talk we shall discuss diffusion on metric graphs. We start with a change-of-variable formula of Freidlin-Sheu type for Walsh semimartingales on a star graph. In the diffusion case we characterize such processes via a martingale problem. As a consequence of folding/unfolding the semimartingale, we obtain a system of degenerate stochastic differential equations, and we examine its solution and convergence properties. The stationary distribution, the strong Markov property and related statistical problems are also discussed. Then we extend our considerations to diffusion on metric graphs. Part of this talk is based on joint work with I. Karatzas, V. Prokaj, A. Sarantsev and M. Yan.
April 7: Xunyu Zhou (Columbia University, Department of Industrial Engineering and Operations Research)
Discounting, Diversity, and Investment
This paper presents the class of weighted discount functions, which contains the discount functions commonly used in economics and finance. Weighted discount functions also describe the discounting behavior of groups, and they admit a natural notion of group diversity in time preference. We show that more diverse groups discount less heavily, and make more patient decisions. Greater group diversity leads to more risk-taking and delayed investment in a real options setting. We further provide a general result on the timing behavior of groups, and link it to that of individuals who are time-inconsistent. This is joint work with Sebastian Ebert and Wei Wei.
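A one-line instance of the "more diverse groups discount less" phenomenon, under the simplifying assumption that each group member discounts exponentially, so the group's weighted discount function is a mixture of exponentials:

```latex
% Group discount function as a weighted average of exponential discounting:
h(t) \;=\; \sum_i w_i\, e^{-r_i t}, \qquad w_i \ge 0, \quad \sum_i w_i = 1 .
% Since r \mapsto e^{-rt} is convex for each fixed t \ge 0, Jensen's
% inequality gives
h(t) \;=\; \sum_i w_i\, e^{-r_i t} \;\ge\; e^{-\bar r t},
\qquad \bar r = \sum_i w_i r_i ,
% with strict inequality for t > 0 unless all rates r_i coincide.
```

So a heterogeneous group discounts any future payoff less heavily than a homogeneous group with the same average rate, consistent with the abstract's claim.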
April 14: Badal Joshi (California State University San Marcos)
Graphical equilibria and balanced stationary distributions in reaction networks
Reaction networks are commonly used to model a wide variety of biochemical systems. Deterministic models have traditionally been used in chemistry, where large numbers of molecules are involved. In biochemistry, the numbers of molecules are often small, and stochastic modeling thus becomes essential. We explain the connection between the mass-action stochastic and deterministic models. We then focus on the underlying graphical structure of the reaction network and the graphical structure of the allowed transitions in the state space of the discrete/stochastic model. We explore the relationships between the symmetry properties of the reaction network, symmetry properties of the underlying network of the allowed transitions in the stochastic model, the deterministic equilibria, and the stationary distributions in the stochastic model.
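The discrete/stochastic model referred to above is a continuous-time Markov chain on molecule counts, typically simulated by the Gillespie algorithm. A minimal sketch for the toy reversible network A &lt;-&gt; B under mass-action kinetics (rates and network chosen for illustration, not from the talk):

```python
import numpy as np

def gillespie(x0, T, seed=0):
    """Gillespie simulation of the toy mass-action network
    A -> B (propensity k1 * #A) and B -> A (propensity k2 * #B).
    Returns jump times and the state after each jump."""
    k1, k2 = 1.0, 0.5
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=int)
    times, states = [t], [x.copy()]
    while t < T:
        rates = np.array([k1 * x[0], k2 * x[1]])
        total = rates.sum()
        if total == 0:
            break                        # absorbing state
        t += rng.exponential(1.0 / total)
        if rng.uniform() < rates[0] / total:
            x += [-1, 1]                 # reaction A -> B fires
        else:
            x += [1, -1]                 # reaction B -> A fires
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)
```

Total molecule count is conserved along every trajectory of this network, a simple example of the kind of structural property (here, a conservation law) that constrains the stationary distribution.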
April 21: Peter Baxendale (USC, Department of Mathematics)
Some very large deviations
Motivated by a problem in financial mathematics, we study the small time behavior of an additive functional of a fast diffusion process. The resulting behavior depends on the relative sizes of the small time and the speed of the diffusion. A simple time change converts the problem into one of moderate, or large, or very large deviations.
After reviewing results for the sums of i.i.d. random variables (Cramér's theorem and beyond) we will identify the rate function for our very large deviation problem in terms of viscosity solutions of an associated deterministic control problem.
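For reference, the rate function in Cramér's theorem is the Legendre transform of the cumulant generating function; the standard Gaussian case works out in one line:

```latex
% Cramer's theorem: for i.i.d. X_i with cumulant generating function
% \Lambda(\lambda) = \log \mathbb{E}\, e^{\lambda X_1} < \infty, the
% empirical mean satisfies a large deviation principle with rate function
I(x) \;=\; \sup_{\lambda \in \mathbb{R}} \bigl( \lambda x - \Lambda(\lambda) \bigr).
% Example: X_1 \sim N(0,1) gives \Lambda(\lambda) = \lambda^2 / 2; the
% supremum is attained at \lambda = x, so
I(x) \;=\; \frac{x^2}{2}.
```

The talk's "very large" deviations concern a different scaling regime, where the rate function is instead characterized through a deterministic control problem.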
April 28: James-Michael Leahy (USC, Department of Mathematics)
On solutions of degenerate linear stochastic integro-differential equations in Sobolev spaces
We consider the question of existence, uniqueness, and regularity of solutions of degenerate systems of linear stochastic integro-differential equations with variable coefficients in the scale of integer Sobolev spaces $W^{m,p}$, $m, p \geq 1$. The main motivation for considering this class of equations is the non-linear filtering problem for jump-diffusion processes with correlated noise. The regularity of solutions of these equations is intimately related to establishing convergence rates of numerical approximations and to deriving the existence of a conditional density. The main technique we employ is the method of vanishing viscosity, which relies on a priori estimates of solutions in the $W^{m,p}$-norms.