Research

Total Variation Denoising

Noah Simon and I have worked on finite-dimensional approximations to nonparametric penalized regression problems. By approximating penalties on a mesh, our approach renders many otherwise difficult problems tractable, yielding solutions that are both computable and efficient. We used entropy-based bounds and empirical-process techniques to characterize the statistical properties of our regression method. Our latest efforts focus on implementing readily available approximations to solutions of total variation problems in any number of features.
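To make the mesh approximation concrete, here is a minimal sketch (not the authors' actual estimator) of the discretized objective in one dimension: a squared-error fit term plus a total variation penalty on first differences over the mesh. The function name and interface are illustrative assumptions.

```python
import numpy as np

def tv_objective(theta, y, lam):
    """Illustrative discretized TV objective on a 1-D mesh:
    0.5 * ||y - theta||^2 + lam * sum_i |theta[i+1] - theta[i]|.
    theta holds the fitted values at the mesh points."""
    fit = 0.5 * np.sum((y - theta) ** 2)
    penalty = lam * np.sum(np.abs(np.diff(theta)))
    return fit + penalty
```

Because the penalty only involves differences between neighboring mesh points, a constant fit incurs zero penalty, and the objective stays finite-dimensional no matter how rough the underlying function is.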

We are developing an R package, “MultivarTV,” that implements efficient procedures, written in C, for fitting approximate solutions to multivariate total variation denoising problems. The algorithm uses the alternating direction method of multipliers (ADMM), as described by Boyd et al. (2011). Please see the MultivarTV repository for more details.
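The ADMM iterations can be sketched as follows for the one-dimensional case; this is an illustrative Python outline under standard ADMM splitting (Boyd et al., 2011), not the package's C implementation, which handles the general multivariate problem. The function name and defaults are assumptions.

```python
import numpy as np

def tv_denoise_admm(y, lam, rho=1.0, n_iter=200):
    """Sketch of 1-D total variation denoising via ADMM:
    minimize 0.5 * ||y - x||^2 + lam * ||D x||_1,
    where D takes first differences, using the split z = D x."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)        # (n-1, n) difference matrix
    A = np.eye(n) + rho * D.T @ D          # fixed x-update system
    x = y.astype(float).copy()
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)                    # scaled dual variable
    for _ in range(n_iter):
        # x-update: quadratic subproblem -> linear solve
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        Dx = D @ x
        # z-update: soft thresholding (prox of the l1 norm)
        w = Dx + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)
        # dual update
        u = u + Dx - z
    return x
```

Each iteration alternates a linear solve, an elementwise shrinkage, and a dual ascent step, which is what makes ADMM attractive for these nonsmooth penalties.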

A paper is currently in submission; a version is available on arXiv.

Reinforcement Learning/Bayesian Programming

Amitabh Sinha and I have collaborated on using image transformations of physical transportation networks to learn future network structure. Network connectivity is a classical operations research problem, where the approach is typically global: decide which pairs of facilities will have trucks while incorporating constraints across the network, such as facility-specific capacities, truck capacities, or speed objectives. However, these global approaches are difficult to scale and are not robust to changing network realities such as weather, resource changes (e.g., trucking companies going down), or, in extreme cases, pandemics. We would prefer a network that adapts to these emergent trends. Our approach is to learn the features that govern connectivity between pairs of facilities and to reinforce the likelihood of connection based on observed reality via Thompson sampling.
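The reinforcement step can be illustrated with a standard Beta-Bernoulli Thompson sampling loop over candidate facility pairs; this is a generic sketch of the technique, not the actual model, and the function names, priors, and simulation setup are assumptions.

```python
import random

def thompson_select(alpha, beta_):
    """Draw a connection probability for each candidate edge from its
    Beta(alpha, beta) posterior and pick the edge with the largest draw."""
    draws = [random.betavariate(a, b) for a, b in zip(alpha, beta_)]
    return max(range(len(draws)), key=draws.__getitem__)

def simulate(true_p, n_rounds=500, seed=0):
    """Beta-Bernoulli Thompson sampling over candidate edges, where
    true_p[i] is the (unknown) probability that connecting pair i
    succeeds under current network conditions."""
    random.seed(seed)
    alpha = [1.0] * len(true_p)   # uniform Beta(1, 1) priors
    beta_ = [1.0] * len(true_p)
    for _ in range(n_rounds):
        edge = thompson_select(alpha, beta_)
        success = random.random() < true_p[edge]   # observed reality
        if success:
            alpha[edge] += 1.0
        else:
            beta_[edge] += 1.0
    # posterior mean connection probability per edge
    return [a / (a + b) for a, b in zip(alpha, beta_)]
```

Because each round samples from the posterior rather than exploiting the current best estimate, the network keeps probing alternative connections and can shift quickly when conditions (weather, resources) change the underlying success probabilities.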

We have submitted part of this work; a version of the paper can be found on arXiv.

Graph Representation Learning

In collaboration with Akhilesh Soni, Mehdi Golari, and Da Zheng, we have been considering neural-network methods to overcome the combinatorial challenges of large optimization problems. A major challenge for large networks with many constraints is solving for an optimal network, since the number of feasible combinations grows explosively with the size of the network. The primary goal has been to generate embeddings across inputs to various instances of an optimization problem such that we can then learn an ego-network. This work is currently in submission.
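As background for the ego-network component, here is a minimal sketch of extracting the radius-k ego-network around a node from an adjacency list via breadth-first search; this is a generic illustration, not the learned model, and the function name and data layout are assumptions.

```python
from collections import deque

def ego_network(adj, center, radius=1):
    """Return the node set of the ego-network: all nodes within
    `radius` hops of `center` in the graph given by the adjacency
    dict `adj` (node -> iterable of neighbors)."""
    seen = {center: 0}            # node -> hop distance from center
    queue = deque([center])
    while queue:
        node = queue.popleft()
        if seen[node] == radius:  # do not expand past the radius
            continue
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen[nbr] = seen[node] + 1
                queue.append(nbr)
    return set(seen)
```

The ego-network gives a local, fixed-radius view of the graph, which keeps the learning target small even when the full optimization instance is very large.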