I am an Associate Professor of Statistics (with tenure) and also hold a courtesy appointment in the Department of Biostatistics. I currently serve on the editorial boards of
Bernoulli and the
Journal of the Royal Statistical Society: Series B.
I develop methods that convert predictive and exploratory machine learning (ML) algorithms into principled tools for inference, using ideas from convex analysis, large-deviation and nonparametric theory, and generative modeling. Visit Research to explore the contributions of this work with collaborators and team members.
My current research is funded through grants from both the National Science Foundation and the National Institutes of Health.
My work has been recognized with the
CAREER Award for early-career faculty from the National Science Foundation in the 2023 funding cycle and the
Bernoulli Society’s New Researcher Award in Mathematical Statistics for 2025. I have been an elected member of the
International Statistical Institute since 2021.
Flexible Data-Adaptive Approaches to Modern Inference: Under this theme, my work focuses on developing new methods for sound statistical inference on data-adaptive targets. These methods draw on ideas from [generative modeling] and from [privacy and large language models]. Common to these techniques is that they bypass the difficult task of deriving analytical descriptions of the selection process, which are typically case-specific.
Inference for unsupervised learning algorithms: My more recent efforts focus on developing inferential methodologies for data-adaptive targets arising in various learning settings, including unsupervised learning. This line of work introduces novel inferential techniques for widely used exploratory tools such as [PCA] and [clustering].
Antithetic randomization: [Antithetic Gaussian randomization] is proposed as a cross-validation method for reliable assessment of model performance in settings where standard sample splitting is infeasible, such as with non-i.i.d. data. This approach achieves near-zero bias without paying a price in variance, thereby outperforming existing randomization alternatives.
Further work is underway to understand the theoretical properties of this randomization scheme and to explore its application to other problems; a toy two-fold sketch of the idea is given below.
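To make the construction concrete, here is a minimal two-fold caricature in Python. It is only a sketch under a Gaussian model with a known noise level, not the scheme of the paper itself; the helper `antithetic_folds` and the tuning choice `alpha` are illustrative assumptions.

```python
import numpy as np

def antithetic_folds(y, sigma, alpha=1.0, rng=None):
    """Hypothetical helper: two randomized train/test copies of y.

    If y ~ N(mu, sigma^2 I) and omega ~ N(0, sigma^2 I), the copies
    y + alpha*omega and y - omega/alpha are independent Gaussians, so
    each pair acts like an honest train/test split of a single sample.
    Reusing omega with its sign flipped makes the injected noise
    antithetic across the two folds: it cancels on averaging.
    """
    rng = rng or np.random.default_rng()
    omega = rng.normal(scale=sigma, size=y.shape)
    return [(y + alpha * w, y - w / alpha) for w in (omega, -omega)]

# Toy use: assess the sample mean as a predictor of a fresh Gaussian draw.
rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.0, size=500)
errors = [np.mean((test - train.mean()) ** 2)
          for train, test in antithetic_folds(y, sigma=1.0, rng=rng)]
print(f"averaged randomized CV error: {np.mean(errors):.3f}")
```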
Distribution-free selective inference: This line of research expands the scope of selective inference beyond normal data. My [work] provides a first-of-its-kind theoretical basis for data carving, a new class of inferential methods that, like data splitting, reserves a subset of the data for selection but, unlike splitting, uses the full dataset for selective (or post-selection) inference.
The proof techniques developed have advanced selective inference in semi- and nonparametric settings where it was previously unavailable, including [causal effect moderation] and [quantile regression]. A schematic contrast between carving and splitting is sketched below.
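The data-usage pattern that separates carving from splitting can be shown in a few lines of Python. This is only a schematic with an illustrative lasso-based selection; the refit on the full data shown here would still need the selective adjustment (conditioning on the selection event) that the theory supplies, which is omitted.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = X[:, 0] - X[:, 1] + rng.normal(size=n)

# Selection stage: only a random subset of the data chooses the model.
subset = rng.choice(n, size=n // 2, replace=False)
holdout = np.setdiff1d(np.arange(n), subset)
E = np.flatnonzero(Lasso(alpha=0.1).fit(X[subset], y[subset]).coef_)

# Data splitting: inference refits on the held-out half alone.
split_fit = LinearRegression().fit(X[holdout][:, E], y[holdout])

# Data carving: inference reuses ALL n observations, including the half
# that drove selection; validity then requires conditioning on the event
# {lasso on the subset chose E}, the technical step omitted in this sketch.
carve_fit = LinearRegression().fit(X[:, E], y)
print("selected:", E, "| split:", split_fit.coef_, "| carve:", carve_fit.coef_)
```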
Selective inference beyond polyhedral constraints: This line of research develops a simple Gaussian randomization scheme that enables feasible inference for selection events that do not satisfy polyhedral constraints. By contrast, early work in the area, e.g., [Lee et al. (2016)], relies on a polyhedral representation of the selection event.
These randomization techniques make selective inference feasible across a broad range of learning problems, including the selection of [groups of variables] via group lasso–type penalties, models fitted through [multi-task learning], and dependence structure learned through [graph or network analysis]; a minimal randomized-lasso illustration follows.
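As one concrete (and intentionally simple) instance of such a scheme, the sketch below solves a Gaussian-randomized lasso by proximal gradient descent; conditioning on the variables it selects is the step that the linked papers make tractable. The solver `randomized_lasso` and its parameters are illustrative assumptions, not the specific estimators in those papers.

```python
import numpy as np

def randomized_lasso(X, y, lam, tau, n_iter=1000, rng=None):
    """Solve  min_b 0.5*||y - Xb||^2 - omega @ b + lam*||b||_1
    with Gaussian randomization omega ~ N(0, tau^2 I), via ISTA."""
    rng = rng or np.random.default_rng()
    omega = rng.normal(scale=tau, size=X.shape[1])
    step = 1.0 / np.linalg.norm(X, ord=2) ** 2   # 1 / Lipschitz constant
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) - omega          # gradient of the smooth part
        z = b - step * grad
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return b, omega

# Toy run: one strong signal among eight candidate variables.
rng = np.random.default_rng(2)
X = rng.normal(size=(150, 8))
y = 2.0 * X[:, 0] + rng.normal(size=150)
coef, omega = randomized_lasso(X, y, lam=30.0, tau=1.0, rng=rng)
print("selected variables:", np.flatnonzero(coef))
```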
Tractable and powerful selective inference: My work under this theme develops (i) sampling methods that leverage the benefits of a full [Bayesian] machinery; (ii) probabilistic [approximation] techniques that rely on convex analysis and large-deviation theory; and (iii) [exact] methods that identify an appropriate conditioning set.
These advances have led to a general M-estimation framework for selective inference that accommodates both likelihood-based and more general models, as developed [here].
New Papers (2026)
Ronan Perry, Snigdha Panigrahi, and Daniela Witten. Post-selection inference for penalized M-estimators via score thinning. [arxiv]
Papers (2025)
Di Wu, Jacob Bien, and Snigdha Panigrahi. Hierarchical Clustering with Confidence. [arxiv]
Soham Bakshi and Snigdha Panigrahi. Classification Trees with Valid Inference via the Exponential Mechanism. [arxiv]
Sifan Liu and Snigdha Panigrahi. Flexible Selective Inference with Flow-based Transport Maps. [arxiv]
Yiling Huang, Snigdha Panigrahi, Guo Yu, and Jacob Bien. Reluctant Interaction Inference after Additive Modeling. [arxiv]
Sofia Guglielmini, Gerda Claeskens, and Snigdha Panigrahi. Selective Inference in Graphical Models via Maximum Likelihood. [arxiv]
Yiling Huang, Snigdha Panigrahi, and Walter Dempsey. Selective Inference for Sparse Graphs via Neighborhood Selection 2025. Electronic Journal of Statistics (Accepted). [arxiv]
Ronan Perry, Snigdha Panigrahi, Jacob Bien, and Daniela Witten. Inference on the proportion of variance explained in principal component analysis 2025. Journal of the American Statistical Association (Accepted). [arxiv]
Yiling Huang, Sarah Pirenne, Snigdha Panigrahi, and Gerda Claeskens. Selective Inference using Randomized Group Lasso Estimators for General Models 2025. Electronic Journal of Statistics. [arxiv] [publication]
Sifan Liu and Snigdha Panigrahi. Selective Inference with Distributed Data 2025. Journal of Machine Learning Research. [arxiv] [publication]
Snigdha Panigrahi, Jingshen Wang, and Xuming He. Treatment Effect Estimation via Efficient Data Aggregation 2025. Bernoulli. [arxiv] [publication]
Yumeng Wang, Snigdha Panigrahi, and Xuming He. Asymptotically-exact selective inference for quantile regression 2025. Annals of Statistics (Accepted). [arxiv]
Papers (2024 and older)
Soham Bakshi, Yiling Huang, Snigdha Panigrahi, and Walter Dempsey. Inference with Randomized Regression Trees 2024. [arxiv]
Sifan Liu, Snigdha Panigrahi, and Jake A. Soloff. Cross-Validation with Antithetic Gaussian Randomization 2024. [arxiv]
Soham Bakshi, Walter Dempsey, and Snigdha Panigrahi. Selective Inference for Time-varying Effect Moderation 2024. [arxiv]
Kevin Fry, Snigdha Panigrahi, and Jonathan Taylor. Assumption-Lean Data Fission with Resampled Data (Discussion of "Data fission: splitting a single data point") 2024. Journal of the American Statistical Association. [publication]
Snigdha Panigrahi, Kevin Fry, and Jonathan Taylor. Exact Selective Inference with Randomization 2024. Biometrika. [arxiv] [publication]
Snigdha Panigrahi, Natasha Stewart, Chandra Sripada, and Elizaveta Levina. Selective Inference for Sparse Multitask Regression with Applications in Neuroimaging 2024. Annals of Applied Statistics. [arxiv] [publication]
Snigdha Panigrahi. Carving model-free inference 2023. Annals of Statistics. [arxiv] [publication]
Snigdha Panigrahi, Peter W. Macdonald, and Daniel Kessler. Approximate Post-Selective Inference for Regression with the Group LASSO 2023. Journal of Machine Learning Research. [arxiv] [publication]
Snigdha Panigrahi, Shariq Mohammad, Arvind Rao, and Veerabhadran Baladandayuthapani. Integrative Bayesian models using Post-selective Inference: a case study in Radiogenomics 2022. Biometrics. [arxiv] [publication]
Snigdha Panigrahi and Jonathan Taylor. Approximate selective inference via maximum likelihood 2022. Journal of the American Statistical Association. [arxiv] [publication]
Snigdha Panigrahi, Jonathan Taylor, and Asaf Weinstein. Integrative methods for post-selection inference under convex constraints 2021. Annals of Statistics. [arxiv] [publication]
Snigdha Panigrahi, Parthanil Roy, and Yimin Xiao. Maximal Moments and Uniform Modulus of Continuity for Stable Random Fields 2021. Stochastic Processes and their Applications. [arxiv] [publication]
Basil Saeed, Snigdha Panigrahi, and Caroline Uhler. Causal Structure Discovery from Distributions Arising from Mixtures of DAGs 2020. International Conference on Machine Learning. [arxiv] [publication]
Snigdha Panigrahi, Junjie Zhu, and Chiara Sabatti. Selection-adjusted inference: an application to confidence intervals for cis-eQTL effect sizes 2019. Biostatistics. [arxiv] [publication]
Qingyuan Zhao and Snigdha Panigrahi. Selective Inference for Effect Modification: An Empirical Investigation 2019. Observational Studies: Special issue devoted to ACIC. [arxiv] [publication]
Snigdha Panigrahi, Nadia Fawaz, and Ajith Pudhiyaveetil. Temporal Evolution of Behavioral User Personas via Latent Variable Mixture Models 2019. IUI Workshops on Exploratory Search and Interactive Data Analytics. [arxiv] [publication]
Snigdha Panigrahi, Jonathan Taylor, and Sreekar Vadlamani. Kinematic Formula for Heterogeneous Gaussian Related Fields 2018. Stochastic Processes and their Applications. [arxiv] [publication]
Snigdha Panigrahi and Jonathan Taylor. Scalable methods for Bayesian selective inference 2018. Electronic Journal of Statistics. [arxiv] [publication]
Snigdha Panigrahi, Jelena Markovic, and Jonathan Taylor. An MCMC-free approach to post-selective inference 2017. [arxiv]
Xiaoying Tian Harris, Snigdha Panigrahi, Jelena Markovic, Nan Bi, and Jonathan Taylor. Selective sampling after solving a convex problem 2016. [arxiv]
Software
My contributions to software development in the field of selective inference can be tracked here: [Github]