Research
During my undergraduate studies, I have developed research interests in probability and stochastic processes, with applications to interacting particle systems and machine learning. I have also extended my research into the foundations of measure theory and probability theory.
Here are the major research projects I have worked on.
Hausdorff Measure of Cartesian Product of Cantor Sets
Preprint submitted to arXiv.
Hausdorff measure and Hausdorff dimension are useful tools to describe fractals. This paper investigates the bounds on the \( d\log_32 \)-dimensional Hausdorff measure of the \(d\)-fold Cartesian product of the \(\frac{1}{3}\) Cantor set, \(\mathcal C^d\). By applying known theorems on the Hausdorff measure of fractals satisfying the strong open set condition and generalizing what has been done on \(\mathcal C^2\), we compute stricter upper and lower bounds for the Hausdorff measure of \(\mathcal C^d\) for several small integers \(d\).
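For context, the quantity bounded in this project is the \(s\)-dimensional Hausdorff measure of \(\mathcal C^d\) at \(s = d\log_32\), defined through coverings:
\[
\mathcal H^{s}(\mathcal C^{d}) \;=\; \lim_{\delta\to 0}\,\inf\Big\{ \sum_i (\operatorname{diam} U_i)^{s} \;:\; \mathcal C^{d}\subseteq \bigcup_i U_i,\ \operatorname{diam} U_i\le \delta \Big\}.
\]
Here \(s = d\log_32\) is the Hausdorff dimension of \(\mathcal C^d\), since \(\dim_H\mathcal C = \log_32\) and the dimension adds over this product. For \(d = 1\) the classical value is \(\mathcal H^{\log_32}(\mathcal C) = 1\); for \(d\ge 2\) the measure is known only through upper and lower bounds, which this paper sharpens.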
Advisor: Taylor Jones
Mathematical Foundations of Neural Networks and Transformers
Research in progress.
Through this project, we investigate the interacting particle system (IPS) and stochastic differential equation (SDE) models behind recurrent neural networks (RNNs), and how they simplify complicated machine learning architectures.
We explore commonly used structures in machine learning, such as multi-head attention and autoencoders, and gradually move toward the more complicated structures in the Transformer.
We are currently working through the paper "A mathematical perspective on Transformers".
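As a toy illustration of the particle-system viewpoint, the sketch below treats the tokens passing through a self-attention layer as particles on the unit sphere, in the spirit of "A mathematical perspective on Transformers". The dimensions, random query/key matrices, identity value matrix, and step size are illustrative choices of mine, not the setup of that paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 32, 8          # number of tokens (particles) and embedding dimension
h = 0.1               # step size of the discretized dynamics
Q = rng.normal(size=(d, d)) / np.sqrt(d)   # query and key matrices
K = rng.normal(size=(d, d)) / np.sqrt(d)
V = np.eye(d)                               # identity values: pure "attraction" dynamics

def attention_step(X):
    """One Euler step of the attention dynamics: each particle moves toward a
    softmax-weighted average of the others, then is projected back to the sphere."""
    scores = (X @ Q.T) @ (X @ K.T).T / np.sqrt(d)        # (n, n) attention scores
    W = np.exp(scores - scores.max(axis=1, keepdims=True))
    W /= W.sum(axis=1, keepdims=True)                    # row-wise softmax
    X = X + h * W @ (X @ V.T)                            # interaction (drift) term
    return X / np.linalg.norm(X, axis=1, keepdims=True)  # layer-norm-like projection

# initial tokens on the unit sphere
X = rng.normal(size=(n, d))
X = X / np.linalg.norm(X, axis=1, keepdims=True)

for t in range(200):
    X = attention_step(X)

# in this setup the particles tend to cluster: mean pairwise distance shrinks over time
print("mean pairwise distance:",
      np.mean(np.linalg.norm(X[:, None] - X[None, :], axis=-1)))
```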
Advisor: Lu Fei
Adaptive Kernel Learning in Interacting Particle Systems and their Simulations
Poster presented at Southern Regional Council on Statistics (SRCOS) 2025.
We introduce a novel adaptive coefficient learning strategy for the non-parametric estimation of radial interaction kernels in interacting particle systems (IPS), which can be modeled by stochastic differential equations (SDEs). These systems are fundamental in many physical and biological settings, where the underlying interaction is typically unknown. We demonstrate the approach on a simulated particle system with a Lennard-Jones kernel: the unknown kernel is projected onto orthogonal basis functions, with coefficients initially estimated by the least squares estimator (LSE). Our adaptive learning procedure then refines the basis by eliminating less significant coefficients, concentrating the estimation on the more influential basis functions. We present numerical results from 2D simulations, demonstrating the efficacy of this kernel learning approach and discussing its performance with various basis sets.
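The sketch below illustrates the overall pipeline on a toy example: simulate a first-order IPS, regress the observed increments on a basis expansion of the kernel, then prune weak coefficients and refit. The smoothed stand-in kernel, the monomial basis, and the simple hard-thresholding rule are placeholders for the Lennard-Jones kernel, the orthogonal basis, and the adaptive selection rule actually used in the project.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- toy first-order interacting particle system in 2D --------------------
N, T, dt, sigma = 20, 200, 0.01, 0.05

def true_kernel(r):
    # smoothed stand-in: repulsive at short range, weakly attractive farther out
    # (the real project uses the actual Lennard-Jones interaction)
    return (0.5 / (r + 0.3)) ** 3 - (0.5 / (r + 0.3)) ** 6

def drift(X, kernel):
    diff = X[None, :, :] - X[:, None, :]          # (N, N, 2), entries X_j - X_i
    r = np.linalg.norm(diff, axis=-1)             # pairwise distances
    w = np.where(r > 0, kernel(r), 0.0)           # mask the diagonal
    return (w[:, :, None] * diff).mean(axis=1)    # (N, 2) mean-field drift

X = rng.uniform(-1, 1, size=(N, 2))
traj = [X.copy()]
for t in range(T):
    X = X + drift(X, true_kernel) * dt + sigma * np.sqrt(dt) * rng.normal(size=X.shape)
    traj.append(X.copy())
traj = np.array(traj)                             # (T+1, N, 2)

# --- least-squares estimation of the kernel on a basis expansion ----------
K = 8
basis = [lambda r, k=k: r ** k for k in range(K)]  # monomials here; the project uses an orthogonal basis

rows, targets = [], []
for t in range(T):
    Xt = traj[t]
    diff = Xt[None, :, :] - Xt[:, None, :]
    r = np.linalg.norm(diff, axis=-1)
    # column k: drift contribution of basis function k, flattened over particles and coordinates
    cols = [np.where(r > 0, b(r), 0.0)[:, :, None] * diff for b in basis]
    rows.append(np.stack([c.mean(axis=1).ravel() for c in cols], axis=1) * dt)
    targets.append((traj[t + 1] - Xt).ravel())
A, y = np.vstack(rows), np.concatenate(targets)

coef, *_ = np.linalg.lstsq(A, y, rcond=None)       # initial LSE of the coefficients

# --- adaptive refinement: drop the least significant coefficients and refit
keep = np.arange(K)
for _ in range(3):
    weakest = np.argmin(np.abs(coef))
    keep = np.delete(keep, weakest)
    coef, *_ = np.linalg.lstsq(A[:, keep], y, rcond=None)

print("retained basis indices:", keep)
print("estimated coefficients:", np.round(coef, 3))
```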
Advisor: Xiong Wang
Improvement on the Precision of QR Factorization: An Analysis on the Fractional Implementation
Poster and slides presented at Mid-Atlantic Research Exchange (MATRX) 2025.
QR factorization is a fundamental algorithm in computational mathematics, with applications in solving linear systems, eigenvalue problems, and linear regression. Standard implementations based on the classical and modified Gram-Schmidt algorithms are susceptible to floating-point errors and can suffer catastrophic cancellation. Our project investigates a \(\mathbb Q\)-based fractional computation model to reduce roundoff errors, explores alternatives around the issues it raises, and analyzes the trade-off between computational complexity and numerical precision. Our results show that fractional QR achieves better, and in fact arbitrary, numerical precision compared to traditional QR factorization methods.
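The sketch below illustrates the core idea with Python's fractions module: running Gram-Schmidt over the rationals, without normalizing the columns, keeps every entry an exact Fraction and removes roundoff entirely. How the project handles the normalization step (which introduces square roots) and the rest of the implementation details may differ.

```python
from fractions import Fraction

def exact_gram_schmidt(A):
    """Classical Gram-Schmidt over the rationals.

    Returns Q with mutually orthogonal (but unnormalized) columns and an
    upper-triangular R with unit diagonal such that A = Q R exactly.
    Skipping normalization keeps every entry a Fraction, avoiding the
    square roots and roundoff of the floating-point version.
    """
    m, n = len(A), len(A[0])
    cols = [[Fraction(A[i][j]) for i in range(m)] for j in range(n)]
    Q, R = [], [[Fraction(0)] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for k in range(len(Q)):
            num = sum(cols[j][i] * Q[k][i] for i in range(m))
            den = sum(Q[k][i] * Q[k][i] for i in range(m))
            coeff = num / den              # exact rational projection coefficient
            R[k][j] = coeff
            v = [v[i] - coeff * Q[k][i] for i in range(m)]
        R[j][j] = Fraction(1)
        Q.append(v)
    return Q, R                            # Q is stored column-wise

# small example: exact orthogonality, no floating-point cancellation
A = [[1, 1, 1],
     [1, 2, 4],
     [1, 3, 9]]
Q, R = exact_gram_schmidt(A)
print([[str(x) for x in col] for col in Q])
# distinct columns have an exactly zero inner product:
print(sum(Q[0][i] * Q[1][i] for i in range(3)))   # -> 0
```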
Advisor: Mario Micheli