Online dynamic scene reconstruction from monocular input remains a challenging problem due to the costly optimization process and severely insufficient multi-view constraints. In this talk, I will present my recent attempts to address this challenge by leveraging 3D Gaussians as the underlying representation. Specifically, I will first discuss the motivation and goals of this research. Next, I will introduce MonoDy-GS, a semester project I completed last year that focused on online dynamic scene reconstruction from RGB-D input. Finally, I will provide a brief update on my ongoing master's thesis, which aims to improve upon MonoDy-GS in terms of dynamic reconstruction quality.