To optimize neural networks for better prediction accuracy and higher execution performance, researchers often rely on Neural Architecture Search (NAS) and tensor compilers. However, these methods are limited to optimizing existing, manually designed operators. In this presentation, I will introduce Syno, an end-to-end framework I developed to automatically discover novel neural operators with better accuracy and/or speed. Syno constructs candidate operators from a set of fine-grained primitives, guides synthesis with aggressive canonicalization and pruning, and explores the resulting design space with Monte Carlo tree search. This work, which has been submitted to ASPLOS 25, discovers operators that achieve an average 2.06x speedup with less than 1% accuracy loss, even on NAS-optimized models, demonstrating its potential to advance neural network optimization.
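To give a flavor of the search component, below is a minimal, self-contained sketch of Monte Carlo tree search over a toy operator design space. Everything here is a hypothetical illustration, not Syno's actual code: the primitive names, the fixed sequence depth, and the `score` function (a stand-in for the expensive accuracy/latency evaluation of a real candidate) are all invented for the example.

```python
import math
import random

# Hypothetical design space: a candidate "operator" is a sequence of
# DEPTH primitive choices. These names are illustrative only.
PRIMITIVES = ["split", "merge", "shift", "unfold"]
DEPTH = 3

def score(seq):
    # Stand-in for evaluating a candidate (training + benchmarking).
    # Here we simply reward sequences with distinct primitives.
    return len(set(seq)) / DEPTH

class Node:
    def __init__(self, seq=()):
        self.seq = seq
        self.children = {}   # primitive -> child Node
        self.visits = 0
        self.value = 0.0     # accumulated rollout reward

    def is_terminal(self):
        return len(self.seq) == DEPTH

def uct_select(node, c=1.4):
    # Pick the child maximizing the UCT score (exploitation + exploration).
    return max(
        node.children.values(),
        key=lambda ch: ch.value / ch.visits
        + c * math.sqrt(math.log(node.visits) / ch.visits),
    )

def mcts(iterations=500, seed=0):
    rng = random.Random(seed)
    root = Node()
    for _ in range(iterations):
        # 1. Selection: descend while the node is fully expanded.
        node, path = root, [root]
        while not node.is_terminal() and len(node.children) == len(PRIMITIVES):
            node = uct_select(node)
            path.append(node)
        # 2. Expansion: add one untried primitive as a new child.
        if not node.is_terminal():
            untried = [p for p in PRIMITIVES if p not in node.children]
            choice = rng.choice(untried)
            node.children[choice] = node = Node(node.seq + (choice,))
            path.append(node)
        # 3. Simulation: random rollout to a complete sequence.
        seq = list(node.seq)
        while len(seq) < DEPTH:
            seq.append(rng.choice(PRIMITIVES))
        reward = score(tuple(seq))
        # 4. Backpropagation: update statistics along the path.
        for n in path:
            n.visits += 1
            n.value += reward
    # Extract the best sequence greedily by mean reward.
    best, node = (), root
    while node.children:
        node = max(node.children.values(), key=lambda ch: ch.value / ch.visits)
        best = node.seq
    return best

print(mcts())
```

The four phases (selection, expansion, simulation, backpropagation) are the standard MCTS loop; in a real operator-search setting the rollout reward would come from training and profiling the synthesized operator rather than from a toy heuristic.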