Detecting trojans (backdoors) in deep neural networks is critical because of the security risks they pose in real-world deployments. Existing methods often rely on assumptions about the attack type and struggle with trojaned classifiers that have been adversarially trained. To address these challenges, we introduce TRODO (TROjan scanning by Detection of adversarial shifts in Out-of-distribution samples), an attack-agnostic method that identifies "blind spots": regions where trojaned classifiers misclassify out-of-distribution (OOD) samples as in-distribution (ID). By adversarially shifting OOD samples toward the ID region, TRODO detects trojans without prior knowledge of the attack type and without access to the training data. The method is robust, adaptable, and effective across diverse scenarios, including adversarially trained classifiers, making it a promising approach to trojan scanning.
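The core idea can be sketched in code: adversarially perturb an OOD sample to raise a classifier's "ID-ness" score, and use how easily that score shifts as the trojan signal. The sketch below is a minimal illustration, not the paper's implementation: it assumes a linear surrogate classifier, uses maximum softmax probability as the ID-ness score, and applies PGD-style sign-gradient ascent; TRODO's actual scoring and perturbation procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def id_score(W, x):
    # Maximum softmax probability as a proxy for "ID-ness"
    # (an illustrative choice, not necessarily TRODO's score).
    return softmax(W @ x).max()

def grad_id_score(W, x):
    # Analytic gradient of the max-softmax score w.r.t. the input,
    # for a linear model with logits z = W x:
    #   d p_k / d z = p_k * (e_k - p), then chain through z = W x.
    p = softmax(W @ x)
    k = p.argmax()
    dz = p[k] * (np.eye(len(p))[k] - p)
    return W.T @ dz

def adversarial_shift(W, x, eps=0.5, alpha=0.1, steps=20):
    # PGD-style ascent: push the OOD sample toward high ID-ness
    # while keeping the perturbation inside an L-inf ball of radius eps.
    delta = np.zeros_like(x)
    for _ in range(steps):
        g = grad_id_score(W, x + delta)
        delta = np.clip(delta + alpha * np.sign(g), -eps, eps)
    return x + delta

# Stand-in linear classifier and a surrogate OOD sample (hypothetical data).
W = rng.normal(size=(10, 64))
x_ood = rng.normal(size=64)

before = id_score(W, x_ood)
after = id_score(W, adversarial_shift(W, x_ood))
# A large before-to-after shift marks a "blind spot": the classifier is
# easily persuaded that an OOD sample is in-distribution.
print(f"ID-ness before: {before:.3f}, after: {after:.3f}")
```

In a trojan-scanning setting, this shift would be computed for a batch of OOD samples per classifier, with unusually large shifts flagging potentially trojaned models.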