Disinformation can distort real-world events and influence individuals’ decisions, posing a serious threat to society. However, moderating disinformation remains a major challenge for social network operators: disinformation is omnipresent, and social media’s ease of use, anonymity, and interconnectedness enable its rapid diffusion. Additionally, there is a lack of clear guidance on how to prioritize content for censorship efforts. To date, the existing literature has focused on the virality of traditional online content, such as marketing campaigns, which is generally driven by positive emotions and arousal. This type of content, however, is vastly dissimilar from the hate-filled, misleading, and malicious content on social media platforms, rendering those findings inapplicable to disinformation diffusion. So, what makes disinformation go viral? Using a unique dataset of ~400 million live-crawled messages on Twitter surrounding the 2020 US presidential election, our study analyzes which content and context characteristics drive the virality of disinformation. We classify ~10 million disinformation messages spread across ~50,000 distinct disinformation stories and (1) identify different diffusion trajectories of virality with the help of time-series shape clustering. Moreover, to investigate the differing diffusion patterns, we (2) use state-of-the-art natural language processing to analyze linguistic and meta-level features. With that, this work provides ex-ante guidance to policymakers and network operators to help identify the most critical content on social media and curb the spread of threatening disinformation online. Furthermore, this study advances the overall understanding of disinformation diffusion by focusing exclusively on misleading content and the differences within it. Lastly, this work adds a new perspective to existing research by extensively quantifying the effects of viral disinformation online through a large-scale social media analysis.