Major JohnRoss Wendler, USAF, is a Special Assistant to the Chairman of the Joint Chiefs of Staff.
The COVID-19 pandemic has had a significant impact on the world, including strained diplomatic ties and blurred perceptions of who or what is responsible for its origins. In response to allegations, China crafted an intricate social media campaign to clear its name. This campaign gained notoriety in June 2020 when Twitter removed 150,000 malicious Chinese accounts.1 The scale of these fictitious accounts suggests that China has intensified its efforts to spread propaganda on Twitter in favor of Chinese Communist Party (CCP) objectives. Whereas previous Chinese propaganda campaigns focused on demeaning the protests in Hong Kong, this massive wave of social media rhetoric framed the Chinese government’s response to the coronavirus outbreak as a form of Great Power competition, initially downplaying the pandemic’s severity while seeking praise for the government’s draconian efforts to contain its spread. That narrative underscores the changing character of war.
It seems the information that the CCP passed through Twitter is more mendacious than originally perceived, resembling stratagems from Russia’s 2014 disinformation playbook in Crimea. Disinformation is distinct from misinformation: it is not merely false but false as part of an intentional effort to mislead, deceive, or confuse.2 These intentions are consistent with previous observations by the Department of Defense (DOD) and Intelligence Community about communications from Chinese state and nonstate actors.3 Despite China’s denial of these allegations, often accompanied by blame cast on Western governments, its social media propaganda aimed at Western countries has become increasingly complex, systematic, and effective. The joint force should examine this campaign as an opportunity to better understand the changing character of war and the deliberate weaponization of social media among Great Power competitors.
Through a quantitative content analysis, this article applies communication theory to investigate how the CCP responded to the novel coronavirus of 2020. It also examines social media virality (number of shares) and popularity (number of likes) effects to gain insights into the relationship between Chinese government narratives and social media users. Results indicate that governmental and diplomatic Twitter accounts with the presence of disinformation had a statistically significant impact (p < 0.001) on virality and popularity. Additionally, this article presents an analysis of China’s disinformation campaign as competing narratives with the United States in the wake of the pandemic. Twitter will be the primary platform for content analysis because it continues to be an effective and widely used tool for mass media dissemination in the United States. This article begins by examining scholarly literature concerning the history of Chinese propaganda, current research on virality and popularity in social media, crisis communication theory, and this theory’s application in pandemic response.
A Brief History of Chinese Propaganda
Propaganda, censorship, and disinformation are pillars of the CCP’s grand strategy, allowing governmental officials to control the flow of information in and out of China.4 Adopted from Soviet-era tactics, government-sanctioned propaganda campaigns are designed to make the state and its objectives look favorable to the world—most importantly, by making state competitors (largely the United States) appear weak, corrupt, and abusive. In February 2016, on a tour of Chinese media outlets, CCP General Secretary Xi Jinping announced, “All of the work by the Party’s media must reflect the Party’s will, safeguard the Party’s authority, and safeguard the Party’s unity.”5 In other words, the job of the Chinese media machine is not to inform the public and seek out the truth, but rather to report stories favorable to Party leadership and censor those that are not. Media should support the state and strengthen the state. Truth is not valuable if it weakens the state.
In recent years, the CCP has created a titanic propaganda and censorship apparatus, controlling the most dangerous threat to Party unity and authority—truth.6 Incorporating a robust and systematic means of controlling information, the CCP has constructed an elaborate Internet censorship program—the Great Firewall, also referred to as the Golden Shield Project—designed to rapidly censor Internet content produced within the People’s Republic of China.7 Developed and operated by the Ministry of Public Security, the program aims to restrict the content available to citizens, identify and locate individuals, and provide the state with immediate access to personal records.8 Today, the Golden Shield Project is one of the most controversial programs in the world—and it is being exported and adopted by other like-minded states, such as Cuba, Zimbabwe, and Belarus.9 Once Xi was installed as Party chief, his governmental agencies, diplomats, and state-run media outlets began ramping up their use of social media accounts, including on Twitter, Facebook, and YouTube—platforms that are banned inside China—in order to reach a larger audience abroad.10
Relying on the extensive use of new technology, President Xi has succeeded in imposing a social model in China based on the control of news, information, and online surveillance of its citizens. According to the Reporters Without Borders (Reporters sans frontières, RSF) 2021 World Press Freedom Index, China scores among the worst in the world, at 177 out of 180, on the country index for freedom of speech and expression. RSF conducts yearly summaries of almost all countries by utilizing a comprehensive methodology that examines physical violence; numbers of journalists murdered, attacked, detained, or threatened; harassment and access to information; censorship and self-censorship; control of media; and judicial, business, and administrative pressure.11
More than 100 journalists and bloggers are currently detained by China in life-threatening conditions. Liu Xiaobo, a Nobel peace laureate and winner of the RSF Press Freedom Prize, and Yang Tongyan, a dissident blogger, both died in 2017 from cancers that were left untreated while they were detained.12 China’s state-owned and privately owned media are now under the CCP’s close control, and foreign reporters attempting to work in China are encountering more and more obstacles.13 At the same time, President Xi has been attempting to export this oppressive model by promoting a “new world media order” under China’s influence.
Today, China analysts widely agree that the CCP’s propaganda overseas has seen a significant resurgence under President Xi.14 Incorporating modern disinformation tactics to “weaponize culture and ideas” as a form of soft power, the CCP’s image-building activities involve social media, digital networks, and hybrid and nonlinear conflict strategies.15 This branding is part of a larger undertaking during Xi’s watch to reinvigorate the Party, firmly establish its leadership in the pursuit of the “China Dream” and “the great rejuvenation of the Chinese nation,” and garner greater international respect and acceptance of the CCP.16 Also, ProPublica and others have documented the increasing use of fake Twitter accounts by the People’s Republic of China and CCP members, especially since 2019, to generate an illusion of widespread support for their policies.17
China and Manipulated Messaging on COVID-19
China has been modifying its reports of COVID-19 since December 2019, displaying a range of themes on social media and state-controlled news outlets. In large part, propaganda efforts shaped the narrative around the origin of the virus and the management of the outbreak.18 Both China and Russia have used media to manipulate and exploit uncertainties in the origin of COVID-19, bolstering conspiracy theories that the disease was a deliberately engineered creation brought to China by the United States rather than a naturally occurring phenomenon.19
According to a Congressional Research Service report, there is reason to believe that Chinese officials and state-controlled media initially downplayed the severity and scope of the outbreak, releasing incomplete information on the spread and prevention of the disease and blocking access to some Chinese and foreign news reports. Several individuals who attempted to share early information were reprimanded by public security officials for “spreading rumors” and creating “negative social influence.”20 As containment issues began to circulate to international news agencies, Chinese officials and media shifted to public claims of successful crisis management, with official numbers released to media outlets showing the epidemic coming under control. As other countries began to see signs of the disease and struggle with infection rates, China promoted the narrative of the country as a world leader and the Chinese government as superior in combating the virus.21
Tensions between the United States and China escalated when Zhao Lijian, a Chinese Foreign Ministry spokesperson, tweeted two manufactured conspiracy theories: that patient zero was a U.S. Soldier who visited Wuhan to participate in the October 2019 Military World Games, and that the virus broke loose from the U.S. Army’s laboratory at Fort Detrick, Maryland.22 Then–Secretary of State Mike Pompeo expressed “strong U.S. objections” to China’s efforts to shift blame for the virus to the United States, urging Yang Jiechi, director of the Office of the Central Foreign Affairs Commission, to stop spreading “disinformation and outlandish rumors.”23 Chinese reactions departed further from diplomacy when Pompeo began referring to the pandemic as the “China virus” and “Wuhan flu,” inciting Hua Chunying, another Foreign Ministry spokesperson, to tweet that Secretary Pompeo should “stop lying through [his] teeth.”24
Tensions began to subside in the summer of 2020 when China withdrew its inflammatory comments about the virus’s origins. China’s Twitter response, while now less pugnacious, continues to elicit notoriety and debate. Because the COVID-19 pandemic is unique in how quickly it has affected the world, the rhetorical response made on social media would likely benefit from being grounded in communication theory, specifically a crisis response theory known as the Situational Crisis Communication Theory (SCCT).
Crisis Communication and Response: SCCT
Crisis response strategies represent the words and actions managers employ in dealing with crises.25 In crisis communication, there are two strategies for managing outcomes: managing information and managing meaning. Managing information pertains to critical findings related to the crisis. To that end, information is collected, categorized, and disseminated to stakeholders—that is, citizens—for their benefit. This can be as simple as advising citizens to wear face masks and follow social distancing guidelines. Managing meaning, on the other hand, focuses on efforts to influence how people perceive the crisis and the organization involved in it.26 In the case of the pandemic, China manages meaning by using censored social media accounts to influence people’s perceptions of responsibility and attitudes toward the CCP’s reputation.
Social media has become one of the main vehicles for information dissemination and situational sense-making during the coronavirus pandemic, so it is no surprise that governments utilize its capabilities as a tool for controlling information. Current research suggests that instructing information (for example, informing the public on the dangers associated with a crisis), adjusting information (downplaying the severity of the issue), and repairing organizational reputation (boosting stakeholder opinion of the organization) are three crisis response strategies that affect stakeholder perceptions.27 Focusing on the latter two, adjusting information and reputational repair, will assist in understanding why China may resort to propaganda in an attempt to better its situation.
Reputation management seeks to reduce the negative effects a crisis has on an organization’s related assets and, most important, its reputation.28 Reputation repair strategies commonly work through four options: deny, diminish, rebuild, and reinforce.29 Crisis communication theory offers a prediction of the reputational threat presented by a crisis and prescribes crisis response strategies designed to defend reputational assets.30
The effects of China’s COVID-19 propaganda on social media were calculated using quantitative content analysis methods. Twitter’s application programming interface allowed data on Chinese government accounts, Chinese diplomatic accounts, and state-censored news media accounts to be collected. Based on previous research, these three types of accounts have the highest probability of representing the CCP’s approved narratives.31 An artificial intelligence–powered computer program, Hamilton 2.0, categorized tweets in the test data set.32 The Hamilton 2.0 dashboard is a research project developed by the Alliance for Securing Democracy at the German Marshall Fund of the United States. It provides a summary analysis of the narratives and topics promoted by Russian, Chinese, and Iranian government officials; state-funded media on Twitter, YouTube, and state-sponsored news Web sites; and official diplomatic statements at the United Nations. The purpose of the dashboard is to increase knowledge of the focus and spread of state-backed government messaging across various information media.
Partnership with the Hamilton research team enabled the cultivation of critical message data, examining Chinese tweets from December 1, 2019, to September 30, 2020. The test data set included key phrases—#covid, #coronavirus, #wuhan. The #covid hashtag also allows for multiple hashtags that begin with the word covid (for example, #covid, #COVID, #Covid19, #covid-19). The data set consisted of 133,987 tweets from Chinese news and media accounts (for example, Xinhua News Agency, Global Times, China Daily), Chinese government officials and diplomats (Lijian Zhao, Ambassador Xu Hong), and Chinese government accounts (the Chinese embassy in Prague). Governmental accounts are identified as “Chinese government official” under the Twitter username, while media accounts are labeled “Chinese state-affiliated media.”
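The key-phrase selection described above (any hashtag beginning with covid, in any casing, plus #coronavirus and #wuhan) can be sketched as a simple filter. This is only an illustration of the selection logic; the tweet records and field layout below are hypothetical, not the Hamilton 2.0 schema.

```python
import re

# Keep a tweet if any hashtag is #coronavirus, #wuhan, or begins with
# "covid" (case-insensitive, so #COVID, #Covid19, and #covid-19 all match).
COVID_PATTERN = re.compile(r"^covid", re.IGNORECASE)
EXACT_TAGS = {"coronavirus", "wuhan"}

def matches_key_phrases(hashtags):
    """Return True if any hashtag matches the study's key phrases."""
    for tag in hashtags:
        tag = tag.lstrip("#")
        if tag.lower() in EXACT_TAGS or COVID_PATTERN.match(tag):
            return True
    return False

# Example: select matching tweets from hypothetical (id, hashtags) records
tweets = [
    ("t1", ["#COVID19", "#health"]),
    ("t2", ["#economy"]),
    ("t3", ["#Wuhan"]),
]
selected = [tid for tid, tags in tweets if matches_key_phrases(tags)]
print(selected)  # ['t1', 't3']
```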
Random sampling methods narrowed the data set down to a testable quantity. The coding protocol examined Twitter account type, presence of disinformation, and reputational repair strategy. Coding was dichotomous for the presence of each indicator in every message—that is, 1 or 0—where the frequency of each indicator helps to minimize possible subjective decisions by coders. PolitiFact, Snopes, and other fact-checking organizations determined whether disinformation was present. Intercoder reliability checks using statistical analysis software yielded a coefficient of 0.91. Since methodologists generally accept reliability coefficients of 0.7 or greater, intercoder reliability was deemed strong and acceptable. The data investigation utilized regression analysis, multivariate analysis of variance, analysis of variance, and t-tests.
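The article does not name the reliability coefficient used; one common choice for two coders making dichotomous judgments is Cohen’s kappa, which corrects raw agreement for chance. A minimal sketch, with synthetic codings rather than the study’s data:

```python
def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' dichotomous (0/1) judgments."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of items both coders scored the same
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal proportions
    p_a1 = sum(coder_a) / n
    p_b1 = sum(coder_b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
    return (observed - expected) / (1 - expected)

# Example: two coders judging "disinformation present" (1) or absent (0)
a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(a, b), 2))  # → 0.8
```

By the rule of thumb cited above, a coefficient at or above 0.7 would be considered acceptable.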
Virality represents the distribution and overall effect size of each tweet. As tweets are increasingly shared and retweeted, the message footprint enlarges, increasing the chances that it is seen outside the sender’s normal sphere of influence based on platform algorithms. Social media sites such as Facebook and Twitter incorporate algorithms to analyze words, phrases, or hashtags to create a list of topics—that is, a trend list—sorted in order of popularity. According to a 2011 study on social media, a trending topic “will capture the attention of a large audience” for a short period.33 The more a message is shared and retweeted, the larger the audience and the more viral the effect.
Popularity has similar effects on Twitter algorithms; more “likes” from other users push the message higher on the trend list. For the purposes of this article, an increase in popularity among Twitter users is categorized as an increase in acceptance levels. From a crisis communication perspective, an increase in popularity equates to a reduction in anger and the associated likelihood of negative word of mouth.
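The virality and popularity measures above can be illustrated as per-tweet counts feeding a trend-style ranking. The weighting below is an illustrative assumption (shares weighted above likes, since resharing pushes a message beyond the sender’s followers), not Twitter’s actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    retweets: int  # virality: number of shares/retweets
    likes: int     # popularity: number of likes

def trend_score(t: Tweet, share_weight: float = 2.0) -> float:
    """Illustrative engagement score; the weighting is an assumption,
    not Twitter's ranking formula."""
    return share_weight * t.retweets + t.likes

tweets = [
    Tweet("claim A", retweets=400, likes=150),
    Tweet("claim B", retweets=50, likes=600),
    Tweet("claim C", retweets=10, likes=20),
]
# Sort into a trend-list ordering, highest engagement first
ranked = sorted(tweets, key=trend_score, reverse=True)
print([t.text for t in ranked])  # ['claim A', 'claim B', 'claim C']
```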
Building on current literature, research findings suggest China’s coronavirus propaganda campaign incorporates disinformation to strengthen its reputation and blame its competitors. The research findings highlight three important takeaways from a national security perspective: China’s coronavirus propaganda campaign incorporates modern disinformation tactics as a form of soft power through social media, China uses specific Twitter account types to better manipulate virality and popularity, and virality leads to an increase in popularity.
Disinformation Tactics as a Form of Soft Power
Findings show that governmental and diplomatic accounts are more likely to utilize disinformation or misinformation compared with news and media accounts. These tactics also have a statistically significant effect (p < 0.001 level) on virality and popularity, with an average of 20 times more retweets and 13 times more likes compared with fact-based information on a similar topic. This effect has successfully allowed China to increase target audience size—further supported by current research findings on targeting specific audiences through social media.34 To that end, the weaponization of ideas may have proved effective at generating media hype in Western audiences—likely bolstering the CCP’s willingness to use similar tactics in the future, especially against Western competitors, both commercial and diplomatic.
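A group comparison of the kind reported above (engagement on disinformation versus fact-based tweets) can be sketched with Welch’s t statistic for two samples with unequal variances. The retweet counts here are synthetic, and the study’s own analysis used statistical software rather than this hand-rolled version:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with unequal
    variances -- one way to test a group difference like the
    disinformation vs. fact-based engagement comparison."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    return (mean(sample_a) - mean(sample_b)) / math.sqrt(va / na + vb / nb)

# Illustrative retweet counts (synthetic, not the study's data):
disinfo = [400, 520, 610, 450, 700, 380]
factual = [20, 35, 15, 40, 25, 30]
t = welch_t(disinfo, factual)
print(round(t, 1))
```

A large positive t, evaluated against the Welch-adjusted degrees of freedom, would correspond to a small p-value of the kind reported in the findings.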
The notion of social media warfare is supported for three reasons: The language is targeted, the time of tweet transmission is purposeful, and Twitter is banned inside China. Across the entire data set, an alarming 73 percent of all tweets from China were in English. Regardless of whether the tweets originated from a Chinese embassy in India or a news anchor in Hong Kong, the language denotes the targeting of Western audiences. Even more concerning, most tweets were posted midmorning or midevening U.S. East Coast time, even though these times correspond to nontraditional social hours in Hong Kong. These combined stratagems indicate intentional weaponization of information.
A study by Shimon Kogan, Tobias Moskowitz, and Marina Niessner found disproportionate effects of disinformation in the relationship between fake news and financial markets.35 Fake articles had, on average, nearly three times the impact of real news articles on the daily price volatility, or absolute return, of the manipulated stocks in the 3 days after the publication of fake news. In other words, misleading and false tweets attract more retweets and thus exert a significantly stronger upward pull on virality.36
According to another study by Soroush Vosoughi, Deb Roy, and Sinan Aral, false news spreads significantly further, faster, deeper, and more broadly than does the truth—sometimes by an order of magnitude.37 While truth rarely diffuses to more than 1,000 people, the top 1 percent of false news cascades routinely diffuse to as many as 100,000 people. This study also found that truth took approximately 6 times as long as falsehood to reach 1,500 people and 20 times as long to travel 10 reshares from the origin tweet in a retweet cascade.38 Although research findings such as these corroborate the results from this article, they do not address why certain account types were more successful at spreading disinformation.
Account Types Matter
The enhanced effects from governmental and diplomatic accounts can be explained by examining the perceived authority that these accounts may have with certain audiences. Recalling that Chinese government tweets are labeled as a “government official,” it is logical to suggest this badge enhances the perception of an authoritative figure. The audience must then form its own opinions on whether the information presented is false because the presence of a credible source (for example, a Chinese ambassador to the United States) may lead to peripheral processing via heuristic principles—that is, cognitive shortcuts—in the belief that “statements by credible sources can be trusted.”39 This likely explains why diplomatic accounts had larger effects on virality and popularity even with the presence of disinformation. See figure for illustrations of how disinformation affected virality.
Moreover, governmental and diplomatic accounts seem to use denial strategies the most, commonly targeting the United States and other Western critics of China’s mishandling of and reluctance to share information during the initial phases of the virus’s life span. China’s narrative began with ignoring strategies (downplaying how dangerous the virus is), followed by denial strategies (suggesting the virus originated in the United States or was created by the U.S. Army), until, finally, attacking-the-accuser strategies (by calling out the United States for referring to the virus as the China virus or Wuhan flu).
These active reputation repair messages seemed successful in the short term, as the frequency of the terms Chinese flu and Chinese virus declined after March 2020. March, coincidentally, had the highest number of denial strategies in the test data set. This underscores the effectiveness of targeted and synchronized soft power tactics in social media warfare.
Virality Leads to Popularity
During the final analysis of virality and popularity, a curious pattern emerged in the statistical calculations. Post hoc examination illuminated a phenomenon in which virality enhances popularity. In other words, when China pushes a nefarious narrative laced with falsehoods from an authoritative diplomatic account, a spike in the number of retweets typically occurs—strengthening its impact on virality. As time goes on, however, the large audience now exposed to the narrative begins to like and comment on it more, increasing its popularity. This delayed effect may be caused by persuasion theory effects, namely the liking heuristic.
People typically agree with people they like, and people they like typically have “correct” opinions.40 When people interpret data they do not completely understand, the mind takes mental shortcuts through its interpretation of peripheral data or heuristics.41 This observed liking effect42 in the test data resembles a large-scale randomized experiment conducted on Facebook by a Massachusetts Institute of Technology research team.43 The team found that personalized referrals to other Facebook members were three times more effective at generating adoption compared with normal advertising. Thus, a tweet that is shared and liked among strong-tie relationships on social media increases the adoption of the narrative.
A simple like of a tweet does not mean complete message consensus. A Western social media user who likes a Chinese propaganda tweet, for instance, does not become a Party agent. However, if exposure continues to occur across multiple data sources, it may begin to sway that user’s trust in and position on the topic at hand. More pointedly, viral messages that gain popularity run the risk of cultivating consensus: “If other people believe it, then it is probably true.”44
Although this research has multiple implications, it is also limited by several factors. First, this study on China’s response to the coronavirus pandemic was conducted primarily in the United States. Future research could extend the analysis to other countries, which would provide valuable insights into cultural differences in managing a similar crisis. Moreover, the study examined only Twitter as a social media platform. Although Facebook would likely yield similar results, a social media platform that is not banned in China—for example, WeChat—could help the Intelligence Community understand how China uses propaganda on its own citizens compared with Western audiences.
Future research should also utilize experimental design to isolate the three most influential variables: disinformation, account type, and reputational response strategy. Additionally, a network analysis of the data set would help DOD and the Intelligence Community better predict the effects of virality on popularity by examining the depth of dispersion and acceptance of narratives. A network analysis would also help discern how many Western social media users encountered targeted nefarious tweets. This would likely help social media corporations understand the effects of false information, perhaps reducing its spread. Despite these limitations, this article provides significant lessons for understanding China’s disinformation campaign on social media.
In the wake of the COVID-19 pandemic, China successfully deflected damage to its international reputation by wielding a specific and intentional weapon: information. China’s capacity and capability to manipulate information on a broad, global scale under a compressed timeline highlight not only the changing character of war but also how woefully behind the United States is at competing against targeted social media narratives. Today, 6,000 tweets are posted on Twitter every second, corresponding to more than 350,000 tweets per minute, 500 million tweets per day, and roughly 200 billion tweets per year.45 As countries and organizations become more adept at utilizing social media to coerce audiences and outpace their competitors, it will become increasingly important for gatekeepers to protect the culture and ideas of their citizens.
China has demonstrated its freedom of maneuver in the information battlespace on a scale and timeline that the United States cannot accomplish. Recognizing this is the first step in adjusting how the United States handles the weaponization of social media. The joint force must tailor a robust response: recognizing disinformation, suppressing it, and countering it to U.S. advantage. Developing this response enterprise will also require an examination of how the United States interprets and values truth. Continued research and development on social media trends will allow gatekeepers to focus efforts on disinformation that appears to be trending. Early identification in a tweet’s lifecycle would significantly slow the dispersion across users and ultimately expand decision space for defense and policymakers. As we saw in the Crimean conflict of 2014, the weaponization of disinformation is one of the most insidious threats to democracy. Eight years later, it appears the threat has grown more dangerous and resolute. China’s utilitarian relationship with truth enables it to bend and break the truth to maintain control. To regain advantage, the United States cannot ignore nefarious social media actors. To win, we must reaffirm our American values—defend truth, promote the sanctity of free speech and expression, and protect the principles of our people. JFQ
1 Kate Conger, “Twitter Removes Chinese Disinformation Campaign,” New York Times, June 11, 2020, available at <https://www.nytimes.com/2020/06/11/technology/twitter-chinese-misinformation.html>.
2 James H. Fetzer, “Disinformation: The Use of False Information,” Minds and Machines 14, no. 2 (January 2004), 231–240.
3 Jarred Prier, “Commanding the Trend: Social Media as Information Warfare,” Strategic Studies Quarterly 11, no. 4 (Winter 2017), 50–85, available at <https://www.jstor.org/stable/26271634>.
4 Dan Blumenthal, “China’s Censorship, Propaganda & Disinformation,” American Enterprise Institute, July 10, 2020, available at <https://www.aei.org/articles/chinas-censorship-propaganda-disinformation/>.
6 Conger, “Twitter Removes Chinese Disinformation Campaign.”
7 Blumenthal, “China’s Censorship, Propaganda & Disinformation.”
8 Yaqiu Wang, “In China, the ‘Great Firewall’ Is Changing a Generation,” Politico, September 1, 2020, available at <https://www.politico.com/news/magazine/2020/09/01/china-great-firewall-generation-405385>.
9 “The Great Firewall of China: Background,” Torfox, June 1, 2011, available at <https://cs.stanford.edu/people/eroberts/cs181/projects/2010-11/FreedomOfInformationChina/the-great-firewall-of-china-background/index.html>.
10 Kristin Shi-Kupfer and Mareike Ohlberg, China’s Digital Rise: Challenges for Europe, MERICS Papers on China No. 7 (Berlin: Mercator Institute for China Studies, April 2019), available at <https://merics.org/sites/default/files/2020-06/MPOC_No.7_ChinasDigitalRise_web_final_2.pdf>.
11 Edward Webb, “Censorship and Revolt in the Middle East and North Africa: A Multi-Country Analysis,” Dickinson College, ISA Annual Convention, San Diego, April 1–4, 2012, available at <http://files.isanet.org/ConferenceArchive/6381ff74a06c4ac183579fe46632d16f.pdf>.
12 Reporters Without Borders, “China,” available at <https://rsf.org/en/china>.
14 Nadège Rolland, China’s Vision for a New World Order, Special Report #83 (Seattle: National Bureau of Asian Research, January 2020), available at <https://www.nbr.org/wp-content/uploads/pdfs/publications/sr83_chinasvision_jan2020.pdf>.
15 Peter Pomerantsev and Michael Weiss, The Menace of Unreality: How the Kremlin Weaponizes Information, Culture, and Money (New York: Institute of Modern Russia, 2014), 14, available at <https://imrussia.org/media/pdf/Research/Michael_Weiss_and_Peter_Pomerantsev__The_Menace_of_Unreality.pdf>.
16 Elizabeth C. Economy, The Third Revolution: Xi Jinping and the New Chinese State (New York: Oxford University Press, 2018), 177.
17 Jeff Kao and Mia Shuang Li, “How China Built a Twitter Propaganda Machine Then Let It Loose on Coronavirus,” ProPublica, March 26, 2020, available at <https://www.propublica.org/article/how-china-built-a-twitter-propaganda-machine-then-let-it-loose-on-coronavirus>.
18 Catherine A. Theohary, Terrorist Use of the Internet: Information Operations in Cyberspace, R41674 (Washington, DC: Congressional Research Service, March 8, 2011).
20 Susan V. Lawrence, COVID-19 and China: A Chronology of Events (December 2019–January 2020), R46354 (Washington, DC: Congressional Research Service, updated May 13, 2020), 8.
21 Theohary, Terrorist Use of the Internet.
22 Donald M. Bishop, “Disinformation Challenges in a Pandemic,” Foreign Service Journal 97, no. 6 (July–August 2020), 38–41.
25 W. Timothy Coombs, “Protecting Organization Reputations During a Crisis: The Development and Application of Situational Crisis Communication Theory,” Corporate Reputation Review 10, no. 3 (September 2007), 163–176.
26 W. Timothy Coombs, “Parameters for Crisis Communication,” in The Handbook of Crisis Communication, ed. W. Timothy Coombs and Sherry J. Holladay (Oxford: Blackwell Publishing, 2010), 17–53.
27 W. Timothy Coombs, “The Value of Communication During a Crisis: Insights from Strategic Communication Research,” Business Horizons 58, no. 2 (March–April 2015), 141–148.
29 Coombs, “Protecting Organization Reputations During a Crisis.”
31 Rolland, China’s Vision for a New World Order.
32 Alliance for Securing Democracy, The German Marshall Fund of the United States, “Hamilton 2.0 Dashboard,” n.d., available at <https://securingdemocracy.gmfus.org/hamilton-dashboard/>.
33 Sitaram Asur et al., “Trends in Social Media: Persistence and Decay,” Proceedings of the International AAAI Conference on Weblogs and Social Media 5, no. 1 (February 2011).
34 Shi-Kupfer and Ohlberg, China’s Digital Rise.
35 Shimon Kogan, Tobias J. Moskowitz, and Marina Niessner, Social Media and Financial News Manipulation (Rochester, NY: Social Science Research Network, 2021), available at <https://ssrn.com/abstract=3237763>.
37 Soroush Vosoughi, Deb Roy, and Sinan Aral, “The Spread of True and False News Online,” Science 359, no. 6380 (March 2018), 1146–1151.
39 Robert B. Cialdini et al., “Compliance Principles of Compliance Professionals: Psychologists of Necessity,” in Social Influence: The Ontario Symposium, vol. 5, ed. Mark P. Zanna, James M. Olson, and C.P. Herman (New York: Psychology Press, 1987), 174.
40 Shelly Chaiken, “The Heuristic Model of Persuasion,” in Zanna, Olson, and Herman, Social Influence, 31.
41 Daniel J. O’Keefe, Persuasion: Theory and Research, 3rd ed. (Thousand Oaks, CA: Sage, 2016).
42 Cialdini et al., “Compliance Principles of Compliance Professionals.”
43 Sinan Aral and Dylan Walker, “Creating Social Contagion Through Viral Product Design: A Randomized Trial of Peer Influence in Networks,” Management Science 57, no. 9 (2011), 1623–1639.
44 Chaiken, “The Heuristic Model of Persuasion,” 4.
45 Sinan Aral, The Hype Machine: How Social Media Disrupts Our Elections, Our Economy, and Our Health—and How We Must Adapt (New York: Currency, 2020), 96.