Law in the Internet Society

MaxEFirstEssay 3 - 08 Jan 2024 - Main.EbenMoglen
Line: 1 to 1
 
META TOPICPARENT name="FirstEssay"
Changed:
<
<
>
>
-- MaxE - 20 Nov 2023
 
Deleted:
<
<

MaxEFirstEssay

 
Changed:
<
<
-- By MaxE - 13 Oct 2023
>
>
Intro
 
Added:
>
>
The U.S. government should regulate bot presence on social media to combat content oversaturation and censorship on social media platforms before establishing other, less important regulations. This term, the Supreme Court will decide whether social media sites have a First Amendment right to choose which information they publish on their websites. Currently, social media companies remove, demote, or hide lawful content to minimize speech (i) that the business does not want to be associated with, (ii) that puts off consumers or advertisers, and (iii) that is of little interest or value to users. There is a perception among consumers that social media platforms disproportionately silence conservative voices, so some states want to regulate platforms’ curatorial decisions to “prohibit censorship.” If the Supreme Court rules against the social media companies, lawmakers will replace the private entities’ editorial voice with government-dictated preferences. This is the wrong issue for the Supreme Court to decide regarding social media, because state-imposed censorship does nothing to solve the bot-infestation problem online. In this paper, I argue that (i) bots constitute a significant driver of content oversaturation and censorship on social media platforms today, and (ii) lawmakers should impose maximum bot-presence thresholds and audit social media platforms with bot-detecting algorithms to enforce those restrictions.
 
Deleted:
<
<

I. Intro

 
Changed:
<
<
State and non-state actors manipulate social media platforms and their users to saturate social media with propaganda. Furthermore, intra-user and user-to-company communication on social media is now tailorable to an individual’s online habits. For these reasons, social media creates a point of injection for propaganda. State and non-state actors exploit these algorithm-based platforms by hijacking trends and using bots. In this paper, I will analyze social media propaganda mechanisms that State and non-state actors use to gain international exposure. I argue that both groups fuel diametrically opposed narratives to polarize conflicting ideologies. Thus, fueling division and promoting conflict-driven rather than conflict-resolving agendas.
>
>
Bots constitute a significant issue driving content oversaturation and censorship on social media platforms.
 
Changed:
<
<

II. Diametrically opposed narratives surrounding conflict leads to disinformation and social division.

>
>
Today, the abundance of fake accounts on social media intrudes upon the free exchange of ideas, particularly when users cannot tell whether they are talking to other organic users or to manipulative bots. Twitter and Facebook have publicly claimed that false or spam accounts make up a mere 5% of daily users. In 2022, Professor Soumendra Lahiri and Drubayoti Ghosh tested this representation by using an algorithm that identifies inorganic users with up to 98% accuracy. Although their data set was limited to two million tweets collected over a six-week period, Ghosh and Lahiri found that bots account for 25-68% of Twitter users, depending on the time and the issues being discussed.

Brenda Curtis, an investigator for the NIH’s Intramural Research Program, conducted another study in which she asked both humans and algorithms to detect social bots online. Social bots are bots that pose as humans and mimic human behaviors, such as excessive posting or tagging other users, to emulate and alter the behavior of real people. Human participants correctly identified social bots less than 25% of the time, a far lower rate than the algorithms achieved. Curtis’s results suggest two things. First, organic users do not always know whether they are interacting with fake or real accounts. Second, bot-detection algorithms can address bot-driven content oversaturation and censorship on social media platforms better than humans can.

Therefore, the federal or state governments should mandate disclosure requirements that permit third-party audits and produce bot/human ratios on social media platforms, instead of creating biased preference lists. Preference lists make for good optics in regulating social media, but they will not fix the real issue. Moreover, the Constitution empowers legislators to regulate bot-generated content on social media because the First Amendment expressly protects the rights “of the people,” including free speech. Bots are not people. Therefore, they have no right to free speech, and lawmakers should incentivize their removal from online platforms.
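The kind of third-party audit proposed here can be illustrated with a minimal sketch. Everything in it is hypothetical: the toy detector (flagging accounts by posting rate), the sample account data, and the 5% disclosed figure are invented placeholders, since a real audit would apply a trained bot-detection algorithm to platform-provided data at scale.

```python
# Hypothetical sketch of a third-party bot-share audit: apply a detector
# to a sample of accounts and compare the measured bot share against the
# platform's publicly disclosed figure. All data here is invented.

def audit_bot_share(accounts, is_bot, disclosed_share):
    """Return (measured_share, exceeds) where exceeds is True if the
    measured bot share is higher than the platform's disclosure."""
    detected = sum(1 for account in accounts if is_bot(account))
    measured = detected / len(accounts)
    return measured, measured > disclosed_share

# Toy detector: treat accounts posting more than 200 times a day as bots.
accounts = [{"id": i, "posts_per_day": p}
            for i, p in enumerate([3, 500, 8, 320, 12])]
measured, exceeds = audit_bot_share(
    accounts, lambda a: a["posts_per_day"] > 200, disclosed_share=0.05)
print(measured, exceeds)  # 0.4 True
```

The point of the sketch is only the comparison step: a regulator with access to platform data could publish the measured ratio and sanction a company whose disclosures materially understate it.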
 
Changed:
<
<
Diametrically opposed narratives surrounding war leads to disinformation and widespread polarized perceptions that fuel tension between feuding parties. Today, the internet provides unprecedented access to information from conflict zones. Users get a front row seat to the battles in Ukraine, ISIS’s activities, and the Israeli-Hamas war. On social media, the political messaging surrounding these conflicts is characterized by fundamental disagreements promoted by the parties in each conflict respectively. Therefore, polarization shapes discussions regarding each conflict and disinformation runs rampant on social media platforms. State and non-state actors use bots, hijack trends, and employ divisive rhetoric to push these agendas.
>
>
Lawmakers should mandate disclosure requirements and conduct third-party algorithmic audits of social media platforms to decrease bot presence online.
 
Changed:
<
<

a. State actors create disingenuous narratives on social media to tamper with elections.

>
>
Social media platforms already operate their own systems for identifying and removing bot accounts, and governments can learn from these models to audit bot presence effectively. Specifically, social media companies reverse engineer the data sets of known bots and use them to train new bot-detection algorithms. One bot-detection technique is the social honey pot: host-generated fake accounts that mimic the profiles of organic users in order to lure spam bots. When spam bots interact with the honey pots, company researchers identify and deactivate them.

Unfortunately, the current regulatory landscape permits social media companies to conceal their bot-protection measures when it should promote practices that increase company-investor transparency and user safety. Companies have many reasons to misrepresent the prevalence of bots on their platforms and will not stop doing so without the threat of sanctions. For example, Elon Musk famously tried to pull out of his $44 billion acquisition of Twitter because he believed that “20% fake/spam accounts, while 4 times what Twitter claims, could be much higher” than expected. His “offer was based on Twitter’s SEC filings being accurate,” and he considered Twitter’s misrepresentation materially adverse to his $44 billion bid. If the SEC imposed disclosure requirements and mandated regulatory approvals involving bot/human ratios, Musk might have paid less to acquire Twitter or terminated the agreement entirely after the SEC’s audit. Additionally, if the SEC found that Twitter had misrepresented its bot/user ratio to the public, the agency could sanction Twitter. Regulatory action like this would incentivize host companies to exterminate bots and would encourage user protection. For these reasons, lawmakers should regulate the industry to protect future investors and users rather than handing the reins to biased legislatures that answer to constituents.
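The honey-pot technique described above reduces to a simple idea, sketched below. The account names and interaction pairs are invented for illustration; real platforms work from private engagement logs at far larger scale, and the actual detection criteria companies use are not public.

```python
# Hypothetical sketch of the "social honey pot" technique: decoy profiles
# are seeded by the host site, and any account that engages with them is
# flagged, because organic users have no reason to interact with dormant
# fake profiles. All names and data here are invented.

HONEY_POTS = {"hp_account_1", "hp_account_2"}  # host-generated decoys

def flag_likely_bots(interactions):
    """Given (actor, target) interaction pairs, return the set of actors
    that engaged with a honey-pot decoy."""
    flagged = set()
    for actor, target in interactions:
        if target in HONEY_POTS:
            flagged.add(actor)
    return flagged

sample = [
    ("organic_user", "friend_account"),
    ("spam_bot_7", "hp_account_1"),  # bot mass-follows the decoy
    ("spam_bot_9", "hp_account_2"),
]
print(sorted(flag_likely_bots(sample)))  # ['spam_bot_7', 'spam_bot_9']
```

In practice the flagged accounts would feed the training data for the broader bot-detection algorithms the paragraph describes, rather than being deactivated on this signal alone.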
 
Changed:
<
<
the U.S. Congress, a State actor, investigated Russian interference in the 2016 election campaign. The U.S. accused Russia of using trolls and bots to spread misinformation and politically biased information. Trolls are malicious accounts created for the purpose of manipulation. Bots are automated accounts. Both trolls and bots clone or emulate the activity of human users while operating at a much higher pace. For example, they can produce content and follow a scripted agenda. Furthermore, these fake accounts were “created and operated with the primary goal of manipulating public opinion” (for example, promoting divisiveness or conflict on some social or political issue). In 2016, the Russian cyberattacks enhanced malicious information and polarized political conversation, causing confusion and social instability. As a result, Russia’s tampering influenced the public sentiments surrounding the 2021 election’s validity. Furthermore, non-state actors use bots and trends to push their agendas by advocating for their movements and dividing global spectators.
>
>
Conclusion
 
Changed:
<
<

b. Non-state actors use bots and hijack trends to gain international support through widespread exposure.

Non-state actors hijack trends and direct bots at users within society to diminish trust towards their respective governments. Trends refer to the list of topics sorted in order of popularity displayed on social media platforms like X and Facebook. The best way to establish a trend is to build a network of bot accounts programmed to tweet at various intervals, respond to certain words, or retweet when directed by a master account. This is because bots tend to follow each other and the core group. Furthermore, in 2014, the Islamic State (IS) created an app to help them create trends and spread propaganda across twitter (Now X). First, IS members linked twitter accounts to the app, “Dawn of Glad Tidings.” The app made posts and reposts from a master account that other accounts mimicked on behalf of various twitter users. Furthermore, IS successfully hijacked trends stemming from hashtags related to the World Cup. “At one point, nearly every tweet under the #WorldCup had something to do with IS instead of soccer.” Twitter’s initial reaction was to suspend accounts that violated the user terms of the agreement. The result was creative usernames by IS supporters. Therefore, IS successfully turned a global event into a provocative brochure used for recruiting. Furthermore, in 2014, Twitter estimated that only 5 percent of accounts were bots; that number grew to over 15 percent by 2020. This demonstrates the influence that bots have over content saturation on social media.
>
>
In conclusion, the U.S. government should regulate bot presence on social media to combat bot-driven content oversaturation and censorship before establishing other, less important regulations. The current preference-list approach at the state level will fail to solve these issues because social bots can amplify or suppress content volume on social media platforms. Additionally, the current regulatory landscape precludes practical methods for auditing social media companies and imposing bot-presence thresholds on them to protect investors and users. Researchers at MIT note that the problem of inaccurate bot detection stems from a lack of transparency. The federal and state governments should require companies like Twitter, Facebook, and Instagram to be more transparent with their data so that auditors can create reliable, comprehensive bot-detection algorithms. In this way, the government could successfully regulate social media by decreasing bot presence and protecting the free speech of organic users.
 
Changed:
<
<

III. Conclusion

>
>
Word Count: 996
 
Changed:
<
<
In Conclusion, State and non-state actors use bots, trends, and polarizing rhetoric to push their agendas domestically and internationally. Today, polarization shapes discussions regarding each conflict and disinformation runs rampant on social media platforms. In 2016, the Russian cyberattacks enhanced malicious information and polarized political conversation, causing confusion and social instability. As a result, Russia’s tampering influenced the public sentiments surrounding the 2021 election’s validity. In 2014, the Islamic State (IS) created an app to help them influences trends and spread propaganda across twitter. IS successfully hijacked trends stemming from hashtags related to the World Cup and shifted the narrative surrounding the event throughout the digital community. Furthermore, social media presents a threat to social cohesion and unity because users select the news that they want to see, the algorithm generates relevant content, and malicious actors impose propaganda onto unknowing users. For these reasons, State and non-state actors use social media as a tool for modern information-age warfare.
>
>
Sources

1) Jennifer Stisa Granick & Vera Eidelman, The Supreme Court Will Set an Important Precedent for Free Speech Online, ACLU (Oct. 19, 2023), https://www.aclu.org/news/privacy-technology/the-supreme-court-will-set-an-important-precedent-for-free-speech-online

2) U.S. Const. amend. I.

3) Shawn Ballard, Are bots winning the war to control social media?, Wash. Univ. Dept. of Political Science (Nov. 1, 2022), https://polisci.wustl.edu/news/are-bots-winning-war-control-social-media

4) Brenda Curtis, McKenzie Himelein-Wachowiak & Salvatore Giorgi, Bots and Misinformation Spread on Social Media: Implications for COVID-19, Journal of Medical Internet Research (Dec. 5, 2021), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8139392/?report=reader

5) Dylan Walsh, Study finds bot detection software isn’t as accurate as it seems, MIT Sloan School of Management (June 12, 2023), https://mitsloan.mit.edu/ideas-made-to-matter/study-finds-bot-detection-software-isnt-accurate-it-seems

6) Jon Porter, Elon Musk says Twitter deal “cannot move forward” until it proves bot numbers, The Verge (May 17, 2022), https://www.theverge.com/2022/5/17/23085296/elon-musk-bots-twitter-20-percent-deal-acquisition
 

Changed:
<
<
Pretty much everyone can agree that the claims you make are plausible, and no one (including you, who have not a scrap of actual evidence to offer) can show that they are true.
>
>
Not mentioning the First Amendment is something of a drawback to the credibility of the argument, Max. As is retailing bullshit from Elon Musk. If no one uses a stupid service except bots, why exclude the bots?
 
Changed:
<
<
That's fine, but the right question to ask about claims of that character is, so what?
>
>
The simple question at least deserves an answer: Who cares? Not using platform services is obviously better than using them. No one makes you use them. Why should we allow ourselves to waste the public force on the regulation of conduct we could simply replace non-coercively with better alternatives?
 
Changed:
<
<
I know that it is perfectly possible to live a well-informed and cultivated life without having anything to do with "social media." I know that inexpensive and widely-available technologies allow more people to live such lives than has ever been possible before in the world's history. I know that ignorance is now practically preventable everywhere in humanity. I also know that mass media of the Edisonian era possessed many of the same vulnerabilities to manipulation, and succumbed as often.

It seems to me the interesting question that follows from where the current draft leaves us, then, is: "What technologies of freedom do the current conditions require us to produce for people? How can we make those particular technologies of freedom out of and by adapting the ones we already possess?"

>
>
In future, please just replace old revisions with new ones, rather than making a new topic. The wiki preserves all history of every page, and changing topics from revision to revision breaks my tools.
 

Deleted:
<
<

You are entitled to restrict access to your paper if you want to. But we all derive immense benefit from reading one another's work, and I hope you won't feel the need unless the subject matter is personal and its disclosure would be harmful or undesirable. To restrict access to your paper simply delete the "#" character on the next two lines:

Note: TWiki has strict formatting rules for preference declarations. Make sure you preserve the three spaces, asterisk, and extra space at the beginning of these lines. If you wish to give access to any other users simply add them to the comma separated ALLOWTOPICVIEW list.


Revision 3r3 - 08 Jan 2024 - 18:43:39 - EbenMoglen
Revision 2r2 - 05 Nov 2023 - 16:29:33 - EbenMoglen
Revision 1r1 - 13 Oct 2023 - 21:01:19 - MaxE