
KPMG Group Report


SOCIAL MEDIA ALGORITHMS AND CORPORATE MISCONDUCT

GROUP 5 GROUP REPORT

ICPU1029 CORPORATE CRIME AND MISCONDUCT: THE INTERSECTION BETWEEN CRIME AND TECHNOLOGY




EXECUTIVE SUMMARY

As social media evolves in an era of digital interaction, there is increased scrutiny on the growing ability of individuals, institutions and corporations to engage in misconduct. This report highlights four key concerns from recent academic literature and case studies: political bots, echo chambers, disinformation campaigns and astroturfing. Currently, technological and governmental intervention is failing to combat these growing threats. This report offers policy-based insight into current and future directions to enhance government and corporate responses.

By conducting a wide-reaching literature review embedded within a double-diamond approach, this report aims to encapsulate the most well-evidenced and supported insights. Broad stages of discovery allow lateral diagnosis of a wide array of issues and policy insights before narrowing the scope to ensure applicability to the near future of policy growth. In summarising our findings, we offer five recommendations for future action: create enforceable deadlines for social media companies to answer misconduct enquiries; promote at-source fact-checking; improve definitions of bots and bot use; work cohesively with companies and governments to understand and detect astroturfing; and consider lateral solutions such as principles-based regulation and forced monopoly break-ups. These recommendations are limited by the current 'black box' of social media algorithms and the lack of integration between corporations and governments. Ultimately, these suggestions can provide KPMG with insight and direction to support corporate and government policy improvement.


INTRODUCTION

Project Scope

In the era of digitalisation, technological advancements like artificial intelligence (AI) and robotics bring innovations in conducting business, alongside increasing concern about tech-enabled crime. KPMG is interested in how regulators, technology firms and business leaders can detect, combat and prevent corporate crime across industries (University of Sydney, 2020). Addressing the deficiencies in technological tools and government regulations for detecting and preventing social media wrongdoing, this report focuses on identifying the most extensively abused types of social media misconduct and offers recommendations for future policy direction.

Social media technology and misconduct

To deal with information overload, social media platforms leverage smart technologies to more efficiently digest user-contributed information and optimise recommendation output. However, the increasing complexity of AI decision-making can create a 'black box', where users and even designers cannot understand how information is processed (Buffomante, 2018). The resulting lack of transparency and accountability makes social media prone to malicious acts ranging from falsified political campaigns and election interference to COVID-19 fake news and conspiracies. With over 2.4 billion active users, 64.5% of whom depend on social media for news, platforms like Facebook and YouTube can unconsciously shape users' opinions and behaviour, exaggerating the impact of disinformation (Martin, 2018).

As a result, social media companies are pressed to deploy fact-checking and content moderation systems to mitigate the problem. Nonetheless, current detection systems remain largely reliant on humans, so they fail to keep up with the speed and sophistication of bot-powered crime (Pennycook & Rand, 2020). Simultaneously, service providers are exploring AI crime-fighting tools. Yet existing detection technology lacks the required accuracy and moral judgement, and a more scientific solution is expected to take time (Pennycook & Rand, 2020). Furthermore, placing the responsibility for regulation solely on social media companies implies imposing American norms of speech on other countries, which can be problematic compared with a context-based approach (Bogle, 2020). Therefore, moderation on the technical and institutional side is only a partial


"Social media design is facilitating an evolution in misconduct, such thatself-regulating and government policy is failing to stay up-to-date”Problems, Aim & Objectives solution to social media misconduct. It is crucial to address thesocial elements of the issue, since at the centre of theseproblems are humans. As a result, in addition to highlighting thetechnical and empirical lens for finance, science and engineeringmajors, the project harnesses the knowledge of internationalrelations, international studies and international businessstudents through social lens.PolicyTechnological misconduct is the central problem addressed in thisreport. However, policy has been insufficient in recent years inevolving to combat the modern forms of misconduct. Social mediaspecifically has been an area of increased scrutiny with USsenate hearings increasingly targeting heads of FaceBook,Twitter and Google (Stepansky, 2020). Currently, social mediacompanies self-regulation and corporate policies are beingcriticised for their failures to mitigate increasingly dangerousforms of misconduct, particularly in the 2020 elections(Zakrzewski & Lerman, 2020). Hence, this report will providenovel insights by aligning the most well-evidenced forms ofmisconduct with clear policy recommendations from governmentand corporate levels. Through highlighting the policy lens thisreport engages the interdisciplinarity of politics and internationalrelations, supporting a wide-scope of insights.This report is underpinned by a focus on technology (social media)as a facilitation of misconduct to meet the brief. Therefore ouridentified problem statement highlights the issue of social mediamisconduct and its disparity with policy:Therefore, this project aims to contribute a deep insight into socialmedia misuse and create alignment with generalised policy


implications for governments and corporations. These aims were achieved through three core objectives:

1. To uncover and define the most well-supported forms of social media misconduct
2. To explore current government and self-regulation policy and uncover its limitations
3. To align highlighted misconduct with policy implications that will better support future mitigation of these issues

Interdisciplinary Need

This problem statement and these objectives require social, technical and political lenses to understand the algorithmic underpinnings and human interactions driving these platforms (Woolley & Howard, 2018). Hence, social frameworks are crucial in defining the nature of misconduct as it relates to individuals, discourse, and free and fair speech. Empirical frameworks are employed to systematically evidence the resulting cases of misconduct. A political lens offers a regulatory framework to align documented misconduct with its future trajectory. Furthermore, through collaboration across disciplines from politics to finance, members have been able to co-create in each of the objectives, forming positive feedback loops which underpin insight (Jones, 2019).


APPROACH & METHODS

Our approach is based on conducting a systematic literature review to analyse and synthesise a wide variety of case studies, government reports and academic literature. Literature reviews are highly regarded for providing a single document aggregating substantial and wide-reaching insights (Jahan et al., 2016). Social media as an area of study has existed in academic literature for less than two decades, and awareness of it as a burgeoning area for misconduct is even more recent (Woolley & Howard, 2018). Hence, with recent fears of democratic interference and significant socio-political discourse occurring digitally, it is critical to synthesise the current understanding of social media misconduct. We aim to embed this literature review within the current policy framework surrounding digital social platforms to support governments, institutions and companies in directing their attention and resources most effectively.

A double diamond approach allowed our team to systematically undertake a literature review and utilise the unique interdisciplinary skills of each member (Clune & Lockrey, 2014). In particular, with a mixture of knowledge and knower codes (Maton & Chen, 2019), there are strong parallels to divergent and convergent thinking, fitting well with the double diamond model. In the first stage, 'discover', the group encouraged diverse ideas and insights to conduct a broad review. This review aimed to uncover as many signs, forms and cases of misconduct associated with social media as possible. In this stage leniency was granted to cover a wide range of sources and cast a wide research net. Due to the 'social and technical'


nature of social media misconduct investigation, both those who prioritised social relations and those who prioritised epistemic relations were crucial (Woolley & Howard, 2018; Maton & Chen, 2019). This broad examination produced a brainstorm of over 20 unique forms of misconduct, encapsulated in Figure 1.

In the second stage, 'define', the group aimed to refine this brainstorm and determine the most well-supported, detailed and deeply applicable misconduct activities. A narrow definition of social media misconduct was applied, namely 'clearly unethical and potentially illicit actions by agents engaged in social media'. With this clear definition, the group narrowed its focus and uncovered the most pressing and significant issues. High academic standards were applied to ensure all sources used were trusted and accurate. Both quantitative scholarly literature and qualitative case studies and institutional reports were accessed.

Upon review, four major and specific forms of misconduct were uncovered: political bots, disinformation campaigns, astroturfing and echo chambers. All four of these concepts displayed well-documented case study examples, support from institutional reports and academic quantitative insights. Hence, the team was able to conclude that these were real, effective and critical issues towards which policy should be directed.


In the 'develop' stage of the double diamond, the team was tasked with diverging once again to uncover the large variety of potential solutions and applications of our research. Two major brackets for discovery were explored: self-regulation and government regulation. Given diverse policy environments globally, a number of critical jurisdictions (Australia, Germany and the US) were examined, as they offered an insightful range of varied solutions. The goal in this stage was to engage a wide-reaching set of solutions not limited to simple government oversight or intervention. Instead, the team aimed to provide insight into the wide scope of potential strategies that could be employed.

In the final stage, 'deliver', the goal was to synthesise the defined forms of misconduct with their policy implications. The group first had to vet all insights from the 'develop' stage and narrowly define the most applicable and effective forms of policy. These could then be synthesised with the uncovered misconduct to provide government and social media policy-makers with insightful, forward-looking recommendations. With many countries and regions beginning to incorporate a variety of regulations, such as Germany's new antitrust measures, it is a crucial time to deliver such insight. By defining the most critical and well-supported forms of misconduct, we aim to direct institutions and social media companies to work together and find harmonious solutions.

Limitations

The double diamond model is limited in its linearity, as this project combined some iterative processes. In the 'develop' and 'deliver' stages, policy insights refined and informed the original problem context. Misconduct occurs in situ within a specific policy environment that differs from country to country; hence the misconduct itself will differ. Therefore, the team refined the specific nature of the uncovered problems as solutions fed back into the original process.


FINDINGS/RESULTS

The results of our literature review are provided below and highlight the impactful and interwoven nature of political bots, echo chambers, disinformation campaigns and astroturfing.

Political Bots

A political bot is an algorithm that executes a series of activities in order to generate or manipulate public opinion. The most common form of social media bot operates false accounts with the intent to emulate real individuals. Bots are not explicitly illegal entities and are recognised as a 'common tactic' during various political events (Woolley & Guilbeault, 2018, quoting Jane). Bots are able to act independently or in botnets to execute actions including spamming, sharing, DNS attacks, email harvesting and astroturfing (Neudert, 2018). Concerningly, bots are becoming cheap, accessible and increasingly versatile (Hegelich & Janetzko, 2016).


In Germany, an example of strict digital regulation, bots persist but have minimal influence on public opinion, as seen in the 2017 election (Murthy et al., 2016). However, these bots still aided in amplifying opinions, spreading biased content and creating hate speech (Neudert, 2018). In less well-regulated contexts such as the US and Russia, bots have been far more influential.

For instance, in the US, bot usage peaked at key times in the 2016 election (Bessi & Ferrara, 2016). During the election, bots were engineered to artificially demonstrate consensus and spur actual support, known as the bandwagon effect (Woolley & Guilbeault, 2018). This is contextualised by estimates that over a third of Twitter users are bots and that soon 10% of social media content will be produced by bots (Woolley & Howard, 2018). Furthermore, with technological advances, even basic coding skills allow bots to be decentralised, as individuals use them to amplify fringe ideologies. This has been coined the megaphone effect and refers to individuals or institutions utilising bots to amplify their voices unnaturally (Woolley & Guilbeault, 2018). In their own research, Woolley & Guilbeault (2018) concluded that bots were able to access the 'core of political discussion' and spread divisive content.

In Russia, Sanovich (2018) detailed blatant misconduct in the sheer scale and abuse of politically charged bots. Russian bots were often created at scale, lacking basic human markers (such as descriptions and profile pictures), and were active at key moments of political conflict (Sanovich, 2018). They were found to hide their identity, promote selective hashtags and share selected content with a political agenda (Hegelich & Janetzko, 2016). A quantitative analysis found that in the dataset of 'accounts with more than 10 tweets… around 45 percent were bots'. Furthermore, Sanovich (2018) highlighted that bot activity peaked during the Ukrainian conflict, after Malaysia Airlines flight MH17 went down and before renewed fighting. Sanovich summarised this data by classifying Russian bots as engaged to execute a clear, well-coordinated strategy.

In summary, political bots are active players in the social media algorithm sphere and display concerning traits such as hiding their identity, promoting fringe and extremist views and spurring real support, thereby influencing political discourse.


Echo Chambers

Even without human intervention, the intrinsic design of AI may unintentionally reinforce and amplify disparities, with its training data the primary source of bias (Edwards, 2020). First, AI is trained on historical data sets which potentially encode existing gender, ethnic and cultural biases. As AI learns to make predictions based on patterns identified within these data sets, it is likely to infer and formalise a biased set of rules that essentially mimics past human decision-making (Baer & Kamalnath, 2017). Second, training data may be incomplete, unrepresentative or skewed, resulting in structural racism where underrepresented groups are discriminated against (Australian Communications and Media Association [ACMA], 2020).

Moreover, the adoption of recommendation systems by social media platforms can further amplify intrinsic AI bias, facilitate the dissemination of misinformation and contribute to the polarisation of opinions. As the social web becomes overloaded with information, personalised recommendation systems are deployed to reduce the burden of information processing and optimise user experience by predicting and presenting more relevant information (Bozdag, 2013). Social media companies like Facebook and TikTok commonly adopt recommendation systems that utilise collaborative and content-based filtering, where users receive filtered and personalised information based on their interests and preferences (Isinkaye, Folajimi, & Ojokoh, 2015). Content-based techniques emphasise the attributes of items, and recommendations are made based on users' past evaluations of similar items (Isinkaye et al., 2015). Collaborative filtering, on the other hand, makes recommendations by assuming users share interests with users who have similar profiles (Schafer, Frankowski, Herlocker, & Sen, 2007).
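The distinction between the two filtering approaches can be made concrete with a small sketch. The toy example below is purely illustrative: the interaction matrix, item feature vectors and cosine-similarity scoring are assumptions for demonstration, not any platform's actual recommender.

```python
import numpy as np

# Toy data: rows = users, columns = items, 1 = liked, 0 = no interaction.
# Interactions and item feature vectors are invented for illustration.
interactions = np.array([
    [1, 1, 0, 0],   # user 0
    [1, 0, 1, 0],   # user 1
    [0, 0, 1, 1],   # user 2
])
item_features = np.array([
    [1.0, 0.0],  # item 0: mostly political content
    [0.9, 0.1],  # item 1
    [0.1, 0.9],  # item 2: mostly entertainment content
    [0.0, 1.0],  # item 3
])

def cosine(a, b):
    """Cosine similarity between two vectors (0 if either is all-zero)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def content_based_scores(user):
    """Score each item by similarity to the user's average liked-item profile."""
    liked = item_features[interactions[user] == 1]
    profile = liked.mean(axis=0)
    return [cosine(profile, features) for features in item_features]

def collaborative_scores(user):
    """Score each item by the interactions of users with similar histories."""
    sims = [cosine(interactions[user], interactions[other])
            for other in range(len(interactions))]
    sims[user] = 0.0  # exclude the user themselves
    return list(np.array(sims) @ interactions)

print("content-based :", content_based_scores(0))
print("collaborative :", collaborative_scores(0))
```

In both variants, the highest-scoring items resemble what the user, or users like them, have already consumed, which is precisely the reinforcement mechanism the filter-bubble critique points to.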


By doing so, these personalisation mechanisms create an algorithmic 'filter bubble', where social media AI determines the information visible to users based on previous online behaviour such as search history, likes, shares and shopping habits, reinforcing people's existing preferences (Garcia, 2020). By limiting the source of information to those with whom users share similar views, users become trapped in an 'echo chamber', in which they only receive conforming opinions from like-minded people (Garcia, 2020).

Filter bubbles and echo chambers are ubiquitous and can increase the prevalence of misinformation, since information that resonates with biased clusters of users is more likely to spread virally through a network (Törnberg, 2018). Exploiting echo chambers and confirmation bias, political bots are used to suggest inauthentic accounts to follow, which can exacerbate political polarisation by enabling the design and delivery of highly targeted information intended to persuade or even manipulate (ACMA, 2020; Menczer & Hills, 2020).

In addition to echo chambers, ranking algorithms that prioritise views over truth in search of advertising revenue may be another contributing factor (Anand, 2017). These algorithms learn that highly provocative, emotive or extreme information generates greater views, click-through rates and reactions, and may therefore promote misinformation.
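As a rough illustration of why an engagement-only objective can favour provocative falsehoods, consider the hypothetical ranking sketch below; the posts, click-through rates, falsehood probabilities and penalty weight are invented for demonstration and do not describe any platform's real ranking function.

```python
# Toy ranking comparison: engagement-only scoring vs. scoring that also
# penalises content estimated to be false. All values are illustrative.
posts = [
    # (title, predicted click-through rate, probability the claim is false)
    ("Measured, sourced policy analysis",   0.02, 0.05),
    ("Outrage-bait conspiracy claim",       0.11, 0.90),
    ("Emotive but accurate breaking story", 0.08, 0.10),
]

def engagement_rank(posts):
    """Rank purely by predicted engagement (clicks)."""
    return sorted(posts, key=lambda p: p[1], reverse=True)

def engagement_with_truth_penalty(posts, penalty=0.1):
    """Rank by engagement discounted by the estimated risk of falsehood."""
    return sorted(posts, key=lambda p: p[1] - penalty * p[2], reverse=True)

for title, *_ in engagement_rank(posts):
    print("engagement-only :", title)
for title, *_ in engagement_with_truth_penalty(posts):
    print("with penalty    :", title)
```

Under the engagement-only scoring, the outrage-bait item ranks first; once an (assumed) falsehood penalty is applied it drops below the accurate story, illustrating the trade-off platforms and regulators are weighing.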


Disinformation Campaigns

A disinformation campaign is one that actively attempts to spread false information in order to manipulate individuals through social media (Bradshaw & Howard, 2018). This differs from misinformation, which is the inadvertent spread of false information, often a trickle-down issue (Jackson, 2017). More specifically, disinformation often blends substantial truths with falsehoods, exaggerations and missing context (Jackson, 2017). Critically, Vosoughi, Roy & Aral (2018) demonstrate that false news spreads faster, diffuses more deeply and travels further into the social media conversation than the truth. In part, the spread of disinformation can be executed by bots, representing the link between forms of misconduct. Furthermore, disinformation campaigns can be wide-reaching, taking the form of selective censorship, search-rank manipulation and privacy breaches (Tucker et al., 2018). Examples include politically charged censorship in China and the harnessing of bots to build credibility and push search engines to rank disingenuous content (Tucker et al., 2018).

Looking at specific evidence and examples of disinformation spreading, the Knight Foundation (2018) found a peak of '6.6 million tweets linking to fake and conspiracy news publishers… before the 2016 US election'. However, 80% of these accounts were still in use in 2018, with over 4 million tweets relating to disinformation found in a similar time frame (Knight Foundation, 2018). It was found that 33% of the accounts were bots and that many of the disinformation stories followed regular sharing patterns coordinated to manufacture an ongoing media presence. One stark example was the conspiracy linking 5G towers and COVID-19, which spread across the UK rapidly and largely unchecked (Ahmed et al., 2020). An account was even identified as specifically dedicated to spreading the conspiracy, which resulted in the burning of two 5G towers (Ahmed et al., 2020).

At scale, disinformation campaigns are dangerous and alienating, such as Russia's hostile disinformation targeted at European countries (Richter, 2017). This 'Kremlin Watch Report' exposes large-scale disinformation tactics employed by Russia to subvert European freedoms (Richter, 2017). The Kremlin-controlled outlet Russia Today is seen as the face of this scaled disinformation campaign and continues to influence public opinion inside Russia and other European nations. Russia has further engaged in mass-scale and microtargeted political ads aimed at polarising and dividing foreign nations such as the US (Al-Rawi & Rahman, 2020). Ultimately, the clash between disinformation and social media's advocacy of free speech will continue to be a sticking point for policy. Disinformation has been shown to be real, widespread and influential.


Astroturfing

Keller et al. (2019) describe political astroturfing as a 'centrally coordinated disinformation campaign in which participants pretend to be ordinary citizens acting independently'. Crowdturfing is a more specific version of astroturfing in which many 'shills' (fake supporters) are incentivised to act as a grassroots campaign (Lee, Tamilarasan & Caverlee, 2013). The goal of both styles of campaign is to manufacture a fake 'grassroots-style' movement to influence political discourse. Since these campaigns hide the centrality of their control and the incentivised nature of their constituents, they are considered inherently disinformation, and they commonly harness disinformation tactics to influence discourse (Keller et al., 2019).

The earliest recorded and indicted case of astroturfing was in South Korea in 2012, when the National Intelligence Service used a mixture of bots and humans to deploy its political agenda as a homogenous ideology (Keller et al., 2019). In 2012 this had minimal substantive impact, and the authors highlight it as an evolving field. More recently, it was found that the Russian Internet Research Agency (IRA) deployed political ads at scale. These ads 'microtargeted' US individuals based on political, racial and religious backgrounds to actively divide and polarise discourse, undermining Western democracy (Al-Rawi & Rahman, 2020). With the concealed centrality of the campaign and its falsified impartiality, this is a form of astroturfing and, by extension, social media misconduct. While this is an evolving field of research, there is clear growth in the number and scale of astroturf campaigns (Keller et al., 2018). As bots evolve and social media embeds itself more deeply, astroturfing campaigns will likely become more influential in the political situation of many countries (Woolley & Howard, 2018; Keller et al., 2018). It is hence critical for policy implications to be proactive and forward-looking to contend with rising forms of misconduct.


Case Studies


DISCUSSION

Australian Government and Digital Platform Regulations in relation to Disinformation, Bots and Astroturfing

Disinformation

The Australian Government is currently a member of the International Grand Committee on Disinformation and 'Fake News' (JSCEM, 2019). In keeping with these obligations, the Australian Electoral Commission (AEC) has been working with social media companies around election times to ensure that illegal ads are referred to the AEC. However, this has not prevented the spread of disinformation campaigns in Australian elections. An example is the 'death tax' campaign, which circulated during Australia's 2019 federal election. This disinformation campaign was referred to Facebook with a request for removal (Murphy, 2019). However, Facebook failed to respond to requests from the Liberal Party to remove what the party determined to be online policy rumours, because Facebook's third-party fact-checking did not apply to content promoted by politicians or parties, only to general Facebook users (Buchanan, 2019). Yet research indicates that disinformation campaigns are often spread on social media by political figures and parties (Digital Rights Watch, 2018). Whilst digital platforms and the Australian government have both attempted to curb disinformation from political sources, this has not prevented unauthorised sponsored ads intended to sway voter behaviour during Australian voting seasons (Digital Rights Watch, 2018). Therefore, improved accountability between digital platforms and the AEC could also increase response times to disinformation. We endorse the recommendation by Digital Rights Watch, which calls for the enforcement of a 'set period of time (in which digital platforms must respond to AEC enquiries)' (Digital Rights Watch, 2018).

Another factor contributing to the issue of timeliness in responding to disinformation is the high cost and slow speed of current robust methods of fact-checking and flagging. To rectify this, digital companies should implement faster and more reliable methods to stop disinformation at or close to the source. For example, a scalable online algorithm designed by Kim et al. selects which stories to fact-check by solving a novel stochastic optimal control problem (Kim et al., 2018). The underlying stochastic differential equation (SDE) model uses variables including flagging probabilities to automatically remove disinformation at the source, thereby outperforming traditional methods (Kim et al., 2018).
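Kim et al.'s method frames this as a stochastic optimal control problem over user flagging events; reproducing it is beyond this report, but the simplified heuristic below conveys the underlying intuition of prioritising stories for review by combining flagging signals with projected reach. The stories, fields and scoring formula are hypothetical illustrations, not the published algorithm.

```python
import heapq

# Simplified illustration (not the Kim et al. algorithm): prioritise stories
# for human fact-checking by combining how often users flag them with how
# fast they are spreading. Stories and numbers are hypothetical.
stories = [
    # (story_id, flags, impressions, shares_per_hour)
    ("story-a", 3, 1_000, 20),
    ("story-b", 40, 50_000, 900),
    ("story-c", 5, 2_000, 400),
]

def review_priority(flags, impressions, shares_per_hour):
    """Estimated-harm heuristic: flagging rate times projected future reach."""
    flag_rate = flags / max(impressions, 1)        # crude estimate of P(false)
    projected_reach = impressions + 24 * shares_per_hour
    return flag_rate * projected_reach

queue = [(-review_priority(f, i, s), sid) for sid, f, i, s in stories]
heapq.heapify(queue)                               # max-priority via negated scores
while queue:
    priority, sid = heapq.heappop(queue)
    print(f"review {sid} (priority {-priority:.1f})")
```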


In another attempt, disinformation can be mitigated near the source using a modified multivariate Hawkes process to weed out the sharing of detected fake news (Farajtabar et al., 2017). Though these measures have worked in closed-group experiments, they have yet to be adopted by mainstream social media firms.

Recommendation 1: Empower the AEC by enforcing a set period of time in which digital platforms must respond to AEC enquiries regarding disinformation, and increase resources for the AEC for managing digital and social media-related issues.

Recommendation 2: Promote the uptake of faster and more reliable methods of fact-checking on social media platforms that stop disinformation at the source.

Political Bots

Currently, digital platforms and independent regulators are using machine learning to detect bots (Fisher, 2020; Yang et al., 2019). This has been relatively effective, as seen in Facebook's announcement of its AI 'Deep Entity Classification', which it claims reduced the volume of spam accounts by 27% as of March 2020 (Fisher, 2020); a minimal sketch of this kind of feature-based classification follows Recommendation 3 below. However, bot-use regulation is currently underdeveloped. Shahbaz (2019) recommended the development of technology to automatically label bot accounts. However, a blanket disclosure requirement has been criticised as a threat to the right of free expression (Lamo, 2018). For this reason, narrowly tailored laws that address specific harms may be more appropriate (Lamo, 2018). Such targeted regulation would require a clear understanding and definition of bots and their various forms, yet the notion of what exactly a 'bot' is remains ill-defined (Lamo, 2018). Therefore, in order to advance policy surrounding the use of bots, the Australian government will need to develop a clearer understanding of bot technologies (Gorwa & Guilbeault, 2018).

Recommendation 3: Develop a clearer and more defined policy regarding the use of bots and bot technologies.
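The internal details of systems such as Deep Entity Classification are not public, but the general approach of supervised classification over behavioural account features can be sketched as follows. The features, training rows, labels and threshold behaviour are invented for illustration and are not Facebook's model.

```python
# Minimal sketch of supervised bot detection from behavioural account
# features. All data here is invented for illustration; this is not
# Facebook's Deep Entity Classification.
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Each row: [posts per day, followers / following, account age in days,
#            fraction of posts containing links]
accounts = [
    [2.0, 1.2, 900, 0.10],    # likely human
    [1.0, 0.8, 1500, 0.05],   # likely human
    [150.0, 0.01, 12, 0.95],  # likely bot
    [80.0, 0.05, 30, 0.90],   # likely bot
]
labels = [0, 0, 1, 1]         # 0 = human, 1 = bot

# Scale features, then fit a simple logistic classifier.
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(accounts, labels)

new_account = [[60.0, 0.02, 20, 0.85]]
print("P(bot) =", model.predict_proba(new_account)[0][1])
```

Even a toy classifier like this shows why definitional clarity matters for Recommendation 3: the behaviours treated as 'bot-like' are a design choice, and any legal definition of a bot would need to be robust to accounts that sit near the decision boundary.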


Astroturfing

There are currently no policies from the Australian government relating to astroturfing; however, on a case-by-case basis it may be prosecuted as a breach of Australian Consumer Law (Hall, 2011). Yet detecting astroturfing remains difficult, because astroturfing distinguishes itself from misinformation by the intent to mislead platform users. Proving intent has not been achieved or attempted by platforms, owing to the nature of astroturfing and the fact that such activities extend beyond the regulatory boundaries of Facebook and others. Facebook and others cannot detect accounts involved in astroturfing because they are not bot accounts and so cannot be detected by bot-detecting AI such as Facebook's 'Deep Entity Classification' or 'Botometer' (Fisher, 2020; Yang et al., 2019). There is also the concern that attempting to do so would increase the likelihood that digital platforms ban genuinely misinformed user accounts, which would contradict the preservation of democratic freedom within the social media space.

Furthermore, digital platforms' current demonetisation deterrent is ineffective against astroturfing because astroturfing is mostly not financially motivated (Weedon et al., 2017). Due to the clear lack of regulation of disinformation at the source, Australia remains prone to astroturfing campaigns. To address the shortcomings of digital platforms' regulation of this behaviour, the government and parliament should conduct further inquiry into the matter to identify potential solutions that address the source of disinformation campaigns. Strategies to counteract astroturfing are currently under development, with one method, called the inoculation strategy, showing promising experimental results. Zerback et al. conducted a study in which subjects were provided with refutational pre-emption messages prior to persuasive attacks by astroturfing comments (Zerback et al., 2020). The results showed that this inhibited the persuasive effects of astroturfing, and that issue-specific inoculation messages worked better than abstract ones (Zerback et al., 2020).

Recommendation 4: Inquire further into the issue of astroturfing in order to identify and promote the use of technological solutions which address the source of astroturfing campaigns.


Echo Chambers

Digital platforms are working towards combating algorithmic defects that may result in echo chambers or bias; however, there remains insufficient knowledge of, and government regulation over, this form of technology. Facebook has reported that it created a global integrity team targeting issues relating to polarisation, recalibrating news feeds to prioritise higher-quality and more diverse information. It also restricts recommendations for content which has been flagged as false or as violating Facebook's Community Standards (Murphy et al., 2020). Furthermore, Facebook and others are seeking to eliminate information biases in their algorithms using AI tools that flag discriminatory content (Wiggers, 2018; Murphy et al., 2020).

However, social media algorithms remain an arcane 'black box' and continue to negatively impact social discourse. Regulating social media algorithms is thus a concern for the ACCC, which recommends more collaboration between digital platforms and news media in order to 'develop and refine (AI) technology that serves both governments' policy interests, as well as the interests of consumers and citizens' (Wilding et al., 2018). To increase transparency, the ACCC released a draft Mandatory News Media Bargaining Code (2020). This code sets out minimum obligations for digital platform conduct and, unlike its predecessor, is enforceable through penalties and binding dispute resolution mechanisms for negotiations between digital platforms and news platforms (Taylor, 2020). Under the code, digital platforms will be required to provide 28 days' notice of changes to their algorithms that will affect news ranking (Taylor & Meade, 2020). This will increase transparency surrounding how algorithms operate, subsequently addressing the bargaining power imbalance between Australian news media businesses and major digital platforms (ACCC, n.d.). However, the code will benefit large media outlets to the exclusion of independent news makers, and this may endanger the democratic diversity of news by promoting a monopoly on digital platforms by large media corporations such as News Corp (McDuling, 2018). According to Kevin Rudd, the ABC will also be marginalised as an outcome of this code, because it will not be covered and will not be a recipient of funding or algorithmic insights from digital platforms (Rudd, 2020).

Comparison with traditional regulatory approaches may also help. Regulation typically comes into effect when an industry causes negative externalities, for example greenhouse gas


emissions produced by fossil fuel companies. Comparatively, the social media industry has created negative externalities arising from its use of algorithms, such as ideological polarisation (Persily, 2017). It has therefore been suggested that a framework of principles-based regulation is a better approach than rules-based regulation (Noore, 2018). Accordingly, some have recommended the implementation of a national safeguard program in which policymakers ensure that all proposed research and development plans in big tech undergo a thorough assessment of any potential effects of manipulation and electoral interference (Shahbaz, 2019).

Alternatively, the government could take a prophylactic approach to regulation (Rahman, 2018). This would mean comprehensive government regulation that reorganises the entire industry with the goal of remedying the causes of dysfunction, not just mitigating symptoms. Rahman proposes that efforts be made to reduce the incentives and conflicts of interest within social media corporations that magnify the risks of exploitation and fraud. Mechanisms to achieve this may include converting private social media companies into public utilities and breaking up larger companies to foster competitiveness (Rochefort, 2020).

Recommendation 5: Investigate alternative frameworks for regulating social media algorithms, for example principles-based regulation and breaking up larger social media companies.


INTERDISCIPLINARY COLLABORATION

As this report prioritised social media misconduct and policy, it required a multitude of disciplinary insights spanning social, technical and political angles. Our mix of disciplines from international relations, finance, science and others was highly effective in achieving our defined objectives.

Evaluating our interdisciplinary approach, we had substantial resources dedicated to international policy understanding and empirical research. These two areas were key strengths which supported a thorough literature review and wide-reaching implications. By integrating views, the team was able to draw on higher-order thinking skills such as evaluating and analysing each other's contributions. This led to effective group discussion, strong feedback loops and, ultimately, a highly integrated report.

Firstly, from an empirical background, Richard, Justin, Josh and Emily, engaged in engineering, medicine and commerce degrees, provided strong investigative skills. From a research perspective, they were able to apply systematic and critical thinking to achieve detailed research outcomes. Secondly, with majors in politics and international relations, Jeany and Justiniana focused on providing the regulatory and political frameworks for defining the policy implications.

However, with its strength in technical and political lenses, the team was limited in its socially based perspective on the problem. While members integrated certain insights from international relations and politics, greater depth of insight into the day-to-day impacts of high-level misconduct would have been beneficial. This could have allowed the team to empathise more closely with the audience affected on social media. Furthermore, it would have provided KPMG with greater detail on the far-reaching impacts of these behaviours.

Ultimately, the team achieved effective synthesis by integrating multiple approaches and engaging in consistent conversations, meetings and feedback. However, if the team had been able to draw on skill sets from additional disciplines such as social work, education or psychology, we could have developed deeper and more well-rounded implications. Furthermore, given the policy focus, an additional student with a law background may have provided more analytical legal insights.


CONCLUSION

The aim of this report was to give insight into social media misconduct and recommend policies for the purpose of enhancing KPMG's awareness of the issues. Case studies provided real-life examples of misconduct and were effective in supporting the evidence behind policy suggestions. A double diamond research approach was applied to facilitate a wide range of individual skills whilst maintaining a continuous flow of information towards a solution.

Our findings focused on four forms of misconduct: political bots, disinformation campaigns, echo chambers and astroturfing. Political bots were found to create artificial consensus on public opinion, particularly during elections, through the megaphone effect. Social media algorithms help amplify shills through echo chambers. Artificial intelligence trained on historical data sets can encode and amplify human biases; content-based techniques can then create filter bubbles that isolate misled groups, allowing deceptive material to spread very quickly. Scaled disinformation campaigns demonstrated a much higher rate of information transmission than the truth. In astroturfing, domestic audience manipulation uses shills to create seemingly grassroots campaigns that manufacture the social media landscape and steer political discourse.

This report also recommends policies to combat social media misconduct. As regulation of bot use is underdeveloped, identifying the forms of bots and bot operation is required before more defined policy can be created. While policies are in place to prevent the spread of misinformation, the response time to act on its spread is poor; it is recommended that a set period of time be defined within which social media companies must respond to the AEC. Investigation into the structure of social media algorithms would provide a better understanding of echo chambers and a framework for legislation. Promotion of faster and more reliable methods of fact-checking would serve as an internal improvement to stopping misinformation at the source. Regulation surrounding astroturfing extends beyond the reach of social media companies, as it involves a human element in the spread of false statements; hence the promotion of technological solutions and further enquiries into the issue would be beneficial.

KPMG can use this research report to deploy its own investigations and provide solutions to digital misconduct. Employing students from a wide range of backgrounds as a form of collaboration can be extended beyond a university project. Connections to individuals within the industry would cement our understanding of these issues as well as provide a greater understanding of the technical aspects of social media algorithms.


CONTACTS

Richard FitzGerald
B Commerce/Advanced, Finance & International Business
https://www.linkedin.com/in/richard-fitzgerald?originalSubdomain=au

Tianjia (Emily) Hou
B Commerce, Finance & Professional Accounting
https://www.linkedin.com/in/emily-tianjia-hou-605229195/

Justin Wong
B Science and MD, Physiology, Immunology & Pathology
https://www.linkedin.com/in/justin-wong-a85234172/

Justiniana Remedi
B Arts/Advanced Studies, Politics & International & Global Studies
https://www.linkedin.com/in/justina-remedi-20b755189

Joshua Rajendra
B Aeronautical Engineering (Hons)/B Commerce, Aeronautical Engineering & Finance
https://www.linkedin.com/in/joshua-rajendra-80a724186

Jieyun (Jeany) Byun
B Arts, International Relations & Economics
jbyu5156@uni.sydney.edu.au


REFERENCES
