Misinformation game

The aim is to develop a social-media simulation application that can be used for (web-based) experiments.

Architecture

Client Interface:

The simulation application will mimic a serious game and will allow user engagement. It is expected that users should be able to access the application from their computer (or mobile device), but there is no requirement for a distributed setting. Participants will be tested individually in the lab or online.

Each participant will be presented with a large number of posts (e.g., tweets or headlines, with or without images), one at a time. Each post will be associated with a fictional source (i.e., one of n virtual network members). Posts will provide true or false information.

For each post, participants decide whether to like, share or skip it. [Optional functionality: It would be good to have some flexibility with options (e.g., adding dislike or flag (as false) options, or the ability to reply to a post with a short message)].

Admin Interface:

We need a simple admin interface where one can add/delete/edit the posts. The administrator should be able to modify parameters (see below), enable/disable certain options, and download the results as a CSV file. Admins should be able to save settings, so that a number of participants can be tested with the same settings, and to input specific task instructions [optional functionality: potentially over multiple screens] that are displayed at the beginning.

Sampling and Parameters

Posts should be drawn from a spreadsheet or another type of repository, such as an unstructured database (e.g., Firebase). Likewise, source handles and avatar images should be drawn from a file/repository. Some materials will be provided, including a set of text-based messages and headline images, each classified as true or false.
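As an illustration of the spreadsheet-backed repository, the sketch below loads posts from a CSV file. The column names (post_id, source_handle, is_true, credibility_impact) are assumptions for illustration, not part of the specification; a real implementation would match whatever layout the provided materials use.

```python
import csv
import io

# Hypothetical CSV layout (column names are assumptions, not spec):
# one row per post, with a veracity flag and a per-post credibility impact.
SAMPLE_CSV = """post_id,source_handle,text,is_true,credibility_impact
1,@newsbot,"Miracle cure found in common fruit!",false,-2
2,@factcheck,"Water boils at 100 C at sea level",true,1
"""

def load_posts(stream):
    """Read posts from a CSV stream into a list of dicts, converting
    the veracity flag to bool and the impact to int."""
    posts = []
    for row in csv.DictReader(stream):
        row["is_true"] = row["is_true"].strip().lower() == "true"
        row["credibility_impact"] = int(row["credibility_impact"])
        posts.append(row)
    return posts

posts = load_posts(io.StringIO(SAMPLE_CSV))
```

The same loader shape would apply to source handles and avatar file paths drawn from a second sheet or file.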

Sampling of posts and sources must be reasonably flexible, ideally allowing anything from fully random sampling, to random sampling under constraints (e.g., the true:false ratio of posts overall and per source), to complete control over the sequence (i.e., specifying post X from source Y in sequence position Z, e.g., by ordering the spreadsheet or via the admin interface).
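The middle case (random sampling under a true:false ratio constraint) can be sketched as follows; function and parameter names are illustrative assumptions. Fully random sampling is the special case of drawing uniformly from the pooled posts, and full sequence control bypasses sampling entirely.

```python
import random

def sample_sequence(true_posts, false_posts, n_trials, true_ratio, rng=None):
    """Build a trial sequence honouring a fixed true:false ratio.

    Draws the required number of true and false posts without
    replacement, then shuffles so veracity is not predictable
    from sequence position.
    """
    rng = rng or random.Random()
    n_true = round(n_trials * true_ratio)
    seq = rng.sample(true_posts, n_true) + rng.sample(false_posts, n_trials - n_true)
    rng.shuffle(seq)
    return seq
```

A per-source ratio constraint would apply the same logic within each source's pool of posts before interleaving.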

The number of virtual network members (i.e., sources) and the number of posts should be editable, ranging from 1 to n (n = number of handles in the spreadsheet) and 1 to m (m = number of posts in the spreadsheet), respectively (with m ≥ n, obviously). The handles of virtual network members will feature a badge that progressively maps their credibility, thereby identifying accounts as more or less reliable sources of information (e.g., through a 0-100 number in a graded colour code). Participants' own credibility rating should also be prominently displayed at all times (they don't necessarily need a handle of their own, though). The credibility starting values can be sampled from a normal distribution with editable parameters (e.g., M, SD, with hard 0-100 limits), although again users should have the option to specify/edit starting values manually. Sharing true or false information will dynamically improve or worsen the credibility rating. The impact of each post should also be coded in the spreadsheet (a default could be +1 for true and -2 for false posts, but this should be editable both generally and on a post-by-post basis).
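A minimal sketch of the credibility mechanics described above, assuming the stated defaults (normally distributed start values clipped to 0-100; +1 for true and -2 for false posts); all names and default values here are placeholders the admin interface would make editable.

```python
import random

def credibility_start(n_sources, mean=50.0, sd=15.0, rng=None):
    """Sample starting credibility scores from a normal distribution,
    clipped to the hard 0-100 limits (M and SD are editable defaults)."""
    rng = rng or random.Random()
    return [min(100.0, max(0.0, rng.gauss(mean, sd))) for _ in range(n_sources)]

def update_credibility(score, post_is_true, true_impact=1, false_impact=-2):
    """Apply the per-post credibility impact (defaults +1 true / -2 false,
    overridable globally or post-by-post), keeping the score in 0-100."""
    delta = true_impact if post_is_true else false_impact
    return min(100.0, max(0.0, score + delta))
```

The badge display would simply render the current clipped score on its graded colour scale.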

Each handle/post should also display the source's current follower number. Participants' own number of followers should also be prominently displayed at all times. Initial start values can be sampled from an appropriate (right-skewed) distribution, with editable parameters (e.g., min, max, median, skew). Each response to a post (e.g., each like or share) is associated with an impact on the participant's follower number. Defaults could be that likes/shares of true posts have a small positive impact, while shares/likes of false posts have an impact ranging from somewhat negative to very positive. As a default, shares could have, on average, twice the impact of likes. If dislikes are added, they could have a small negative impact if the disliked post is true, and a small positive impact if the post is false. All impact values should be sampled from normal distributions with appropriate defaults (e.g., small range for true posts, larger range for false posts) but editable parameters.
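One way to realise the follower mechanics above is sketched below: a log-normal distribution gives the right-skewed start values, and per-response impacts are drawn from normal distributions whose means/SDs follow the stated defaults (narrow for true posts, wide for false posts, shares twice the mean impact of likes). Every numeric default here is an assumed placeholder for the editable parameters.

```python
import math
import random

def follower_start(n, median=200.0, sigma=1.0, rng=None):
    """Sample right-skewed starting follower counts from a log-normal
    distribution; median and sigma stand in for the editable
    min/max/median/skew parameters."""
    rng = rng or random.Random()
    return [round(rng.lognormvariate(math.log(median), sigma)) for _ in range(n)]

def follower_impact(action, post_is_true, rng=None):
    """Sample the follower-count impact of one response from a normal
    distribution: small positive range for true posts; a wider range
    (somewhat negative to very positive) for false posts; shares average
    twice the impact of likes. All means/SDs are placeholder defaults."""
    rng = rng or random.Random()
    mean, sd = (2.0, 1.0) if post_is_true else (5.0, 6.0)
    if action == "share":
        mean *= 2.0
    return rng.gauss(mean, sd)
```

A dislike option, if enabled, would be one more (mean, sd) pair with the sign of the mean flipped relative to the post's veracity.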

Data Recorded

On a trial-by-trial basis (i.e. for each post displayed), the program should record:

Background

In an initial experiment, participants' aim will be to achieve a growing number of followers, while maintaining some self-determined level of acceptable credibility. The idea is that sharing fake news will, on average, grow your follower numbers most quickly but will drive your credibility rating down, so there is a tension between the two goals. The experiment will test whether social-norms-based shaming of communicators in a network can reduce subsequent dissemination of misinformation. Automated online trust badges have been impactful in e-commerce, and may thus also prove useful in social-media applications. Instructions will or will not feature provision of a social norm, such as "95% of users on this platform agree that sharing of fake news is inappropriate and harmful; it's the wrong thing to do!"

Other experiments might provide the norm that "95% of users on this platform agree that sharing of offensive or insensitive posts is inappropriate and harmful; it's the wrong thing to do!" and then measure to what extent participants show opposition to norm-violating (e.g., racist) posts through dislikes or dissenting comments.

Client


Contact: Ullrich Ecker
Phone: 0458220072
Email: [email protected]
Preferred contact: Email
Location: UWA

IP Exploitation Model


The IP exploitation model requested by the Client is: Creative Commons (open source) http://creativecommons.org.au/



Department of Computer Science & Software Engineering
The University of Western Australia
Last modified: 28 May 2021
Modified By: Michael Wise