
About

PhD Candidate at RMIT University & the ARC Centre for Excellence in Automated Decision-Making and Society (ADM+S) | Sessional Tutor for Psychology

Computational Models of Cognition | Resource-Rational Sequential Learning | Human-Computer Interaction

I research how people process information. My current focus is on how this processing applies to political messages, especially those designed to mislead or provoke strong emotions. I am currently mapping how the order and repetition of messages (what shows up first or later, and how ideas cluster over time) change the mental effort people devote to evaluating claims. By pinpointing these moments, information systems could place nudges or fact-checks exactly where they are most likely to trigger closer scrutiny.

The approach is to assess how computational models can be used to infer users' real-time processing (e.g. how quickly they skim, when they slow down, and which cues grab their attention). Embedding a model of the learner into information platforms paves the way for adaptive systems that tailor corrections, explanations, or practice material to the moment a person is most receptive, turning generic facts into personalized learning aids.


Using a psychological method to increase performance of LLM for relevance judgement and ranking tasks (2025)

Summary and links coming soon.

Every Idea Ever /hr (2025)

Visit Every Idea Ever, a website I built inspired by the Infinite Monkey Theorem and the Library of Babel. Each hour, a letter is iteratively added and fed into an LLM.

When order shapes belief: sequence design for bounded agents (2025)

People rarely encounter information in isolation. Instead, they encounter it through algorithmic feeds that embed items within sequences and contexts that shape their salience, credibility, and integration with prior beliefs. We argue that human learning from recommender systems, and the ways it may differ from learning elsewhere, can be understood by treating credibility as a computational problem of inference over time. The learner must identify reliable cues and sources, allocate attention, and update beliefs under constraints, a problem the human learner is optimized to detect and exploit. We claim that this problem receives a rough solution from three fundamental control systems that the brain uses to balance pairs of desirable but competing performance dimensions: speed versus accuracy, stability versus flexibility, and generalization versus multitasking. If improvements along one dimension predictably degrade the other, then sequences that push the system toward speed, stability, or divided attention should yield predictable biases in belief updating. These trade-offs shape belief updating under sequenced exposure and help explain common findings in the misinformation and digital persuasion literature.

Sensational headlines impact evidence accumulation during veracity decisions through polarization (2025)

Study description and materials coming soon.

Making accuracy easier: credibility perceptions as a latent cause assignment problem (2025)

The challenge of identifying reliable content amid widespread falsehoods requires a capacity for well-calibrated credibility judgments. We propose that humans address this challenge using statistical learning, the same cognitive process that helps us acquire language and recognize patterns. However, this adaptive mechanism may backfire in algorithmically curated information environments, leading to increased polarization and susceptibility to misinformation. We first outline a latent-cause model of credibility perception, which describes how structure learning allows people to categorize information based on recurring patterns (source identity, emotional tone, in-group signals) and then use these categories as shortcuts when evaluating new information. While this process is cognitively efficient, it can make users more vulnerable to misinformation that matches their learned patterns and more resistant to corrections that violate them. We present a Bayesian framework linking environmental statistics to belief formation, propose three testable predictions, and outline experimental protocols for studying these phenomena in controlled settings.
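The flavour of the latent-cause idea can be sketched in a few lines. The snippet below is a toy illustration, not the framework from the paper: it assumes a Chinese Restaurant Process prior over causes and a crudely smoothed likelihood over a single observed feature (e.g. source identity), both of which are my own simplifications for exposition.

```python
def latent_cause_posterior(history, observation, alpha=1.0):
    """Toy posterior over latent causes for a new observation.

    history: list of (cause_id, feature) pairs already assigned.
    observation: feature of the new item (e.g. a source identity).
    alpha: CRP concentration; higher favors positing a new cause.
    """
    causes = {}
    for cause, feat in history:
        causes.setdefault(cause, []).append(feat)
    n = len(history)
    scores = {}
    for cause, feats in causes.items():
        prior = len(feats) / (n + alpha)  # CRP prior: rich get richer
        # Add-one smoothed likelihood of seeing this feature under the cause
        like = (feats.count(observation) + 1) / (len(feats) + 2)
        scores[cause] = prior * like
    # A brand-new cause, with a flat base likelihood for the feature
    scores["new"] = (alpha / (n + alpha)) * 0.5
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}
```

Under this sketch, items from a familiar, pattern-consistent source get absorbed into an existing cause (and inherit its credibility), while a correction that violates the learned pattern is more likely to be assigned to a new cause rather than revising the old one.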

Confidence as inference over time: the effects of sequences on misinformation susceptibility (2025)

Overview and preregistration link coming soon.

Offgrid Online: Visualizing the Environmental Cost of Everyday Computation (2025)

Offgrid Online is a browser extension that visualizes the environmental cost of everyday digital activity. Every time a user searches the web or uses AI tools, a digital forest is affected: trees are cut down and a furnace burns, reflecting the energy and environmental resources consumed. The forest mirrors a user's digital footprint in real time. But users can also counteract this by donating to ecological charities, regenerating their forest and watching trees grow back, skies clear, and wildlife return. The project acts as a participatory artwork, an interactive data visualizer, and a behavioural feedback tool: Offgrid Online visualizes emissions and ties restoration to everyday behaviour. The result is a responsive forest that makes people's environmental data personal and changeable.

Read the Artist Statement (txt)

ADM+S Symposium — Automated social services: Building inclusive digital futures (2025)


Brief summary, panel details, and links coming soon.

Report on the 3rd Workshop on NeuroPhysiological Approaches for Interactive Information Retrieval (NeuroPhysIIR 2025) at SIGIR CHIIR 2025

The International Workshop on NeuroPhysiological Approaches for Interactive Information Retrieval (NeuroPhysIIR’25), co-located with ACM SIGIR CHIIR 2025 in Naarm/Melbourne, Australia, included 19 participants who discussed 12 statements addressing open challenges in neurophysiological interactive IR. The report summarizes the statements presented and the discussions held at the full-day workshop.

Read the report (PDF)

NeuroPhysIIR Workshop at ACM SIGIR CHIIR 2025 (Workshop)

Attended and participated in the International Workshop on NeuroPhysiological Approaches for Interactive Information Retrieval (NeuroPhysIIR'25) at ACM SIGIR CHIIR 2025, and contributed to its report.

Papyrus: The Literature at Unit Scale (2025)

The scholarly record is increasingly fragmented: more than four million peer-reviewed papers now appear each year, yet mismatched metadata, discipline-specific jargon, and paywalls hinder efficient discovery. When the volume of information overwhelms a finite capacity, researchers must either spend months undertaking cross-disciplinary syntheses or retreat into an ever-narrower niche, fueling professional fatigue and the well-documented reasoning errors that accompany cognitive overload.

We re-frame this crisis as a tractable machine-learning problem. Transformer-based language models embed full-text articles in a shared semantic space, while citation graphs add complementary topological cues. Together these representations can automatically surface convergent findings, flag near-replications, and spotlight neglected intersections. Each of these tasks currently overwhelms manual review. Layering an adaptive recommender that infers a reader's evolving goals yields the benefit of passive discovery of high-utility papers as well as an explicit push toward serendipity, both measurable with standard ranked-retrieval and diversity metrics.

Papyrus is a desktop and mobile platform that atomises every article into fine-grained Units of Knowledge, or minimal, context-free propositions expressed in subject-predicate-object form. Each Unit inherits its parent article's embedding, enabling intra-paper summarisation and inter-paper synthesis through a two-stage pipeline (predicate-argument extraction followed by cosine-similarity clustering). Granularity adapts to expertise: novices see high-level explanatory Units, while domain experts receive technical Units that reveal citation pathways and unresolved contradictions.
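The two-stage pipeline can be sketched schematically. The snippet below is an illustrative stand-in, not the Papyrus implementation: it assumes a naive subject-predicate-object split in place of real predicate-argument extraction, toy embedding vectors in place of transformer embeddings, and a greedy single-pass grouping in place of a proper clustering algorithm.

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def extract_units(sentence):
    # Stage 1 (toy): split a sentence into a subject-predicate-object Unit.
    # Real predicate-argument extraction would use a semantic parser.
    words = sentence.rstrip(".").split()
    if len(words) < 3:
        return []
    return [(words[0], words[1], " ".join(words[2:]))]

def cluster_units(units, embed, threshold=0.8):
    # Stage 2 (toy): greedy clustering by cosine similarity. A Unit joins
    # the first cluster whose seed member it matches above the threshold.
    clusters = []
    for u in units:
        vec = embed(u)
        for c in clusters:
            if cosine(vec, embed(c[0])) >= threshold:
                c.append(u)
                break
        else:
            clusters.append([u])
    return clusters
```

Inter-paper synthesis then amounts to reading off clusters that contain Units from different source articles; intra-paper summarisation keeps only one representative Unit per cluster within an article.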

Semester 1: Psychology of Everyday Thinking (1st year subject) — 2025

Began tutoring the course Psychology of Everyday Thinking for the first time, at the City Campus of RMIT University.

Dataset Sentiment Classifier for Experimental Stimuli (2025)

A new protocol is emerging in misinformation research that aims to improve the rigor of stimulus design. But pretesting statements for emotional or categorical content is often costly and places an undue burden on participants, who have to repeatedly evaluate large sets of stimuli simply to verify that they meet predefined criteria.

As a preliminary solution, machine learning offers a promising route for approximating human-like text perceptions. I've developed a simple tool on Kaggle that allows researchers to upload a dataset of statements and receive outputs categorizing each one by emotional sentiment and its intensity. Essentially, it serves as a user-friendly interface for psychological researchers without a technical background, providing access to a sentiment classifier without requiring extensive coding.
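To make the input/output shape of such a tool concrete, here is a minimal stand-in: a hand-rolled lexicon scorer, not the classifier behind the Kaggle tool. The word lists and intensity weights are invented for illustration; a real tool would use a trained model.

```python
# Illustrative sentiment lexicons with invented intensity weights
POSITIVE = {"safe": 1, "effective": 2, "breakthrough": 3}
NEGATIVE = {"risky": 1, "dangerous": 2, "deadly": 3}

def classify(statement):
    # Score = sum of positive weights minus sum of negative weights
    words = statement.lower().rstrip(".!?").split()
    score = (sum(POSITIVE.get(w, 0) for w in words)
             - sum(NEGATIVE.get(w, 0) for w in words))
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {"statement": statement, "sentiment": label, "intensity": abs(score)}

def classify_dataset(statements):
    # Batch interface mirroring the upload-a-dataset workflow
    return [classify(s) for s in statements]
```

The point is the workflow, not the model: researchers upload a list of candidate stimuli and get back one labelled row per statement, which they can filter against their predefined sentiment and intensity criteria.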

Warning: this method has not yet been validated against established psychological measures at the time of writing. Perhaps a future PhD project could take this further? Someone, pick it up!

Access the tool on Kaggle

Post-SWIRL’25 mini-conference (following the 4th SWIRL) — 2025


Brief recap and links coming soon.

Read the Poster I presented (PDF)

ADM+S Symposium Sydney — Automated Mobilities (2024)

Brief summary and link coming soon.

St Peter Says… (2024)

Project statement and link coming soon.

Leveraging the serial position effect to enhance auditory recall for voice-only information retrieval (2024)

Prototype description and evaluation plan coming soon.

From Conflict Detection to Arbitration in the Continued Influence Effect of Misinformation (2024)

Draft abstract and figures coming soon.

SIGIR-AP 2024 — 2nd International ACM SIGIR Conference on Information Retrieval in the Asia Pacific

Talk/poster details and link coming soon.

Computational Complexity of Decision Making Workshop (2024)

Workshop notes and resources coming soon.

Semester 2: Cognitive Psychology (2nd year subject) — 2024

Began tutoring the course Cognitive Psychology for the first time, at the City Campus of RMIT University.

Semester 1: Philosophy & Methodology of Psychology (3rd year subject) — 2024

Began tutoring the course Philosophy & Methodology of Psychology for the first time, at the City Campus of RMIT University.

PhD Proposal — Impact of Information Environments on the Perception of Credibility (2024)

Relevance and credibility are shaped by the information environments people inhabit and the noise they encounter. Information environments range from feeds that consistently surprise the user to those that consistently affirm them, each altering attention and memory systems in distinct ways. Previous research has shown that noise, defined as inconsistent or distracting information, can disrupt attention, increase cognitive load, and reduce efficiency.

As algorithms increasingly become the main source of information globally, it is essential to investigate how the structure these algorithms impose on information affects the way individuals maintain and revise their beliefs, accounting for the varying presence of noise in these feeds. Understanding how these varying environments impact perception is key to unraveling the cognitive and neural mechanisms involved in human-information interaction.

The primary aim of this topic is to explore the relationship between different information environments and their impact on perceptions of credibility. Specifically, it investigates how various sequences affect information processing and beliefs about the sequenced information, and examines the potentially disruptive effects of noise on credibility judgements and the maintenance of cognitive control.

Previous research has revealed adaptive strategies employed by the meta-cognitive system in noisy environments. It is not yet known whether these strategies are also used in algorithmically sequenced news feeds, which could explain key findings in misinformation research and people's susceptibility to polarized narratives that confirm existing beliefs. These investigations aim to contribute to the literature developing strategies to moderate credibility perception in diverse information settings.