WARNME, IS THIS EVEN FOR ME?

What is this project?

A student toolkit for understanding and evaluating the emergency mass notification alert system at UC Berkeley and/or at your university. This toolkit was developed by centering those of us most in need, as described in Angela Glover Blackwell’s “The Curb-Cut Effect.” It is the hope of the research team that university students, regardless of their experience and background, who are impacted by their school’s application of the Clery Act can better understand the system and, with deepened understanding, bring what may need to change into student community discourse. The intent of this project is to disrupt data access power structures and invite all people to have the resources and information to improve their lives and the systems that affect the quality of their lives. A range of resources was applied or developed and then tested to serve as a toolkit or “roadmap” for evaluating university emergency notification systems, for students and by students.

Research Methods

User Experience Research

To understand the problem space and validate that this was an area worth researching, each member of the UXR team conducted three short exploratory interviews to understand students’ perspectives on the WarnMe system across the categories of interpretation, impact, and needs. These initial interviews guided the researchers in choosing which topics to explore more deeply in the official study interviews and, once those were completed, in the diary study.

Screening and recruitment were completed via a Google Form distributed through word of mouth, flyers, posts in group chats, and posts on social media. The only criterion was to be a UCB student, whether an undergraduate, graduate, or PhD student; community members could also be accepted for the interview process. Ten interviews were conducted, each lasting approximately 30-45 minutes, on the topics of students’ comprehension of WarnMes, students’ campus safety practices, students’ interpretation of and emotional response to WarnMes, and students’ needs regarding WarnMes.

Once the interviews concluded, a diary study was conducted. Eleven participants responded to a Google Form for 9 to 11 WarnMes across two weeks. The WarnMes that participants completed a form about were either sent out by the university during the two-week window or drawn from past WarnMes, dating back up to two years, that the researchers selected in case the university did not send enough WarnMes during that period. The topic areas covered by the diary study were purposefully similar to the interview topics: comprehension of the WarnMe that was sent out, the emotional impact the WarnMe had on the participant, and how the WarnMe affected the participant’s needs.

To analyze the interviews, the researchers individually coded their own interviews, pulling out themes and representative quotes for each question. They wrote descriptions of the themes they found and entered them into individual spreadsheets. From there, as a group, the researchers grouped similar themes and placed them into categories based on the research questions.

Coding WarnMes and Natural Language Processing

The project involved analyzing WarnMe messages collected from August 2021 to October 2023 through public records requests. A codebook was developed to code the messages across multiple categories: location granularity, scene descriptions, and the personal identifiers used for victims and suspects.
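
Below is a minimal sketch of one way such a codebook could be represented in code; the specific category values are hypothetical stand-ins, not the study’s actual codes.

```python
# Hypothetical codebook structure: each category maps to its allowed codes.
# The values below are illustrative placeholders, not the study's codes.
CODEBOOK = {
    "location_granularity": {"exact_address", "block", "campus_area", "none"},
    "scene_description": {"present", "absent"},
    "victim_identifiers": {"race", "gender", "age", "clothing", "none"},
    "suspect_identifiers": {"race", "gender", "age", "clothing", "none"},
}

def validate(annotation: dict) -> bool:
    """Return True if every assigned code is a value the codebook allows."""
    return all(
        code in CODEBOOK.get(category, set())
        for category, code in annotation.items()
    )

# Example: a single annotated message.
print(validate({"location_granularity": "block", "scene_description": "present"}))
```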

Four annotators each coded 200 of the 315 messages, with a shared subset of 100 messages coded by all four. From those 100 messages, Fleiss' kappa, a statistical measure of the reliability of agreement between multiple annotators, was calculated. Inter-annotator agreement across all code categories was moderate, with a weighted-average Fleiss' kappa of 0.4828. The codes with the highest agreement were selected for each WarnMe, and the frequencies of the various code categories were analyzed.
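
As an illustration of this step, Fleiss' kappa can be computed with the statsmodels library; whether the team used this particular tool is an assumption, and the toy table below (one row per message, one column per annotator, with made-up labels) only stands in for the 100 doubly-coded messages.

```python
import pandas as pd
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical layout: one row per message, one column per annotator.
codes = pd.DataFrame({
    "annotator_1": ["street", "street", "campus", "none"],
    "annotator_2": ["street", "block",  "campus", "none"],
    "annotator_3": ["street", "street", "campus", "block"],
    "annotator_4": ["street", "block",  "campus", "none"],
})

# aggregate_raters turns an (n_items, n_raters) array of category labels
# into the (n_items, n_categories) count table that fleiss_kappa expects.
table, _categories = aggregate_raters(codes.to_numpy())
kappa = fleiss_kappa(table)  # the study reports a weighted average of 0.4828
print(f"Fleiss' kappa: {kappa:.4f}")
```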

The NLP analysis began with data preprocessing to refine the dataset by removing URLs, non-standard characters, and excess whitespace. A sentiment analysis pipeline was then implemented using Hugging Face's sentiment-analysis pipeline, categorizing each comment as positive or negative, and the results were documented in the dataset.
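
A minimal sketch of these two steps, assuming the texts live in a pandas column; the file and column names here are hypothetical, and the exact cleaning rules are an assumption.

```python
import re
import pandas as pd
from transformers import pipeline

def clean_text(text: str) -> str:
    """Remove URLs, non-standard characters, and excess whitespace."""
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)  # strip URLs
    text = re.sub(r"[^A-Za-z0-9.,!?'\s]", " ", text)    # strip odd characters
    return re.sub(r"\s+", " ", text).strip()            # collapse whitespace

df = pd.read_csv("comments.csv")            # hypothetical input file
df["clean"] = df["text"].map(clean_text)    # hypothetical column names

# The default sentiment-analysis pipeline labels each text POSITIVE or NEGATIVE.
classifier = pipeline("sentiment-analysis")
df["sentiment"] = [r["label"] for r in classifier(df["clean"].tolist())]
df.to_csv("comments_with_sentiment.csv", index=False)
```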

We manually annotated the entire dataset by hand according to six emotions (joy, love, anger, sadness, fear, and surprise), which provided a baseline for evaluating the model's performance. A pre-trained model (bhadresh-savani/distilbert-base-uncased-emotion) from Hugging Face was employed to analyze emotions in the comments. The model achieved 80.65% accuracy, excelling at identifying sadness and anger but struggling with surprise and love. This indicates that there is room for improvement in recognizing less frequent or more subtly expressed emotions.
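
The sketch below shows how such an evaluation could be run with the named Hugging Face model; it assumes a hand-labeled gold column that uses the model's lowercase emotion labels, and the file and column names are hypothetical.

```python
import pandas as pd
from transformers import pipeline

# The pre-trained emotion model named in the study; it outputs one of six
# labels: sadness, joy, love, anger, fear, surprise.
emotion = pipeline(
    "text-classification",
    model="bhadresh-savani/distilbert-base-uncased-emotion",
)

df = pd.read_csv("comments_annotated.csv")  # hypothetical annotated file
preds = [r["label"] for r in emotion(df["clean"].tolist(), truncation=True)]
df["predicted_emotion"] = preds

# Accuracy against the manual annotations (the study reports 80.65%),
# assuming the gold column uses the same label strings as the model.
accuracy = (df["predicted_emotion"] == df["gold_emotion"]).mean()
print(f"Accuracy: {accuracy:.2%}")
```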

To enhance sentiment analysis, three machine learning models were tested and compared; a hypothetical comparison setup is sketched below.
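
Since the three models are not enumerated here, the following sketch uses three common classifiers on TF-IDF features purely as placeholders; the model choices and data names are hypothetical, not the study's.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

df = pd.read_csv("comments_annotated.csv")  # hypothetical file, as above

# Placeholder candidates; the study's three models are not named here.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "linear_svm": LinearSVC(),
    "naive_bayes": MultinomialNB(),
}

for name, model in candidates.items():
    clf = make_pipeline(TfidfVectorizer(), model)
    scores = cross_val_score(clf, df["clean"], df["gold_emotion"], cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2%}")
```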

The study underscores the evolving complexity and capability of NLP technologies in automated emotion detection from text data, highlighting the potential for more accurate and nuanced sentiment analysis in future research.


Positionality Statement

We are a group of graduate student researchers at the University of California, Berkeley. Our research was conducted at UCB through a program requirement called the capstone project. Our identities include Black, Global South, and white experiences, as well as queer, trans, and cisgender experiences. Our research was intentionally limited to avenues and research methods we felt could be most safely replicated by students of any background attending schools federally mandated to adhere to the Clery Act. This included, but was not limited to, no contact with police or police representatives and working independently of university oversight or input. We used publicly available data and conducted one interview with the Director of Clery Compliance. Our work unabashedly centers those at most risk of harm: the diversity of university students, especially those with marginalized identities, who have few to no avenues for feedback or input on how their university interprets and applies the Clery Act and manages its emergency alert system.

Our university is an internationally ranked, highly resourced public university with a $6.9 billion endowment. Those resources, which we contribute to through our tuition and fees, supported every element of our project workstreams, including Google Suite, the UCB Libraries, workspaces, internet access, and more. Our department reimbursed the gifts we gave research participants and provided access to numerous professors and professionals whom we were able to consult.