When Young People Make Threats on Social Media, Do They Mean It?
In New York City, law enforcement regularly monitors the social media use of Black, Indigenous, and people of color (BIPOC) youth, compiling binders of Twitter and Facebook posts to link them to crimes or gangs, according to reports from The New York Times.
Something as benign as liking a photo on Facebook can be used as evidence of wrongdoing in a trial, so when police officers misinterpret social media posts — which often include slang, inside jokes, song lyrics, and references to pop culture — it can lead to serious consequences.
SAFELab, a transdisciplinary research initiative at the Annenberg School for Communication and Penn’s School of Social Policy & Practice led by Desmond Upton Patton, the Brian and Randi Schwartz University Professor, has developed a new web-based app that teaches adults to look more closely at social media posts: InterpretMe.
The app provides social media training for educators, law enforcement, and the press.
“These are the people who come into contact with young people regularly and have influence over their lives,” said Siva Mathiyazhagan, research assistant professor and associate director of strategies and impact at SAFELab. “Yet many of them don’t have the cultural context to understand how young people talk to one another online.”
InterpretMe builds on insights the SAFELab team gained from working with youth at the Brownsville Community Justice Center, a community center in central Brooklyn designed to reduce crime and incarceration, where young people helped the team interpret and annotate social media posts made by their peers.
“The young people at the Brownsville Community Justice Center understood how emojis, slang, and hyper-local words are used online,” Mathiyazhagan said. “Their insights were key to building the platform.”
During InterpretMe training, users are placed in a fictional scenario in which they encounter a potentially harmful social media post, such as one suggesting a student is depressed or may become violent, and must decide how to react.
While walking through the scenario, users gather context about the post by, for example, looking at the young person’s previous posts or asking friends about their social life. At the end of a module, users must decide how to proceed, including what they’ll say to their editor or principal about the student, and are invited to reflect on their reasoning.
SAFELab tested the training with 60 teachers, 50 journalists, and 30 law enforcement officials in phase one.
Participants took surveys before and after the training to measure how bias might affect their interpretations of social media posts. Mathiyazhagan said that bias scores decreased across all groups after the training.
Next, SAFELab plans to incorporate machine learning into InterpretMe. The team has long been experimenting with AI. With the help of both computer scientists and formerly gang-involved youth in Chicago, they created a machine-learning model trained to detect gang signs, slang, local references, and emotion in the hopes of preventing violence.
While the model is based on data from Chicago, it could be expanded to include context for any area.
A single person might miss song lyrics in a Facebook post, but a machine trained on community insights could flag them and stop a misunderstanding from happening.
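To make that idea concrete, the sketch below shows one way a text classifier could be trained on community-annotated posts and used to flag ambiguous language for human review. It is a minimal illustration, not SAFELab’s actual model: the labels, example posts, and library choices are assumptions made for demonstration only.

```python
# Illustrative sketch only -- not SAFELab's model. It assumes a small set of
# posts annotated by community members with labels such as "lyrics", "benign",
# or "review", and trains a simple classifier that routes ambiguous posts to a
# human reviewer instead of treating them as threats.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical community-annotated examples (labels supplied by local youth).
posts = [
    "smoke in the city all summer",          # quoted song lyric
    "ops better stay off my block tonight",  # ambiguous, needs human review
    "lol that fit is fire fr",               # everyday slang, benign
    "we gon slide on em after school",       # ambiguous, needs human review
]
labels = ["lyrics", "review", "benign", "review"]

# TF-IDF features plus logistic regression: a deliberately simple baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

def triage(post: str) -> str:
    """Return a suggested action for a new post rather than a verdict."""
    predicted = model.predict([post])[0]
    if predicted == "review":
        return "flag for human review with local context"
    return f"likely {predicted}; no escalation suggested"

print(triage("they playing that smoke in the city song again"))
```

In a real system, the annotations would come from many more community members and the output would only ever be a prompt for a person to look closer, mirroring the human-in-the-loop approach the SAFELab team describes.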
“Through artificial intelligence, we might be able to not only speed up the interpretation process but also fill in cultural gaps.” – Siva Mathiyazhagan