
Automatically Detecting Sexism in Social Media

Artificial Intelligence Developed to Identify Problematic Content

Sexism is a widespread social problem that has increased significantly in recent years, especially on social media.

Recent studies have shown that women in the public eye face particularly great challenges. Social media exacerbates the problem by lowering the barriers to engaging in verbal attacks.

Tool Detects Sexist Remarks

As part of the international EXIST competition (sEXism Identification in Social neTworks), the St. Pölten UAS and the AIT Austrian Institute of Technology developed a method for the automatic detection of sexist comments. The team ranked third out of 31 international competitors.

The tool is based on artificial intelligence: it uses natural language processing (NLP) and machine learning to semantically analyse social media posts and classify them accordingly.
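The article does not disclose the team's exact model, but the general approach can be illustrated with a minimal sketch: a pretrained multilingual transformer with a classification head that scores each post as sexist or non-sexist. The backbone name, the label set, and the helper function below are illustrative assumptions; the classification head is randomly initialised and would only produce meaningful predictions after fine-tuning on labelled data such as the EXIST training set.

```python
# Minimal sketch of transformer-based post classification (illustrative only).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "xlm-roberta-base"          # assumption: any multilingual encoder works similarly
LABELS = {0: "non-sexist", 1: "sexist"}  # assumed binary label set

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def classify(post: str) -> str:
    """Tokenise a social media post and return the predicted label."""
    inputs = tokenizer(post, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(classify("Example post to be screened"))
```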

“Making a meaningful contribution to solving our society’s problems has always been an important objective of our research. The automatic detection of sexist comments can help improve the discourse on social media, raise awareness of this issue, and support measures against discriminatory content”, says Matthias Zeppelzauer, head of the Media Computing research group at the St. Pölten UAS’ Institute of Creative\Media/Technologies.

Categorisation of Content

Distinguishing between various categories of sexist comments, as well as telling them apart from ironic and sarcastic statements, is a major challenge for the automatic detection of sexist content. Posts from users of the platforms Twitter and Gab, provided by the EXIST competition, formed the data basis for the classification.

The project not only succeeded in distinguishing between sexist and non-sexist content, it also proposed a more detailed categorisation of sexist content.

The posts, which were available in English and Spanish, were classified according to their content and automatically assigned to the following categories: Ideology and Inequality, Stereotypes and Power, Objectification, Sexual Violence, Misogyny, Non-Sexual Violence.
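In practice, this fine-grained task can be framed as multi-class classification over the categories listed above. The sketch below is again only illustrative: the category names mirror the article's list, while the additional "non-sexist" class, the backbone model, and the label mapping are assumptions, and the classification head is untrained until fine-tuned on the EXIST data.

```python
# Illustrative sketch of fine-grained categorisation of sexist content.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CATEGORIES = [
    "non-sexist",              # assumed catch-all class, not named in the article
    "ideology and inequality",
    "stereotypes and power",
    "objectification",
    "sexual violence",
    "misogyny",
    "non-sexual violence",
]
id2label = dict(enumerate(CATEGORIES))
label2id = {name: idx for idx, name in id2label.items()}

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=len(CATEGORIES),
    id2label=id2label,
    label2id=label2id,
)
model.eval()

def categorise(post: str) -> str:
    """Return the fine-grained category predicted for a post."""
    inputs = tokenizer(post, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return id2label[int(logits.argmax(dim=-1))]
```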

Also considering subtle forms

“In the detection of sexist content, we made a special effort to identify not only obvious forms of sexism but also subtle forms and allusions, which might be overlooked at first glance”, says Alexander Schindler, head of the NLP team at the AIT.

The project team comprised students and researchers from the AIT’s Center for Digital Safety & Security and the St. Pölten UAS and included:

  • Mina Schütz,
  • Jaqueline Böck,
  • Daria Liakhovets,
  • Djordje Slijepcevic,
  • Armin Kirchknopf,
  • Manuel Hecht,
  • Johannes Bogensperger,
  • Sven Schlarb,
  • Alexander Schindler and
  • Matthias Zeppelzauer.  

Collaboration with the AIT

Through the AIT team, the initiative was supported by the project defalsif-AI, which is funded by the Federal Ministry of Agriculture, Regions and Tourism (BMLRT).

Want to know more? Feel free to ask!

FH-Prof. Priv.-Doz. Dipl.-Ing. Mag. Dr. Matthias Zeppelzauer

Head of the Media Computing Research Group
Institute of Creative\Media/Technologies
Department of Media and Digital Technologies