# BiDyn: Bipartite Dynamic Representations for Abuse Detection

BiDyn is a method for transductive node classification on dynamic bipartite graphs, with specialized modules for detecting abusive behavior in online communities at scale.

## Motivation

Abusive behavior, from trolling to fraudulent product reviews, damages online retail websites and communities. Detecting such abuse is challenging due to the large-scale, dynamic nature of online interaction. A successful abuse detection system must navigate three major real-world challenges:
• Complex dynamic online interactions. Online interactions often take the form of a dynamic bipartite graph in which users interact with site items, such as pages or subcommunities, over time.
• Large-scale interaction data. Online communities can have hundreds of millions of interactions.
• Limited labeled examples of abuse. Human moderation is expensive, and abusive behavior is rare relative to normal behavior, thus collecting sufficiently many positive instances of abuse is difficult.

*Figure: Left, degree of abusive and non-abusive users in an e-commerce network as a function of time; abusive users (red) have more fluctuating activity levels than normal users (blue), which remain relatively constant. Right, abusive nodes in the e-commerce network have systematically lower log-probability than normal nodes under a graph autoencoder model.*

These phenomena indicate that a successful abuse detection system must integrate temporal and graph-based models of abuse at scale.

## Method

Given a dynamic, bipartite graph of user and item nodes, as well as a subset of user nodes labeled as abusive or non-abusive, BiDyn outputs a prediction of whether each remaining user is abusive.
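Concretely, the input can be represented as a list of timestamped user–item interactions plus partial user labels. The structures below are an illustrative sketch of this setting, not the repository's actual data format:

```python
# Hypothetical minimal representation of the BiDyn problem setting.
# A dynamic bipartite graph is a set of timestamped (user, item) edges,
# with abuse labels observed for only a subset of users.
events = [
    (0, 5, 1.0),   # (user_id, item_id, timestamp)
    (0, 7, 2.5),
    (1, 5, 3.0),
]
labels = {0: 1, 1: 0}     # observed labels: 1 = abusive, 0 = normal
unlabeled_users = {2, 3}  # transductive task: predict a label for these users
```

The task is transductive: all interactions, including those of unlabeled users, are visible at training time, and the model outputs an abuse prediction for every unlabeled user.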

BiDyn iteratively learns embeddings through alternating item and user updates. In the item round, a graph convolution is applied to each item’s neighbors. In the user round, a recurrent neural network is applied to each user’s neighbors. Improved user representations give rise to improved item representations and vice versa; finally, the user representations are used to make a binary prediction on whether each user is abusive. This training scheme improves efficiency by 10x over alternative dynamic graph modeling approaches (TGAT, DyRep), and allows BiDyn to run on datasets with more than 100M edges.
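The alternating scheme can be sketched in plain PyTorch. The module below is an illustrative assumption about the structure, not the released implementation: the item round mean-aggregates user-neighbor embeddings through a linear layer, the user round runs a GRU over each user's time-ordered item-neighbor embeddings, and a final linear head scores each user.

```python
import torch
import torch.nn as nn

class AlternatingBiDyn(nn.Module):
    """Sketch of BiDyn-style alternating item/user updates (hypothetical API)."""

    def __init__(self, dim):
        super().__init__()
        self.item_lin = nn.Linear(dim, dim)                  # item round: graph convolution
        self.user_rnn = nn.GRU(dim, dim, batch_first=True)   # user round: RNN over neighbors
        self.classifier = nn.Linear(dim, 1)                  # binary abuse prediction

    def item_round(self, user_h, item_neighbors):
        # item_neighbors: per-item LongTensor of user-neighbor indices
        agg = torch.stack([user_h[nb].mean(dim=0) for nb in item_neighbors])
        return torch.relu(self.item_lin(agg))

    def user_round(self, item_h, user_neighbors):
        # user_neighbors: per-user LongTensor of time-ordered item-neighbor indices
        outs = []
        for nb in user_neighbors:
            seq = item_h[nb].unsqueeze(0)   # (1, num_events, dim)
            _, h_n = self.user_rnn(seq)     # final hidden state summarizes the sequence
            outs.append(h_n.squeeze(0).squeeze(0))
        return torch.stack(outs)

    def forward(self, user_h, user_neighbors, item_neighbors, rounds=2):
        # Alternate rounds: better user embeddings yield better item
        # embeddings and vice versa.
        for _ in range(rounds):
            item_h = self.item_round(user_h, item_neighbors)
            user_h = self.user_round(item_h, user_neighbors)
        return torch.sigmoid(self.classifier(user_h)).squeeze(-1)
```

Because each round only touches precomputed neighbor lists rather than unrolling the full temporal graph, this style of update keeps memory flat as depth grows, which is consistent with the efficiency claims above.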

Additionally, BiDyn employs a pretraining framework to learn efficiently given sparse training labels. We introduce a "random access query" pretraining task which learns static node representations that can be used to make dynamic predictions about whether an event occurs between a user and item in a given time range. Modeling the interaction behavior of nodes provides an initial separation of abusive and normal nodes before the main training phase.
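A minimal sketch of such a pretraining head follows, under the assumption that a query is scored from static user and item embeddings together with an encoding of the time range's endpoints; all names and the architecture are hypothetical, not the paper's exact design:

```python
import torch
import torch.nn as nn

class RandomAccessQuery(nn.Module):
    """Hypothetical sketch of a 'random access query' pretraining head:
    given a user, an item, and a time range, predict whether an
    interaction occurs between them within that range."""

    def __init__(self, num_users, num_items, dim):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)   # static user representation
        self.item_emb = nn.Embedding(num_items, dim)   # static item representation
        self.time_proj = nn.Linear(2, dim)             # encode (t_start, t_end)
        self.score = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def forward(self, users, items, t_ranges):
        z = torch.cat(
            [self.user_emb(users), self.item_emb(items), self.time_proj(t_ranges)],
            dim=-1,
        )
        return self.score(z).squeeze(-1)  # logit: does an event occur in the range?
```

Such a head would be trained with binary cross-entropy on observed interactions as positives and sampled non-interactions as negatives; the learned static embeddings then initialize the main training phase.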

## Experiments

We compare BiDyn with baselines (RNN, TGAT) on AUROC while varying the fraction of observed node labels on a dataset of Wikipedia edits. Even when fewer than 5% of node labels are observed, BiDyn achieves high AUROC relative to the baselines, demonstrating robustness to sparse training labels.

We compare GPU memory usage (left) and runtime (right) for BiDyn, RNN-GNN and TGAT. TGAT and RNN-GNN run out of memory for 3 or more layers (denoted by X), while memory usage of BiDyn remains constant with increasing depth. Furthermore, BiDyn achieves comparable or lower runtime than the baselines.

## Code

A PyTorch implementation is available for applying BiDyn to custom datasets and abuse detection settings. Code and implementation details can be found on GitHub.

## References

Bipartite Dynamic Representations for Abuse Detection. Andrew Z Wang, Rex Ying, Pan Li, Nikhil Rao, Karthik Subbian, Jure Leskovec. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2021.

If you use the paper or code, please use the following BibTeX citation:

```bibtex
@inproceedings{wang2021bipartite,
  title={Bipartite Dynamic Representations for Abuse Detection},
  author={Wang, Andrew Z and Ying, Rex and Li, Pan and Rao, Nikhil and Subbian, Karthik and Leskovec, Jure},
  booktitle={Proceedings of the 27th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},
  year={2021},
  organization={ACM}
}
```