Stanford CS224W: Resources


CS224W:
Social and Information Network Analysis
Autumn 2013

Pointers to data and code

Datasets

Stanford Large Network Dataset Collection

Coauthorship and Citation Networks

Internet Topology

  • AS Graphs: AS-level connectivities inferred from Oregon route-views, Looking glass data and Routing registry data

Stack Overflow

Yelp Data

  • Yelp Review Data: reviews of the 250 businesses closest to each of 30 universities, made available for students and academics to explore and research

Prosper peer to peer money lending dataset

  • Money Lending Data: Borrowers ask for loans and lenders bid (price, interest rate) on loans to fund them.

Youtube dataset

  • Youtube data: YouTube videos as nodes. Edge a->b means video b is in the related video list (first 20 only) of video a.
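As a rough illustration, assuming the crawl is distributed as a plain edge list with one "a b" pair per line (the file name and format here are assumptions, not necessarily the official release format), the related-video graph can be loaded as a directed graph with NetworkX:

    import networkx as nx

    # Assumed file layout: one "video_a video_b" pair per line, meaning
    # video_b appears in video_a's related-video list (first 20 only).
    G = nx.read_edgelist("youtube-links.txt", create_using=nx.DiGraph(), comments="#")

    print(G.number_of_nodes(), "videos,", G.number_of_edges(), "related-video edges")
    # Out-degrees are capped at 20 because only the first 20 related videos are kept.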

Amazon product copurchasing networks and metadata

  • Amazon Data: The data was collected by crawling the Amazon website and contains product metadata and review information about 548,552 different products (books, music CDs, DVDs, and VHS video tapes).

Wikipedia

  • Wikipedia page to page link data: A list of all page-to-page links in Wikipedia
  • DBPedia: The DBpedia data set uses a large multi-domain ontology which has been derived from Wikipedia.
  • Edits and talks: Complete edit history (all revisions, all pages) of Wikipedia from its inception until January 2008.

Movie Ratings

Who trusts whom data at Trustlet

Mark Newman's pointers

Munmun De Choudhury's pointers

  • Network data: Flickr Image Dataset, YouTube Dataset, Digg Dataset (Social Media), Engadget Dataset (online communities), Del.icio.us Dataset (Social bookmarking)

Reality Commons data

  • Mobile data: Several mobile data sets capturing the dynamics of communities of about 100 people each.

Stanford Foursquare Place Graph Dataset

  • Every day millions of people check in to the places they go on Foursquare and, in the process, create vast amounts of data about how places are connected to each other. We call this set of interconnections the Place Graph, and provide a sample of this data for 5 major US cities. This dataset contains metadata about 160k popular public venues, and 21m anonymous check-in transitions (or trips between venues). You'll have to sign an agreement to gain access; contact Jure for more information.
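A minimal sketch of how the check-in transitions could be aggregated into a weighted place graph, assuming a CSV export with from_venue and to_venue columns (the file name and column names are hypothetical):

    import csv
    from collections import Counter

    # Assumed layout: one anonymous check-in transition (trip) per row,
    # with hypothetical columns "from_venue" and "to_venue".
    trips = Counter()
    with open("foursquare_transitions.csv") as f:
        for row in csv.DictReader(f):
            trips[(row["from_venue"], row["to_venue"])] += 1

    # Each (venue_a, venue_b) pair with its trip count is one weighted,
    # directed edge of the Place Graph.
    for (a, b), count in trips.most_common(5):
        print(a, "->", b, count)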

Chatous

  • If you are interested in working on this project, please contact <kevin@chatous.com>.
  • Chatous is a text-based, 1-on-1 anonymous chat network that has seen 2.5 million unique visitors from over 180 different countries. Users can create a profile that contains a screen name, age, gender, location, and a short free-form "about me" field. After clicking the "new chat" button, users are matched up with one another in a text-based conversation. Interactions on Chatous include exchanging messages, sending/accepting a friend request, reporting an abusive user, and ending a conversation. In our dataset, we store all user profile information (and changes made to the profile), all actions taken by users on the site, as well as conversation content (in particular, conversation length and words used). Here are some suggested research questions that we think you could address:
    1. Predicting user "quality" - or general conversation tendency / likability as judged by the community as a whole. In particular, identifying users of poor quality is important because they rarely have long conversations and are overrepresented in the matching queue, thus affecting a large set of users. Using a user's profile information and past conversations, can we predict which users tend to be good conversationalists and which users generally engage in short or zero-length conversations?
    2. Predicting identity changes. A user who frequently changes profile information on Chatous, especially age/gender, is likely lying about their identity. We'd like to develop a system to predict which users are most likely to be lying about their age or gender - i.e., identify what type of behavior is linked with profile changes. Given a set of users, can we use information from their current profile and past interactions to predict whether they will tend to change key profile elements in the future?
    3. Evaluating validity of user reports. On Chatous, user moderation of the community is important for flagging abusive/spammy users. However, the tendency of users to report varies widely, and we have many false positives (reports that are unwarranted, people who are reported simply because they are on the platform a lot). We'd like to develop a system that can determine the accuracy of a user report (based on the reporting user's behavior, the reported user's behavior, and the total number of reports both users have sent/received). We hope this will enable us to remove some of the noise in user reports and more easily detect abusive users on the platform.
  • Dataset:
    • Two weeks of user interactions on our platform: ~80,000 users and ~8 million conversations
    • Graph structure consisting of users as nodes and conversations as weighted edges, with conversation length as the weight (see the sketch at the end of this section)
    • Additional metadata on each edge includes: the person who disconnected the conversation, the start time, the end time, and whether a "friendship" exists between the two users
    • User profiles consisting of screen name, age, gender, location, and "about me" (including all changes to a user's profile)
    • List of user reports (person reported, person reporting, conversation length, and all associated metadata)
  • Potential additional data:
    • Word vectors consisting of the words each user has used in a conversation
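A minimal sketch of the weighted conversation graph described under "Dataset" above, assuming conversations are available as (user_a, user_b, length) records (the records shown are illustrative, not real data):

    import networkx as nx

    # Illustrative records: (user_a, user_b, conversation length).
    conversations = [
        ("alice", "bob", 42),
        ("alice", "carol", 3),
        ("bob", "carol", 0),
    ]

    G = nx.Graph()
    for a, b, length in conversations:
        # If the same pair talks more than once, accumulate total conversation length.
        if G.has_edge(a, b):
            G[a][b]["weight"] += length
        else:
            G.add_edge(a, b, weight=length)

    # Weighted degree (total conversation length) is one crude signal for the
    # "user quality" question above.
    print(sorted(G.degree(weight="weight"), key=lambda x: -x[1]))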

Bitcoin

  • Bitcoin is a digital currency invented in 2008 that operates on a peer-to-peer system for transaction validation. This decentralized currency attempts to mimic physical currencies in that there is a limited supply of Bitcoins in the world, each Bitcoin must be "mined", and each transaction can be verified for authenticity. Bitcoins are used to exchange everyday goods and services, but the currency also has known ties to black markets, illicit drugs, and illegal gambling transactions. The system also leans heavily toward anonymization of behavior, though true anonymity is rarely achieved.
  • The Bitcoin dataset captures transaction-level information. For each transaction, there can be multiple senders and multiple receivers as detailed here. This dataset provides a challenge in that multiple addresses are usually associated with a single entity or person. However, some initial work has been done to associate keys with a single user by looking at transactions that are associated with each other (for example, if a transaction has multiple public keys as inputs, then a single user owns all of the corresponding private keys); see the sketch after the questions below. The dataset provides these known associations by grouping these addresses together under a single UserId (which then maps to the set of all associated addresses).
  • Key Challenge Questions:
    1. Can we detect bulk Bitcoin thefts by hackers? Can we track where the money went after thefts?
    2. Can we detect illicit transactions based on Bitcoin transaction behavior? What sort of graph patterns emerge?
    3. Can we detect attempts at money laundering (called a "mixing service" in Bitcoin)?
      1. Can we detect money laundering attempts and the people who use them? Note: current Bitcoin mixing services tend to mix Bitcoins amongst all the people who bother to use a mixing service, so does the mixing service actually obfuscate anything?
      2. Can we trace back the originator of these laundering attempts?
    4. Can we detect currency manipulation (hackers trying to destabilize Bitcoin currency exchanges to deflate prices)?
    5. Is Bitcoin gaining or losing traction among the general population for use as an everyday digital currency?
    6. It is Bitcoin best practice to generate and use a new address with every transaction. Is this practice followed? If not, then what can we learn from this?
    7. Can we identify and extract organizational behavior amidst the Bitcoin transactions?
    8. Can we determine which Bitcoin addresses belong to a single entity? While the initial pass over the data has yielded some resolution of entities, can we further improve this mapping?
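A minimal sketch of the multi-input address-linking heuristic mentioned above, using union-find over the input addresses of each transaction (the transactions shown are illustrative, not real data):

    from collections import defaultdict

    # Union-find over addresses: all input addresses of a single transaction
    # are assumed to be controlled by the same user (the multi-input heuristic).
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    # Illustrative transactions: each entry is a transaction's list of input addresses.
    transactions = [["addr1", "addr2"], ["addr2", "addr3"], ["addr4"]]
    for inputs in transactions:
        find(inputs[0])               # register single-input addresses too
        for addr in inputs[1:]:
            union(inputs[0], addr)

    # Group addresses by their root, mirroring the UserId -> addresses mapping
    # that the dataset ships with.
    users = defaultdict(set)
    for addr in parent:
        users[find(addr)].add(addr)
    print(dict(users))  # e.g. {'addr3': {'addr1', 'addr2', 'addr3'}, 'addr4': {'addr4'}}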

Software Tools

C++ library for working with massive network datasets (Windows, Linux, Mac)

Program for large network analysis (Windows or Linux via Wine)

Python package for the study of the structure of complex networks

Graph visualization software

Exploratory data analysis and visualization tool for graphs and networks

Software framework for information visualization (Linux, MacOSX, Windows)

Software for social network analysis (Windows)

Large-scale network analysis, modeling and visualization toolkit

Tools for fitting heavy-tailed distributions to data

Websites

Some websites that may be interesting to analyze:

Similar Courses