rank-based-evaluation

Code for the paper "On the Ambiguity of Rank-Based Evaluation of Entity Alignment or Link Prediction Methods" (https://arxiv.org/abs/2002.06914)

MIT License


On the Ambiguity of Rank-Based Evaluation of Entity Alignment or Link Prediction Methods

This repository contains the code for the paper

On the Ambiguity of Rank-Based Evaluation of Entity Alignment or Link Prediction Methods
Max Berrendorf, Evgeniy Faerman, Laurent Vermue and Volker Tresp
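The ambiguity in the title concerns how ties between scores are resolved when computing ranks: the optimistic, pessimistic, and realistic (average) rank definitions can differ substantially for tie-heavy scorers. As an illustration of the three variants (a sketch, not code from this repository):

```python
def ranks(scores, true_idx):
    """Compute optimistic, realistic, and pessimistic rank of the true entity.

    scores: scores for all candidate entities (higher = better).
    true_idx: index of the correct entity.
    """
    true_score = scores[true_idx]
    better = sum(s > true_score for s in scores)     # strictly better candidates
    ties = sum(s == true_score for s in scores) - 1  # other candidates tied with the truth
    optimistic = better + 1             # true entity wins all ties
    realistic = better + ties / 2 + 1   # expected rank under random tie-breaking
    pessimistic = better + ties + 1     # true entity loses all ties
    return optimistic, realistic, pessimistic

# A constant scorer ties everything, so the three definitions disagree maximally:
print(ranks([0.0, 0.0, 0.0, 0.0], true_idx=2))  # (1, 2.5, 4)
```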

Installation

Set up and activate a virtual environment:

python3.8 -m venv ./venv
source ./venv/bin/activate

Install the requirements (inside this virtual environment):

pip install -U pip
pip install -U -r requirements.txt

MLflow

To track results with an MLflow server, start one first by running

mlflow server
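By default, mlflow server listens on http://127.0.0.1:5000. If the server runs elsewhere, the standard MLflow convention is to point clients at it via an environment variable; whether the scripts in this repository honor it depends on how they configure MLflow, so check the source first:

```shell
# Start the tracking server (listens on http://127.0.0.1:5000 by default).
mlflow server

# In the shell that runs the experiments, point MLflow clients at the server.
# This is the standard MLflow mechanism; verify the scripts do not hard-code a URI.
export MLFLOW_TRACKING_URI=http://127.0.0.1:5000
```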

GCN experiments on DBP15k

To run the GCN experiments on DBP15k, use

(venv) PYTHONPATH=./src python3 executables/adjusted_ranking_experiments.py
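The "adjusted ranking" in the script name refers to normalizing the mean rank by its expectation under random scoring, which makes results comparable across candidate set sizes. A minimal sketch of this idea, assuming the adjusted mean rank index is defined as 1 - (MR - 1) / (E[MR] - 1) with E[MR] = (n + 1) / 2 for n candidates (see the paper for the exact definition):

```python
def adjusted_mean_rank_index(ranks, num_candidates):
    """Adjusted mean rank index: 1 for perfect ranking, ~0 for random scoring.

    ranks: realistic ranks of the true entities (1-based).
    num_candidates: number of candidates each rank was computed against.
    """
    mean_rank = sum(ranks) / len(ranks)
    expected_mean_rank = (num_candidates + 1) / 2  # mean rank of a uniform random scorer
    return 1.0 - (mean_rank - 1.0) / (expected_mean_rank - 1.0)

print(adjusted_mean_rank_index([1, 1, 1], num_candidates=100))     # 1.0 (perfect)
print(adjusted_mean_rank_index([50.5, 50.5], num_candidates=100))  # 0.0 (random-level)
```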

The results are logged to the running MLflow instance. Once finished, you can summarize the results and reproduce the visualizations by running

(venv) python3 executables/summarize.py

Degree investigations

To rerun the experiments investigating the correlation between node degrees, matchings, and entity representation norms, run

(venv) PYTHONPATH=./src python3 executables/degree_investigation.py
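As a toy illustration of the kind of statistic this script computes (the actual experiments operate on the DBP15k graphs and learned GCN representations; the graph and embeddings below are made up), the correlation between node degrees and representation norms can be measured with a plain Pearson coefficient:

```python
import math
from collections import Counter

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy undirected graph as an edge list, plus invented 2-d "representations".
edges = [(0, 1), (0, 2), (0, 3), (1, 2)]
degree = Counter(node for edge in edges for node in edge)
embeddings = {0: (3.0, 4.0), 1: (1.0, 2.0), 2: (2.0, 1.0), 3: (0.5, 0.5)}

nodes = sorted(degree)
degrees = [degree[n] for n in nodes]
norms = [math.hypot(*embeddings[n]) for n in nodes]
print(pearson(degrees, norms))  # close to 1: high-degree nodes have larger norms here
```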