TY - CONF
AB - Ranking plays a central role in a large number of applications driven by RDF knowledge graphs. Over the last years, many popular RDF knowledge graphs have grown so large that rankings for the facts they contain cannot be computed directly using the currently common 64-bit platforms. In this paper, we tackle two problems: computing ranks on such large knowledge bases efficiently and incrementally. First, we present D-HARE, a distributed approach for computing ranks on very large knowledge graphs. D-HARE assumes the random surfer model and relies on data partitioning to compute matrix multiplications and transpositions on disk for matrices of arbitrary size. Moreover, the data partitioning underlying D-HARE allows the execution of most of its steps in parallel. As very large knowledge graphs are often updated periodically, we tackle the incremental computation of ranks on large knowledge bases as a second problem. We address this problem by presenting I-HARE, an approximation technique for calculating the overall ranking scores of a knowledge graph without the need to recalculate the ranking from scratch at each new revision. We evaluate our approaches by calculating ranks on the 3 × 10^9 and 2.4 × 10^9 triples from Wikidata and LinkedGeoData, respectively. Our evaluation demonstrates that D-HARE is the first holistic approach for computing ranks on very large RDF knowledge graphs. In addition, our incremental approach achieves a root mean squared error of less than 10^−7 in the best case. Both D-HARE and I-HARE are open-source and are available at: https://github.com/dice-group/incrementalHARE.
AU - Desouki, Abdelmoneim Amer
AU - Röder, Michael
AU - Ngonga Ngomo, Axel-Cyrille
ID - 15921
KW - Knowledge Graphs
KW - Ranking
KW - RDF
SN - 9781450368858
T2 - Proceedings of the 30th ACM Conference on Hypertext and Social Media - HT '19
TI - Ranking on Very Large Knowledge Graphs
ER -
TY - CONF
AB - We present methods to answer two basic questions that arise when benchmarking optimization algorithms. The first is: which algorithm is the 'best' one? The second is: which algorithm should I use for my real-world problem? Both questions are connected, and neither is easy to answer. We present methods which can be used to analyse the raw data of a benchmark experiment and derive some insight regarding the answers to these questions. We employ the presented methods to analyse the BBOB'09 benchmark results and present some initial findings.
AU - Mersmann, Olaf
AU - Preuss, Mike
AU - Trautmann, Heike
ID - 46405
KW - benchmarking
KW - multidimensional scaling
KW - consensus ranking
KW - evolutionary optimization
KW - BBOB test set
SN - 3642158439
T2 - Proceedings of the 11th International Conference on Parallel Problem Solving from Nature: Part I
TI - Benchmarking Evolutionary Algorithms: Towards Exploratory Landscape Analysis
ER -