Science and Research Content

Elsevier announces winner of the 2017 Semantic Web Challenge

Elsevier, the information analytics business specialising in science and health, has announced the winner of the 2017 Semantic Web Challenge (SWC). The winner was announced at the International Semantic Web Conference held in Vienna, Austria, in October. The challenge and its prize were sponsored by Elsevier.

In 2017, the Semantic Web Challenge - the longest-running and most prestigious competition in the area - introduced a new format for measuring scientific progress in the field of artificial intelligence (AI) on the web. To enhance reproducibility and comparability, competing teams were measured not only against each other but also against the current state of the art, using a benchmarking platform built around the FAIR principles.

This year's challenge focused on knowledge graphs, both publicly and privately owned. Knowledge graphs are currently among the most prominent implementations of semantic web technologies.

The 2017 challenge was organised by the SWC Chairs: Dan Bennett, Thomson Reuters; Prof. Dr. Axel Ngonga, University of Paderborn; and Prof. Dr. Heiko Paulheim, University of Mannheim. The winner of the 2017 Semantic Web Challenge is IBM Socrates, developed by Michael Glass, Nandana Mihindukulasooriya, Oktie Hassanzadeh, and Alfio Gliozzo of IBM Research AI.

The winning team from IBM received the 'Big Elsevier Check', to the value of one Bitcoin (approximately USD 5,500 / EUR 4,740). All competing SWC teams will be featured in a special issue of the Journal of Web Semantics.

By now, concepts such as "Big Data Web Analytics" and "Knowledge Graphs" need no further explanation. This year, the SWC adjusted its annual format to measure and evaluate targeted and sustainable progress in this field. In 2017, competing teams were asked to perform two important knowledge engineering tasks on the web: fact extraction (knowledge graph population) and fact checking (knowledge graph validation). Teams were free to use arbitrary web sources as input, and an open set of training data was provided for them to learn from. A closed dataset of facts, unknown to the teams, served as the ground truth to benchmark how well they did.
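As an illustration of the two tasks (this is not part of the official challenge materials), the sketch below models a knowledge graph as a set of subject-predicate-object triples: population adds candidate triples extracted from web text, while validation assigns a confidence score to a stated fact. All names, the toy extraction pattern, and the scoring heuristic are hypothetical stand-ins for what a real competing system would do.

```python
from typing import Iterable, NamedTuple, Set


class Triple(NamedTuple):
    """A single fact in a knowledge graph, e.g. (Vienna, country, Austria)."""
    subject: str
    predicate: str
    obj: str


# Open training data: facts the teams may learn from (hypothetical examples).
training_graph: Set[Triple] = {
    Triple("Vienna", "country", "Austria"),
    Triple("Austria", "capital", "Vienna"),
}


def populate(web_sentences: Iterable[str]) -> Set[Triple]:
    """Fact extraction (knowledge graph population): turn raw web text into
    candidate triples. A real system would use relation extraction over NLP
    output; this toy version only recognises '<X> is the capital of <Y>'."""
    extracted: Set[Triple] = set()
    for sentence in web_sentences:
        parts = sentence.rstrip(".").split(" is the capital of ")
        if len(parts) == 2:
            city, country = parts
            extracted.add(Triple(country, "capital", city))
            extracted.add(Triple(city, "country", country))
    return extracted


def validate(fact: Triple, reference: Set[Triple]) -> float:
    """Fact checking (knowledge graph validation): return a confidence score
    in [0, 1] for a stated fact. Here simply 1.0 if the fact is already known,
    0.0 otherwise; real systems aggregate evidence from many web sources."""
    return 1.0 if fact in reference else 0.0


if __name__ == "__main__":
    candidates = populate(["Vienna is the capital of Austria."])
    print(candidates)
    print(validate(Triple("Vienna", "country", "Austria"), training_graph))
```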

The evaluation and benchmarking platform for the 2017 SWC is based on the GERBIL framework and powered by the HOBBIT project. Teams were measured on a clear definition of precision and recall, and their performance on both tasks was tracked on a leaderboard. All data and systems were shared according to the FAIR principles (Findable, Accessible, Interoperable, Reusable).
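To make the scoring concrete, here is a minimal sketch, not the challenge's official evaluation code, of how precision and recall can be computed for a set of extracted triples against a hidden ground-truth set; the example facts and variable names are hypothetical.

```python
from typing import Set, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object)


def precision_recall(predicted: Set[Triple],
                     ground_truth: Set[Triple]) -> Tuple[float, float]:
    """Precision = correct predictions / all predictions;
    recall = correct predictions / all ground-truth facts."""
    true_positives = len(predicted & ground_truth)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    return precision, recall


# Toy example: two of the three predicted facts are correct, and the system
# recovered two of the four facts in the hidden ground truth.
predicted = {("Vienna", "country", "Austria"),
             ("Austria", "capital", "Vienna"),
             ("Austria", "capital", "Graz")}
ground_truth = {("Vienna", "country", "Austria"),
                ("Austria", "capital", "Vienna"),
                ("Paderborn", "country", "Germany"),
                ("Mannheim", "country", "Germany")}
p, r = precision_recall(predicted, ground_truth)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.50
```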

The SWC organisers also provided a baseline system representing current progress in the area, so that teams competed not only against each other but also against the current state of the art.

