Benjamin Timmermans

ESWC 2016 Trip Report

From May 29th until June 2nd 2016, the 13th Extended Semantic Web Conference took place in Crete, Greece. CrowdTruth was represented by Oana Inel, who presented her paper “Machine-Crowd Annotation Workflow for Event Understanding across Collections and Domains”, and by Benjamin Timmermans, who presented his paper “Exploiting disagreement through open-ended tasks for capturing interpretation spaces”, both in the PhD Symposium.


The Semantic Web group at the Vrije Universiteit Amsterdam was very well represented, with plenty of papers during the workshops and the conference. The paper on CLARIAH by Rinke, Albert, and Kathrin, among others, won the best paper award at the Humanities & Semantic Web workshop. Here are some of the topics and papers that we found interesting during the conference.

EMSASW: Workshop on Emotions, Modality, Sentiment Analysis and the Semantic Web
In the Workshop on Emotions, Modality, Sentiment Analysis and the Semantic Web, a keynote talk was given by Hassan Saif, titled “Sentiment Analysis in Social Streams, the Role of Context and Semantics”. He explained that sentiment analysis is nothing more than extracting the polarity of an opinion. Through Web 2.0, sharing opinions has become easier, increasing the potential of sentiment analysis. To find these opinions, opinion mining has to be performed first, which is an integral part of sentiment analysis. Hassan compared several semantic solutions for sentiment analysis: SentiCircles, which does not rely on the structure of texts but on semantic representations of words in a context-term vector; Sentilo, an unsupervised, domain-independent semantic framework for sentence-level sentiment analysis; and sentic computing, a multi-disciplinary tool for concept-level sentiment analysis that uses both the contextual and conceptual semantics of words and can achieve high performance on well-structured, formal text.
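
To make concrete what “extracting the polarity of an opinion” means in its very simplest form, here is a minimal lexicon-based sketch. It is purely illustrative and far simpler than the systems above (SentiCircles, Sentilo, sentic computing), which model contextual and conceptual semantics; the lexicon and function names are our own.

```python
# A tiny, hand-made polarity lexicon; real systems use far richer
# contextual and conceptual semantics than a flat word list.
LEXICON = {"good": 1, "great": 2, "nice": 1, "bad": -1, "terrible": -2}

def polarity(text):
    """Sum the lexicon scores of the words; the sign gives the polarity."""
    score = sum(LEXICON.get(word, 0) for word in text.lower().split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```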

Jennifer Ling and Roman Klinger presented their work titled “An Empirical, Quantitative Analysis of the Differences between Sarcasm and Irony”. They explained the differences between irony and sarcasm quite clearly. Irony can be split into verbal irony, the use of words for a meaning other than their literal one, and situational irony, where things happen opposite to what is expected. They made clear that sarcasm is an ironic utterance designed to cut or give pain; it is nothing more than a subtype of verbal irony. In tweets, they found that ironic and sarcastic tweets contain significantly fewer sentences than normal tweets.

PhD Symposium
Chiara Ghidini and Simone Paolo Ponzetto organized a very nice PhD Symposium. They took care to assign each student mentors who work in related domains, which made the feedback highly relevant and valuable. In this sense, we would like to thank our mentors Chris Biemann, Christina Unger, Lyndon Nixon and Matteo Palmonari for helping us improve our papers and for providing feedback during our presentations.

It was very nice to see that events attract high interest in the semantic web community. Marco Rovera presented his PhD proposal “A Knowledge-Based Framework for Events Representation and Reuse from Historical Archives”, which aims to extract semantic knowledge from historical data in the context of events and make it available for different applications. It was nice to see that projects such as Agora and the Simple Event Model (SEM), developed at VU Amsterdam, were mentioned in his work.

Another very interesting research project, on using human computation and crowdsourcing to solve problems that are still very difficult for computers, was presented by Amna Basharat: “Semantics Driven Human-Machine Computation Framework for Linked Islamic Knowledge Engineering”. She envisioned hybrid human-machine workflows in which the skills and background knowledge of crowds and experts, combined with automated approaches, improve the efficiency and reliability of semantic annotation tasks in specialized domains.

Vocabularies, Schemas and Ontologies
Céline Alec, Chantal Reynaud and Brigitte Safar presented their work “An Ontology-driven Approach for Semantic Annotation of Documents with Specific Concepts”. This is a collaboration with a weather company, where they use machine learning to classify what you can, but also cannot, do at a venue. This results in both positive and negative annotations. To achieve this, domain experts manually annotated documents and target concepts as either positive or negative. These target concepts were based on an ontology of tourist destinations with descriptive classes.

Open Knowledge Extraction Challenge
This year, the Open Knowledge Extraction Challenge was composed of two tasks, and two submissions were selected for each task.

Task 1: Entity Recognition, Linking and Typing for Knowledge Base population

  • Mohamed Chabchoub, Michel Gagnon and Amal Zouaq: Collective Disambiguation and Semantic Annotation for Entity Linking and Typing. Their approach combines the output of Stanford NER with the output of DBpedia Spotlight as the basis for various heuristics to improve the results (e.g., filtering verb mentions, and merging mentions of a given concept by always choosing the longest span). For the mentions that were not disambiguated, they query DBpedia to extract the entity linked to each such mention, while for the entities that have no types, they take the Stanford type and translate it to the DUL typing. In the end, their system outperformed Stanford NER by about 20% on the training set, and similarly outperformed the semantic annotators.
  • Julien Plu, Giuseppe Rizzo and Raphaël Troncy: Enhancing Entity Linking by Combining Models. Their system is built on top of the ADEL system presented in last year’s challenge. The new system architecture is composed of various models that are combined to improve entity recognition and linking. Combining various models is indeed a very good approach, since it is very difficult, if not impossible, to choose one model that performs well across all datasets and domains.
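
The longest-span merging heuristic from the first submission can be sketched roughly as follows; the function and the tuple layout are our own illustration, not the authors' code.

```python
def merge_mentions(mentions):
    """Resolve overlapping entity mentions by keeping the longest span.

    `mentions` is a list of (start, end, label) character-offset tuples,
    e.g. the pooled output of two extractors run over the same text.
    Illustrative only: the real system combines Stanford NER and DBpedia
    Spotlight output with several more heuristics.
    """
    # Consider longer spans first so they win over contained/overlapping ones.
    ordered = sorted(mentions, key=lambda m: m[1] - m[0], reverse=True)
    kept = []
    for start, end, label in ordered:
        overlaps = any(start < k_end and end > k_start
                       for k_start, k_end, _ in kept)
        if not overlaps:
            kept.append((start, end, label))
    return sorted(kept)
```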

Task 2: Class Induction and entity typing for Vocabulary and Knowledge Base enrichment

Semantic Sentiment Analysis Challenge
This challenge consisted of two tasks: one on polarity detection over one million Amazon reviews in 20 domains, and one on entity extraction over 5,000 sentences in two domains.

  • Efstratios Sygkounas, Xianglei Li, Giuseppe Rizzo and Raphaël Troncy: The SentiME System at the SSA Challenge. They used a bag of five classifiers to classify the sentiment polarity. This bagging has been shown to result in better stability and accuracy of the classification. Four-fold cross-validation was used, where for each fold the ratio of positive and negative examples was preserved.
  • Soufian Jebbara and Philipp Cimiano: Aspect-Based Sentiment Analysis Using a Two-Step Neural Network Architecture. They retrieved word embeddings using a skip-gram model trained on the Amazon reviews dataset. They used the Stanford POS tagger with 46 tags. Sentics were retrieved from SenticNet, resulting in five sentics per word: pleasantness, attention, sensitivity, aptitude and polarity. They found that these sentics improve the accuracy of the classification and allow for fewer training iterations. The polarity was retrieved using SentiWordNet and used as a training feature. The results were limited because there was not enough training data.
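
The stratified cross-validation used by SentiME, where each fold preserves the positive/negative ratio, can be sketched with a plain round-robin deal per class. This is our own stdlib-only illustration; in practice a library routine such as scikit-learn's `StratifiedKFold` does the same job.

```python
import random

def stratified_folds(samples, k=4, seed=42):
    """Split (text, label) samples into k folds, preserving the ratio
    of positive and negative examples in each fold."""
    rng = random.Random(seed)
    by_label = {}
    for sample in samples:
        by_label.setdefault(sample[1], []).append(sample)
    folds = [[] for _ in range(k)]
    for group in by_label.values():
        rng.shuffle(group)               # randomize order within each class
        for i, sample in enumerate(group):
            folds[i % k].append(sample)  # deal the class evenly over folds
    return folds
```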

Mauro Dragoni presented his paper “Enriching a Small Artwork Collection through Semantic Linking”, a very nice project that highlights some of the issues that small museums and small museum collections encounter: data loss, no exposure, no linking to other collections, and no multilinguality. One of the issues they identified, poor linking to other collections, is one of the main goals of our DIVE+ project and system: creating an event-centric browser for linking and browsing across cultural heritage collections. Working with small or local museums is very difficult due to poor data quality, quantity and data management. Attracting outside visitors is also very cumbersome, since such museums have no real exposure and collection owners need to translate the data into multiple languages. As part of the Verbo-Visual-Virtual project, this research investigates how to combine NLP with Semantic Web technologies to improve access to cultural information.

Rob Brennan presented the work on “Building the Seshat Ontology for a Global History Databank”, an expert-curated body of knowledge about human history. They used an ontology to model uncertain temporal variables, and coding conventions in a wiki-like syntax to deal with uncertainty and disagreement. This allows each expert to define their own interpretation of history. Different types of brackets are used to indicate varying degrees of certainty and confidence. However, the tool does not show all the possible values, just the likely ones. Three graphs were used for this model: the real geospatial data, the provenance, and the annotations. Different user roles are supported in their tool, which they plan to use to model trust and the reliability of their data.
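
To give a feel for how bracket conventions can encode uncertainty and disagreement, here is a small parsing sketch. The syntax below is a simplified stand-in of our own invention, not Seshat's actual grammar: square brackets for an uncertainty range, curly braces for disputed values from different experts.

```python
import re

def parse_coded_value(raw):
    """Parse an illustrative bracket-coded variable value.
      '[300,400]' -> an uncertainty range
      '{300;500}' -> a set of disputed values (expert disagreement)
      '350'       -> a single confident value
    """
    raw = raw.strip()
    range_match = re.fullmatch(r"\[\s*(\d+)\s*,\s*(\d+)\s*\]", raw)
    if range_match:
        low, high = map(int, range_match.groups())
        return {"kind": "range", "low": low, "high": high}
    disputed_match = re.fullmatch(r"\{\s*(.+?)\s*\}", raw)
    if disputed_match:
        values = [int(v) for v in disputed_match.group(1).split(";")]
        return {"kind": "disputed", "values": values}
    return {"kind": "exact", "value": int(raw)}
```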

In the paper “Towards Monitoring of Novel Statements in the News”, Michael Färber stated that the increasing amount of information currently available on the web makes it imperative to search for novel information, not only relevant information. The approach extracts novel statements in the form of RDF triples, where novelty is measured with regard to an existing KB and semantic novelty classes. One limitation of the system, which is left as future work, is that it does not consider the timeline: old articles could be considered novel if their information is not in the KB.
As a side note, we also consider novelty detection an extremely relevant task given the overwhelming amount of information available, and we made the first steps in tackling this problem by combining NLP methods and crowdsourcing (see Crowdsourcing Salient Information from News and Tweets, LREC 2016).

The paper “Efficient Graph-based Document Similarity” by Christian Paul, Achim Rettinger, Aditya Mogadala, Craig Knoblock and Pedro Szekely deals with assessing the similarity or relatedness of documents. They rank documents based on their relevance/similarity by first performing a search for surface forms of words in the document collection and then looking for co-occurrences of words in documents. They integrate semantic technologies (DBpedia, Wikidata, xLisa) to solve problems arising from language ambiguity: dealing with heterogeneous data (news articles, tweets), and poor or missing metadata for images and videos, among others.
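
The first step of that ranking idea, searching for surface forms and scoring documents by how many they share with the query, can be sketched as below. This is a toy stand-in for the paper's graph-based scoring; the names and the set-based representation are our own.

```python
def rank_documents(query_terms, documents):
    """Rank documents by how many of the query's surface forms they
    contain. `documents` maps a document id to the set of terms
    annotated in it; `query_terms` is the set of query surface forms."""
    scores = {doc_id: len(query_terms & terms)
              for doc_id, terms in documents.items()}
    # Highest overlap first; ties keep insertion order (sorted is stable).
    return sorted(scores, key=lambda doc_id: scores[doc_id], reverse=True)
```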

Amparo E. Cano presented the work on “Semantic Topic Compass – Classification based on Unsupervised Feature Ambiguity Gradation”. For classification they used lexical features such as n-grams, entities and Twitter features, as well as semantic features from DBpedia. The feature space of a topic is semantically represented under the hypothesis that words have a similar meaning if they occur in similar contexts. Related words for a given topic are found using Wikipedia articles. They found that enriching the data with semantic features improved the recall of the classification. For the evaluation, three annotators classified the data, and the data on which they did not agree was removed from the dataset.

“Implicit Entity Linking in Tweets” by Sujan Perera, Pablo Mendes, Adarsh Alex, Amit Sheth and Krishnaprasad Thirunarayan presents a new approach for linking implicit entities by exploiting the facts and the known context around given entities. To achieve this, they use a temporal factor to disambiguate the entities present in tweets, i.e., they identify the domain entities that are relevant at time t.

On Tuesday, Jim Hendler gave a keynote speech titled “Wither OWL in a knowledge-graphed, Linked-Data World?”. The topic of the talk was the question whether OWL is dead or not. In 2010 he claimed that semantics were coming to search. Some of the companies back then, like Siri, had success, but many did not. SPARQL has been adopted in the supercomputing field, but they are not yet fans of RDF. Many large companies are also using semantic concepts, but not OWL; they are simply not linking their ontologies. Schema.org is now used in 40% of Google crawls. It is simple, and this is good because it is used in 10 billion pages. Its simplicity keeps the use consistent.
Ontologies and OWL are like Sauron’s tower: if you let one inconsistency in, it may fall over completely. The RDFS view is different: it does not matter if things mean different things, it is just about linking things together. In the Web 3.0 there are many use cases for ontologies in web apps at web scale. There is a lot of data but few semantics. This explains why RDFS and SPARQL are used, but not why OWL is not. The problem is that we cannot talk about the right things in OWL.

On Thursday, Eleni Pratsini – Lab Director, Smarter Cities Technology Center, IBM Research – Ireland – gave a keynote on “Semantic Web in Business – Are we there yet?”. Her work focuses on advancing science and technology to improve cities’ overall sustainability. Applying the semantic web in smart cities could be the main way to understand a city’s needs and further empower it to take smart decisions for its population and environment.

We both pitched our doctoral consortium papers at the minute of madness session and presented them in the poster session. You can read more about Oana’s presentation here, and Benjamin’s presentation here.

By Oana Inel and Benjamin Timmermans

Exploiting disagreement through open ended tasks for capturing interpretation spaces


I presented my doctoral consortium paper titled “Exploiting disagreement through open ended tasks for capturing interpretation spaces” at the PhD Symposium of ESWC 2016.

An important aspect of the semantic web is that systems have an understanding of the content and context of text, images, sounds and videos. Although research in these fields has progressed over the last years, there is still a semantic gap between the available multimedia data and the human-annotated metadata describing its content. This research investigates how the complete interpretation space of humans regarding the content and context of this data can be captured. The methodology consists of using open-ended crowdsourcing tasks that optimize the capturing of multiple interpretations, combined with disagreement-based metrics for evaluating the results. These descriptions can be used meaningfully to improve the retrieval and recommendation of multimedia, to train and evaluate machine learning components, and to support the training and assessment of experts.


Best poster award for CrowdTruth at ICT OPEN 2016


On the 22nd of March we presented our latest work on CrowdTruth at the ICT.OPEN 2016 conference. We are happy to announce that our poster received the best poster award in the Human and the Machine track. Furthermore, Anca Dumitrache gave a presentation and pitched our poster, which resulted in the 2nd prize for best poster of the conference. It is a good sign that, out of the almost 200 posters, the importance of the CrowdTruth initiative was recognized.


CrowdTruth 2.0 released

Today we released version 2.0 of the CrowdTruth framework. In this update the data model of the platform has changed, so that data and crowdsourcing results can be managed and reused more easily. This enables several new features, such as project management and permissions: users can create projects and share their crowdsourcing jobs within these projects. The media search page has been updated to accommodate any type of data, so you can search through all the media in the platform. Another improvement is the automatic setup of new installations, which makes it easier for new users to get started straight away. You can find a list of the changes in the change log. Try out the platform and get started!

Scientific poster design


Recently the CrowdTruth team got a paper accepted at ICT Open 2016. As part of this upcoming conference, I visited a masterclass on scientific poster design at NWO. The class was given by two professional designers.

The most important thing in your poster is having a clear message. This can be achieved by creating a visual focus: do not give all images the same size, but guide the reader visually with the placement and size of text and images. The main message should be readable from far away, while finer details can be set smaller for when the reader is up close. To achieve this, there should be only one main focus point to start from.

After establishing a starting point, there should be a clear hierarchy throughout the poster. The number of levels of information should be reduced as much as possible, to four or five at most. Most of the content of your paper is not suitable for the poster: use only the most suitable parts, and optionally include more detailed text in a small font size at the bottom. Organize the message systematically by using a grid, so that all elements are aligned along it.


Typography is another very important but often forgotten aspect of poster design. Choose one proper typeface that is highly readable and has enough options to vary in size and style. Still, try to minimize the differences in font size, matching the hierarchical structure of the content. Write easy-to-read sentences, and make sure the lines are neither too short nor too long, to improve readability.

The colors of the poster are also important. Do not place text over a picture or image with many different colors; it usually makes the text too difficult to read. Applying a drop shadow to solve this is not a good solution; try to never use shadows. Instead, focus on a high contrast between the text and the background color.

For images and graphics, apply the same rules as for text color. Choose the most important image and decide whether it communicates with your audience. It is better to choose one powerful image than many random ones. The chronological order of the poster can be changed by positioning the main element in an unusual position, but then this focus point and the continuing hierarchy must be very clear.

Finally, with scientific posters it is best to simply put all logos in a clear line at the bottom, in a color bar. They could also be placed vertically, although this is less common and tends to take up more space. When in doubt, put something big in the poster to get the attention of the audience: make the poster stand out from the 200 other ones in the same room.

Watson Innovation Course Closing Event


On Friday 22 January, Gerard Smit (CTO for IBM Belgium, Netherlands, Luxembourg) and Prof. Hubertus Irth (Vice Dean and Research Director of the Vrije Universiteit Faculty of Earth and Life Sciences and Faculty of Sciences) officially launched the collaboration between the IBM Collaborative Innovation Center (CIC) and the VU Faculty of Sciences. At the event, students of the Watson Innovation course pitched their projects to a mixed crowd of students, scientists, engineers and business clients.

In the Watson Innovation course, students used Watson to answer questions about Amsterdam, with Amsterdam Marketing providing the data and use case. The app LocalBuddy was selected as the winner, and the students received a prize for their achievement from Amsterdam Marketing.

Watson Innovation Course Presentations


Today the students of the first Watson Innovation course by the Vrije Universiteit Amsterdam and IBM Netherlands presented their group work at the VU Intertain Lab. Representatives from Amsterdam Marketing and IBM Netherlands were present to evaluate the ideas, applications and business plans of the groups. The groups have been working on their Watson-powered apps since last November, using the Watson Engagement Advisor and IBM Bluemix. The most interesting project groups will be selected to present their work again next Friday at IBM.


Cognitive Computing and Watson lecture


Today Lora Aroyo presented the first lecture of the Watson Innovation course at the Vrije Universiteit. The topic of the lecture was Cognitive Computing, IBM Watson and looking inside the mind of Watson. There was a high attendance of motivated bachelor and master students with various backgrounds, such as artificial intelligence, computer science, business administration, business analytics and information sciences. We are looking forward to seeing them develop their ideas with Watson.

Sign up now for the first edition of the Watson Innovation Course!

Have you ever wondered how we could provide tourists in Amsterdam with the best experience? Now is your chance to develop ideas, business cases and real prototypes with Watson to answer all the questions tourists have.

The Watson Innovation course is a collaboration between the Vrije Universiteit, the University of Amsterdam and IBM. It offers a unique opportunity to learn about IBM Watson, cognitive computing and the meaning of such artificial intelligence systems in a real-world, big-data context. Students from the Computer Science and Economics faculties will join their complementary efforts and creativity in cross-disciplinary teams to explore the business and innovation potential of such technologies. Visit the course page to find out all the details.

Netherlands eScience Symposium


On Thursday the 9th of October, the Netherlands eScience Symposium took place in the Amsterdam Arena. This yearly event attracts scientists and researchers from many different disciplines. In the digital humanities track, Oana Inel of the CrowdTruth team gave a talk on the DIVE+ project. This is a digital cultural heritage project that provides innovative access to online collections, with the purpose of supporting digital humanities scholars and online exploration by the general public. The project is supported by the Netherlands eScience Center, and used CrowdTruth for the crowdsourcing of events in historical data. The talk, titled “Towards New Cultural Commons with DIVE+”, can be seen below.