Welcome to the CrowdTruth blog!

The CrowdTruth Framework implements an approach to machine-human computing for collecting annotation data on text, images and videos. The approach is focused specifically on collecting gold standard data for training and evaluating cognitive computing systems. The original framework was inspired by the IBM Watson project, where it was used to provide improved (multi-perspective) gold standard (medical) text annotation data for training and evaluating various IBM Watson components, such as Medical Relation Extraction, Medical Factor Extraction and Question-Answer passage alignment.

The CrowdTruth framework supports the composition of CrowdTruth gathering workflows, in which a sequence of micro-annotation tasks can be configured and sent out to a number of crowdsourcing platforms (e.g. CrowdFlower and Amazon Mechanical Turk) and applications (e.g. the expert annotation game Dr. Detective). The framework has a special focus on micro-tasks for knowledge extraction from medical text (e.g. medical documents from various sources, such as Wikipedia articles or patient case reports). The main steps in the CrowdTruth workflow are: (1) exploring and processing the input data, (2) collecting annotation data, and (3) applying disagreement analytics to the results. These steps are realised in an automated end-to-end workflow that can support continuous collection of high-quality gold standard data, with a feedback loop to all steps of the process. Have a look at our presentations and papers for more details on the research.
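
As a toy illustration of step (3), disagreement analytics, the sketch below scores how much each annotator agrees with the rest of the crowd on a single annotation unit, using cosine similarity over annotation vectors. The data and the helper function are invented for illustration; this is not the framework's actual API.

```python
import numpy as np

# Toy disagreement analytics: each worker's judgment on one annotation unit is a
# binary vector over the possible answers; agreement is the cosine similarity
# between a worker's vector and the aggregated vector of the other workers.
judgments = {
    "worker_a": np.array([1, 0, 1, 0]),
    "worker_b": np.array([1, 0, 0, 0]),
    "worker_c": np.array([1, 1, 1, 0]),
}

def cosine(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

unit_vector = sum(judgments.values())          # aggregated judgments for the unit
for worker, vec in judgments.items():
    rest = unit_vector - vec                   # everyone except this worker
    print(worker, round(cosine(vec, rest), 3))
```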

Digging into Military Memoirs

On the 8th and 9th of September the workshop “Digging into Military Memoirs” took place at the Royal Netherlands Institute of Southeast Asian and Caribbean Studies in Leiden. The workshop, organized by Stef Scagliola, was a great opportunity to get in close contact with researchers and historians working in fields such as oral history, interviews and cross-media analysis. During the workshop the participants experimented with digital technologies on a corpus of 700 published documents about the veterans in Indonesia.

The aim of the workshop was to explain to a group of around 20 historians the possibilities of Digital Humanities tools and methods. The workshop was divided into four sessions (Data Visualization, Linked Open Data, Text Mining and Crowdsourcing), each consisting of a short presentation and hands-on assignments to be performed individually or in groups. The main goal of each session was to inform the researchers about the most appropriate tools and applications to use at each stage of their research, in order to generate insights for their work faster and more efficiently.

The crowdsourcing session was developed and presented together with Liliana Melgar. We divided the session into two parts. In the first part, Liliana gave a brief overview of the current state of the art in crowdsourcing approaches in Digital Humanities and other fields. In the second part, the historians experimented with different examples of crowdsourcing tasks and further developed a project idea (based on their own interests) for which crowdsourcing would be a good candidate.

Sign up for the Watson Innovation Course!

Have you ever wondered how we could provide tourists in Amsterdam with the best experience? Now is your chance to develop ideas, business cases and real prototypes based on Watson to answer all the questions tourists have.

The Watson Innovation course is a collaboration between the Vrije Universiteit Amsterdam, the University of Amsterdam and IBM Netherlands. It offers a unique opportunity to learn about IBM Watson, cognitive computing and the role of such artificial intelligence systems in a real-world, big-data context. Students from the Computer Science and Economics faculties will join their complementary efforts and creativity in cross-disciplinary teams to explore the business and innovation potential of such technologies. Visit the course page to find out all the details.

Crowdsourcing brainstem tumors at Lowlands 2016


Brainstem tumors are a rare form of childhood cancer for which there is currently no cure. The Semmy Foundation aims to increase the survival of children with this type of cancer by supporting scientific research. The Center for Advanced Studies at IBM Netherlands is supporting this research by developing a cognitive system that allows doctors and researchers to analyse MRI scans more quickly and to detect anomalies in the brainstem more accurately.

In order to gather training data, a crowdsourcing event was held at Lowlands, a three-day music festival that took place from 19 to 21 August 2016 and welcomed 55,000 visitors. At the festival's science fair, IBM had a booth that hosted both this research and a showcase of the weather stations of the TAHMO project with TU Delft.


In the crowdsourcing task, the participants were asked to draw the shape of the brainstem and tumor in an MRI scan. Gathering data on whether a particular layer of a scan contains the brainstem, and determining its size, should allow a classifier to recognize the tumors. Furthermore, annotator quality can be measured with the CrowdTruth methodology by analysing the precision of the drawn edges in relation to the participants' alcohol and drug use, which we also collected. The hypothesis is that people under the influence can still make valuable contributions, but that these are of lower quality than those of sober people. This may shed light on the reliability of online crowd workers, since it is unknown under what conditions they make their annotations.


The initial results in the heatmap of drawn pixels give an indication of the overall location of the brainstem, but further analysis of the individual scans will follow in order to measure worker quality and generate 3D models.
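
To make the planned analysis concrete, here is a minimal sketch (with synthetic data, not the actual scans or pipeline) of how drawings can be aggregated into a heatmap and how each participant's drawing can be scored against the majority shape.

```python
import numpy as np

# Synthetic stand-in for the participants' drawings: each drawing is a binary
# mask over one scan layer (1 = pixel included in the drawn brainstem shape).
rng = np.random.default_rng(0)
H, W, n_workers = 64, 64, 5
masks = [(rng.random((H, W)) > 0.7).astype(int) for _ in range(n_workers)]

# Heatmap: how many participants included each pixel in their drawing.
heatmap = np.sum(masks, axis=0)

# Simple per-participant quality score: Jaccard overlap with the majority shape.
majority = (heatmap >= n_workers / 2).astype(int)
for i, mask in enumerate(masks):
    intersection = np.logical_and(mask, majority).sum()
    union = np.logical_or(mask, majority).sum()
    score = intersection / union if union else 0.0
    print(f"participant {i}: overlap with majority = {score:.2f}")
```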

Big Data in Society Summerschool


From the 2nd to the 16th of July we organized the Big Data in Society Summerschool at the Vrije Universiteit Amsterdam. As part of our Collaborative Innovation Center with IBM, we presented an introduction to the technical and theoretical underpinnings of IBM Watson and discussed the use of big data and its implications for society. We looked at examples of how the original Watson system can be adapted to new domains and tasks, and presented the CrowdTruth approach for gathering training and evaluation data in this context. The participating students, who ranged from bachelor to PhD level, said they learned a lot from the lectures and found the practical hands-on sessions very useful.

ESWC 2016 Trip Report

From May 29th until June 2nd 2016, the 13th Extended Semantic Web Conference took place in Crete, Greece. CrowdTruth was represented by Oana Inel, who presented her paper “Machine-Crowd Annotation Workflow for Event Understanding across Collections and Domains”, and Benjamin Timmermans, who presented his paper “Exploiting disagreement through open-ended tasks for capturing interpretation spaces”, both in the PhD Symposium.


The Semantic Web group at the Vrije Universiteit Amsterdam was very well represented, with plenty of papers in the workshops and the main conference. The paper on CLARIAH by Rinke, Albert, Kathrin and others won the best paper award at the Humanities & Semantic Web workshop. Here are some of the topics and papers that we found interesting during the conference.

EMSASW: Workshop on Emotions, Modality, Sentiment Analysis and the Semantic Web
In the Workshop on Emotions, Modality, Sentiment Analysis and the Semantic Web, a keynote talk was given by Hassan Saif, titled “Sentiment Analysis in Social Streams, the Role of Context and Semantics”. He explained that sentiment analysis is essentially the extraction of the polarity of an opinion. With Web 2.0, sharing opinions has become easier, which increases the potential of sentiment analysis. Before polarity can be determined, the opinions first have to be found through opinion mining, which is an integral part of sentiment analysis. Hassan compared several semantic approaches to sentiment analysis: SentiCircles, which does not rely on the structure of texts but on semantic representations of words in a context-term vector; Sentilo, an unsupervised, domain-independent semantic framework for sentence-level sentiment analysis; and sentic computing, a multi-disciplinary approach to concept-level sentiment analysis that uses both the contextual and conceptual semantics of words and can achieve high performance on well-structured, formal text.

Jennifer Ling and Roman Klinger presented their work “An Empirical, Quantitative Analysis of the Differences between Sarcasm and Irony”. They explained the differences between irony and sarcasm quite clearly. Irony can be split into verbal irony, the use of words with a meaning other than the literal one, and situational irony, a situation in which things happen opposite to what is expected. They made clear that sarcasm is an ironic utterance designed to cut or give pain, and is thus a subtype of verbal irony. In tweets, they found that ironic and sarcastic tweets contain significantly fewer sentences than normal tweets.

PhD Symposium
Chiara Ghidini and Simone Paolo Ponzetto organized a very nice PhD Symposium. They took care to assign each student mentors working in related domains, which made the feedback highly relevant and valuable. We would like to thank our mentors Chris Biemann, Christina Unger, Lyndon Nixon and Matteo Palmonari for helping us improve our papers and for providing feedback during our presentations.

It was very nice to see that events attract a lot of interest in the Semantic Web community. Marco Rovera presented his PhD proposal “A Knowledge-Based Framework for Events Representation and Reuse from Historical Archives”, which aims to extract semantic knowledge about events from historical data and make it available to different applications. It was also nice to see that projects such as Agora and the Simple Event Model (SEM), developed at VU Amsterdam, were mentioned in his work.

Another very interesting research project, on using human computation and crowdsourcing to solve problems that are still very difficult for computers, was presented by Amna Basharat: “Semantics Driven Human-Machine Computation Framework for Linked Islamic Knowledge Engineering”. She envisioned hybrid human-machine workflows in which the skills and knowledge of crowds and experts, together with automated approaches, improve the efficiency and reliability of semantic annotation tasks in specialized domains.

Vocabularies, Schemas and Ontologies
Céline Alec, Chantal Reynaud and Brigitte Safar presented their work “An Ontology-driven Approach for Semantic Annotation of Documents with Specific Concepts”. This is a collaboration with The Weather Company, in which machine learning is used to classify the things you can, but also cannot, do at a destination, resulting in both positive and negative annotations. To achieve this, domain experts manually annotated documents and target concepts as either positive or negative. The target concepts were based on an ontology of tourist destinations with descriptive classes.

Open Knowledge Extraction Challenge
This year, the Open Knowledge Extraction Challenge consisted of two tasks, with two submissions selected for each task.

Task 1: Entity Recognition, Linking and Typing for Knowledge Base population

  • Mohamed Chabchoub, Michel Gagnon and Amal Zouaq: Collective Disambiguation and Semantic Annotation for Entity Linking and Typing. Their approach combines the output of Stanford NER with the output of DBpedia Spotlight as the basis for various heuristics to improve the results (e.g., filtering verb mentions, and merging overlapping mentions of a concept by always choosing the longest span; a small sketch of this heuristic follows the list). For mentions that were not disambiguated, they query DBpedia to extract the entity linked to each mention, while for entities that have no type, they take the Stanford type and translate it to the DUL typing. In the end, their system outperformed Stanford NER by about 20% on the training set, and similarly outperformed the semantic annotators.
  • Julien Plu, Giuseppe Rizzo and Raphaël Troncy: Enhancing Entity Linking by Combining Models. Their system is built on top of the ADEL system presented in last year's challenge. The new system architecture is composed of various models that are combined in order to improve entity recognition and linking. Combining various models is indeed a very good approach, since it is very difficult, if not impossible, to choose one model that performs well across all datasets and domains.
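
As a side note, a generic version of the longest-span merging heuristic mentioned in the first submission can be sketched as follows (this is our own illustration, not the authors' code):

```python
# Keep only the longest span in each group of overlapping mentions.
def merge_longest_span(mentions):
    """mentions: list of (start, end, text) tuples; returns the kept mentions."""
    kept = []
    for m in sorted(mentions, key=lambda m: m[1] - m[0], reverse=True):
        # Keep a mention only if it does not overlap an already-kept, longer one.
        if all(m[1] <= k[0] or m[0] >= k[1] for k in kept):
            kept.append(m)
    return sorted(kept, key=lambda m: m[0])

print(merge_longest_span([(0, 6, "Barack"), (0, 12, "Barack Obama"), (7, 12, "Obama")]))
# -> [(0, 12, 'Barack Obama')]
```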

Task 2: Class Induction and entity typing for Vocabulary and Knowledge Base enrichment

Semantic Sentiment Analysis Challenge
This challenge consisted of two tasks, one on polarity detection over 1 million Amazon reviews in 20 domains, and one on entity extraction from 5,000 sentences in two domains.

  • Efstratios Sygkounas, Xianglei Li, Giuseppe Rizzo and Raphaël Troncy: The SentiME System at the SSA Challenge. They used a bag of five classifiers to classify sentiment polarity; this bagging has been shown to result in better stability and accuracy of the classification. Four-fold cross-validation was used, and for each sample the ratio of positive and negative examples was preserved.
  • Soufian Jebbara and Philipp Cimiano: Aspect-Based Sentiment Analysis Using a Two-Step Neural Network Architecture. They retrieved word embeddings using a skip-gram model trained on the Amazon reviews dataset, and used the Stanford POS tagger with 46 tags. Sentics were retrieved from SenticNet, giving five sentics per word: pleasantness, attention, sensitivity, aptitude and polarity. They found that these sentics improve the accuracy of the classification and allow for fewer training iterations. The polarity was retrieved using SentiWordNet and used as a training feature. The results were limited because there was not enough training data.

IN-USE AND INDUSTRIAL TRACK
Mauro Dragoni presented his paper “Enriching a Small Artwork Collection through Semantic Linking”, a very nice project that highlights some of the issues that small museums and small museum collections encounter: data loss, no exposure, no linking to other collections, and no multilinguality. One of the issues they identified, poor linking to other collections, is one of the main goals of our DIVE+ project and system: creating an event-centric browser for linking and browsing across cultural heritage collections. Working with small or local museums is very difficult due to poor data quality, quantity and data management. Attracting outside visitors is also cumbersome, since the museums have little real exposure and collection owners need to translate the data into multiple languages. As part of the Verbo-Visual-Virtual project, this research investigates how to combine NLP with Semantic Web technologies in order to improve access to cultural information.

Rob Brennan presented the work on “Building the Seshat Ontology for a Global History Databank”, an expert-curated body of knowledge about human history. They used an ontology to model uncertain temporal variables, and coding conventions in a wiki-like syntax to deal with uncertainty and disagreement. This allows each expert to define their own interpretation of history: different types of brackets are used to indicate varying degrees of certainty and confidence. However, the tool does not show all the possible values, just the likely ones. Three graphs were used for this model: the real geospatial data, the provenance and the annotations. Different user roles are supported in their tool, which they plan to use to model trust and the reliability of their data.

NATURAL LANGUAGE PROCESSING AND INFORMATION RETRIEVAL
In the paper “Towards Monitoring of Novel Statements in the News”, Michael Färber stated that the increasing amount of information currently available on the web makes it imperative to search for novel information, not only relevant information. The approach extracts novel statements in the form of RDF triples, where novelty is measured with regard to an existing KB and semantic novelty classes. One limitation of the system, left as future work, is that it does not consider the timeline: old articles could be considered novel if their information is not in the KB.
As a side note, we also consider novelty detection an extremely relevant task given the overwhelming amount of information available, and we took the first steps in tackling this problem by combining NLP methods and crowdsourcing (see Crowdsourcing Salient Information from News and Tweets, LREC 2016).

The paper “Efficient Graph-based Document Similarity” by Christian Paul, Achim Rettinger, Aditya Mogadala, Craig Knoblock and Pedro Szekely deals with assessing the similarity or relatedness of documents. They rank documents by relevance and similarity, first searching for surface forms of words in the document collection and then looking for co-occurrences of words in documents. They integrate semantic technologies (DBpedia, Wikidata, xLisa) to solve problems arising from language ambiguity, heterogeneous data (news articles, tweets), and poor or missing metadata for images, videos and other media.

Amparo E. Cano presented the work on “Semantic Topic Compass – Classification based on Unsupervised Feature Ambiguity Gradation”. For classification they used lexical features such as n-grams, entities and Twitter-specific features, as well as semantic features from DBpedia. The feature space of a topic is semantically represented under the hypothesis that words have a similar meaning if they occur in a similar context. Related words for a given topic are found using Wikipedia articles. They found that enriching the data with semantic features improved the recall of the classification. For the evaluation, three annotators classified the data, and items on which they did not agree were removed from the dataset.

SEMANTIC DATA MANAGEMENT, BIG DATA, SCALABILITY
“Implicit Entity Linking in Tweets” by Sujan Perera, Pablo Mendes, Adarsh Alex, Amit Sheth and Krishnaprasad Thirunarayan presents a new approach for linking implicit entities by exploiting the facts and the known context around given entities. To achieve this, they use a temporal factor to disambiguate the entities present in tweets, i.e., they identify the domain entities that are relevant at time t.

Keynotes
On Tuesday, Jim Hendler gave a keynote speech titled “Wither OWL in a knowledge-graphed, Linked-Data World?”. The topic of the talk was the question whether OWL is dead or not. In 2010 he claimed that semantics were coming to search. Some of the companies back then, like Siri, had success, but many did not. SPARQL has been adopted in the supercomputing field, but that field is not yet a fan of RDF. Many large companies are also using semantic concepts, but not OWL; they are simply not linking their ontologies. Schema.org is now used in 40% of Google's crawls. It is simple, and that is good because it is used on 10 billion pages; its simplicity keeps its use consistent.
Ontologies and OWL are like Sauron's tower: if you let one inconsistency in, it may fall over completely. The RDFS view is different: it does not matter if things mean different things, it is just about linking things together. In Web 3.0 there are many use cases for ontologies in web apps at web scale. There is a lot of data but few semantics. This explains why RDFS and SPARQL are used, but not why OWL is not. The problem is that we cannot talk about the right things in OWL.

On Thursday, Eleni Pratsini, Lab Director of the Smarter Cities Technology Center at IBM Research Ireland, gave a keynote on “Semantic Web in Business – Are we there yet?”. Her work focuses on advancing science and technology in order to improve the overall sustainability of cities. Applying the Semantic Web in smart cities could be a key way to understand a city's needs and to empower it to take smart decisions for its population and environment.

We both pitched our doctoral consortium papers at the minute-of-madness session and presented them in the poster session. You can read more about Oana's presentation here, and Benjamin's presentation here.

By Oana Inel and Benjamin Timmermans