In July, we presented a work-in-progress paper at the sixth AAAI Conference on Human Computation and Crowdsourcing (HCOMP). In this paper we took a digital hermeneutics approach to understand which visual attributes and semantics drive the creation of narratives. We present insights from a nichesourcing study in which humanities scholars remix keyframes and video fragments into micro-narratives, i.e., (sequences of) GIFs. To support narrative creation by humanities scholars, specific video annotations are needed: (1) annotations that capture both literal and abstract connotations of video material, and (2) annotations that are coarse-grained, i.e., focused on keyframes and video fragments rather than full-length videos. The main findings of the study are used to facilitate the creation of narratives in DIVE+, a digital humanities exploratory search tool. In previous DIVE+ crowdsourcing experiments, we used the CrowdTruth metrics and methodology to gain a better understanding of events.
Our presentation started with a one-minute pitch (see the slide above) and continued with a poster and demo session.