Session 13

Friday 14:00 - 15:30

High Tor 2

Chair: Michael Pidd

Modeling linked cultural events: design and application

  • Claartje Rasterhoff

University of Amsterdam

This paper discusses the promises and pitfalls of linking historical data on cultural events. Events play a key role in historical scholarship, and they have gained even more prominence with the rise of the digital humanities. Many digital projects built around events, however, employ them merely as devices for structuring data collections and do not explicitly aim to develop analytical frameworks in relation to data collection and modeling. In this paper, we discuss the conceptual and practical requirements for such a framework on the basis of pilot projects on Dutch cinematic, musical, and theatrical events across the period 1600-2000.

A systematic analysis of cultural events requires a data structure that allows for querying the connections between people, places, times, genres, titles, etc. Many datasets on historical European music, theatre, and film are now publicly available online. Those that contain programming information are, at least to some extent, already event-based. In theory they invite researchers to systematically analyze historical cultural life internationally, cross-sectorally, and within broader local contexts. However, developing a data model that supports analyses across time, place, and cultural activities, and across different datasets, is highly complex. The data are heterogeneous in scale and scope, and normalizing across datasets is tricky.

Fully harmonizing and linking all the relevant datasets is therefore impossible, but the structure of linked data provides a way to query heterogeneous data without enforcing an overarching ontology. Moreover, for comparative and cross-sectoral research, the event data can be linked internally as well as to external knowledge bases by means of shared vocabularies. We further demonstrate that conceptualizing cultural events such as concerts or theatrical performances in a linked data framework is not just a technical solution: this approach acknowledges the performative and interactive nature of cultural products and activities within their broader historical contexts.
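
As a minimal sketch of what such a framework could look like in practice (the vocabulary, identifiers, and example values below are illustrative assumptions, not the pilot projects' actual model), a single theatrical event can be expressed as linked data with rdflib, tied to an external knowledge base through a shared identifier, and queried with SPARQL:

```python
# Illustrative sketch only: namespaces, identifiers, and values are invented.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDF, XSD

SCHEMA = Namespace("https://schema.org/")
EX = Namespace("https://example.org/events/")        # hypothetical project namespace

g = Graph()
g.bind("schema", SCHEMA)

event = EX["performance-001"]                        # hypothetical event identifier
g.add((event, RDF.type, SCHEMA.TheaterEvent))
g.add((event, SCHEMA.name, Literal("Example theatrical performance", lang="en")))
g.add((event, SCHEMA.startDate, Literal("1785-01-03", datatype=XSD.date)))

venue = EX["venue-001"]                              # hypothetical venue identifier
g.add((event, SCHEMA.location, venue))
# A shared external identifier (here Wikidata) is what enables cross-dataset linking.
g.add((venue, OWL.sameAs, URIRef("http://www.wikidata.org/entity/Q727")))  # Amsterdam

# Query connections between events, dates, and places without one overarching ontology.
query = """
    PREFIX schema: <https://schema.org/>
    SELECT ?name ?date WHERE {
        ?e a schema:TheaterEvent ; schema:name ?name ; schema:startDate ?date .
    }
"""
for row in g.query(query):
    print(row.name, row.date)
```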

Bernoulli-Euler Online: Presentation of Early Modern Mathematical Correspondence on the Web

  • Tobias Schweizer,
  • Sepideh Alassi

Digital Humanities Lab, University of Basel

The Bernoulli-Euler Online project (BEOL) aims to integrate different editions, consisting of the writings of members of the Bernoulli dynasty and Leonhard Euler, into one platform. At the current stage of the project, we have been able to include several hundred letters exchanged between mathematicians of the early modern period. The data are stored in an RDF triplestore and accessed via the Knora API, using JSON-LD as the exchange format. Knora is a generic framework that is being extended for the specific needs of the BEOL project. BEOL defines its own OWL ontologies, which are derived from the more generic Knora ontologies.
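
As a rough sketch of this setup (the host, route, and property keys below are assumptions for illustration, not the documented Knora API), a client could retrieve one letter as JSON-LD over HTTP like this:

```python
# Illustrative only: host, route, and keys are hypothetical placeholders.
import json
import urllib.parse
import urllib.request

API_HOST = "https://beol.example.org"                         # hypothetical host
letter_iri = "https://example.org/letters/0001"               # hypothetical letter IRI

url = f"{API_HOST}/v2/resources/{urllib.parse.quote(letter_iri, safe='')}"
with urllib.request.urlopen(url) as response:
    doc = json.loads(response.read().decode("utf-8"))         # JSON-LD document

# JSON-LD preserves the RDF structure: the resource's class (derived from a
# generic Knora class) and its property IRIs arrive as plain JSON keys.
print(doc.get("@type"))
print(doc.get("rdfs:label"))
```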

In our contribution, we present the BEOL platform, which is accessed through our graphical user interface, SALSAH, built on Angular 4. It displays the letters and their relations to authors, recipients, and mentioned persons, as well as to other letters and bibliographical items, along with a subject index. The letters' transcriptions often include mathematical notation (in LaTeX format), which is rendered using MathJax. Whenever available, high-quality facsimiles are displayed alongside the transcriptions, relying on the IIIF standard.
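
For the facsimiles, the IIIF Image API defines a fixed URI template ({identifier}/{region}/{size}/{rotation}/{quality}.{format}); the small sketch below assembles such request URLs against a hypothetical image server (the server address and image identifiers are invented):

```python
# Hypothetical IIIF image server; only the URI template follows the IIIF Image API.
IIIF_BASE = "https://iiif.example.org/images"

def iiif_url(identifier: str, region: str = "full", size: str = "max",
             rotation: str = "0", quality: str = "default", fmt: str = "jpg") -> str:
    """Build a IIIF Image API request URL for one facsimile page."""
    return f"{IIIF_BASE}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# A full page, and a detail region (x,y,width,height in pixels) at 50% size.
print(iiif_url("letter-0001-page-1"))
print(iiif_url("letter-0001-page-1", region="100,200,800,600", size="pct:50"))
```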

To use the full power of RDF, we developed a flexible way of querying RDF data that is similar to a SPARQL endpoint, but serves the data via the Knora API as JSON-LD. We call it the Knora Query Language (KnarQL). When the user enters search criteria, SALSAH generates a KnarQL query requesting a subgraph of the BEOL graph, which is then returned by the API and displayed within SALSAH.
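
Since the abstract does not spell out KnarQL's syntax, the sketch below only illustrates the general flow it describes, using plain SPARQL as a stand-in for KnarQL and an invented endpoint: the client turns the user's criteria into a graph query, posts it to the API, and receives a JSON-LD subgraph in return.

```python
# Illustrative flow only: plain SPARQL stands in for KnarQL; the endpoint and
# ontology terms are invented placeholders.
import json
import urllib.request

SEARCH_ENDPOINT = "https://beol.example.org/v2/search"        # hypothetical endpoint

def search_letters_by_recipient(recipient_iri: str) -> dict:
    """Request the subgraph of all letters sent to one person."""
    query = f"""
    CONSTRUCT {{ ?letter ?p ?o . }}
    WHERE {{
        ?letter a <https://example.org/ontology#Letter> ;
                <https://example.org/ontology#hasRecipient> <{recipient_iri}> ;
                ?p ?o .
    }}
    """
    request = urllib.request.Request(
        SEARCH_ENDPOINT,
        data=query.encode("utf-8"),
        headers={"Content-Type": "application/sparql-query",
                 "Accept": "application/ld+json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))    # JSON-LD subgraph

# Example call with an invented person IRI:
# subgraph = search_letters_by_recipient("https://example.org/persons/johann-bernoulli")
```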

Like Knora, SALSAH is developed in a generic way. The functionality developed for the BEOL corpus can be applied to other projects as well. Rather than relying on predefined routes on the server and custom response formats, we are developing SALSAH so that it can handle complex linked data serialised as JSON-LD.

What Do Academics Express About Their Sense of Self on Social Media? A Computational Linguistic Analysis

  • Paolo Casani

Centre for Digital Humanities, University College London

Digital communication technologies have become so sophisticated, persuasive, and intimate that they influence our very nature: our sense of self and how we express our identities. While acknowledging the tremendous communicative possibilities they have opened up, it is also important to critically engage with the influence they have on our personal sphere.

Focusing on multiple levels of analysis of a cohort of academics, in their roles as experts and educators, this research investigates their personal experience of ICTs and how they express aspects of the self on social media platforms. It combines qualitative and quantitative approaches in a mixed-methods design with two parallel phases. The first uses semi-structured interviews to gather testimonies about academics' relationship with ICTs, and then analyzes these data with grounded theory and NVivo in order to detect patterns and extract novel themes. The second employs computational linguistic techniques to study what academics say on social media platforms, specifically how they manifest the themes identified in the qualitative study.

This paper will mainly focus on the second phase: what academics express on Twitter, their preferred communication platform. On the one hand, it will detail the methods used, as well as the challenges faced by a digital humanist at a computing institute in Japan; on the other, it will outline some of the more significant findings. Recent studies in social psychology have shown that what people say online, at the scale of “big data”, makes it possible to capture and predict personality traits, using for instance the Five Factor Model and sentiment analysis. Accordingly, this paper will discuss the application of natural language processing (NLP) and machine learning techniques to a humanistic study. It will also describe how such computational techniques can inform and complement thick qualitative descriptions of subjective experience.
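
As a generic illustration of the kind of technique mentioned here (not the author's actual pipeline, and with invented example texts), off-the-shelf sentiment analysis can be applied to tweet-like text with NLTK's VADER model:

```python
# Generic illustration; the example texts are invented placeholders.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)    # one-off download of the VADER lexicon

tweets = [
    "Thrilled to share our new open-access article with students and colleagues!",
    "Another day lost to admin work; no time left for actual research.",
]

analyzer = SentimentIntensityAnalyzer()
for text in tweets:
    scores = analyzer.polarity_scores(text)   # neg / neu / pos / compound scores
    print(f"{scores['compound']:+.2f}  {text}")
```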