Data analysis in the humanities presents challenges of scale, interpretation, and communication distinct from the social sciences or sciences.
This seminar will explore the emerging practices of data analysis in the digital humanities from two sides: a critical perspective aiming to be more responsible readers of cultural analytics, and a creative perspective to equip you to perform new forms of data analysis yourself.
Our goal is to engage the emerging forms of data analysis taking place in humanities scholarship, both in terms of applying algorithms and in terms of better investigating the presuppositions and biases of the digital object. We’ll aim to come out much more sophisticated in the use of computational techniques and much better informed about how others might use them.
Some of the key questions we’ll aim to answer are:
A wide variety of types of data will be used, but we will focus particularly on methods for analyzing texts in the context of other methods. If your interests lie elsewhere, don’t worry too much–as you’ll learn, most of the textual approaches we’ll consider are easily adaptable for (and in many cases, originally developed for) other sources of data.
Over the course of the semester, you should work to develop your own collection of data. Working with these texts will allow us to ask more sophisticated questions about large collections of documents of scholarly importance.

Course Goals
Be able to contribute to debates about the place of data analysis in the humanities from both a technical and theoretical perspective, in a way that lets you responsibly elicit “data” (or as Johanna Drucker would have it, “capta”) out of more humanistic stores of knowledge.
Acquire proficiency in the manipulation, transformation, and graphic presentation of data in the R programming language for use in the context of exploratory data analysis and presentation.
Know the appropriate conditions for using, and be able to use, some of the major machine learning algorithms for data classification, clustering, and dimensionality reduction.
Execute projects creatively deploying and combining these methods in ways that contribute to humanistic understanding.
This course will have you writing some code in the R language. There is an extensive debate about whether digital humanists need to learn to code, which we’re not going to engage in; the fact of the matter is simply that if you want to do data analysis in the humanities, coding will often be the only way to realize your personal vision; and if you want to build resources in the humanities that others might want to analyze, you’ll need to know what sophisticated users want to do with your tools to make them work for them.
I have no expectation that anyone will come out of this a full-fledged developer. In fact, I hope that by doing some actual scripting, you’ll come to see that these debates over learning to code brush over a lot of intermediate stages. We’ll focus in particular on developing skills less in full-fledged “programming” than in “scripting.” That means instructing a computer in every stage of your workflow: using a language rather than a Graphical User Interface (GUI), which may be almost the only kind of program you’ve used before. This takes more time at first, but has some extraordinary advantages over working in a GUI:
First off: you don’t have to do R. If you already know python and want to build on that, it is possible to do almost everything in this class using pandas for dataframe analysis and altair for visualization. At some point, I may try to rewrite the whole class to support either.
In this class, you will have to do some coding as well as just thinking about data analysis in the humanities. If you’ve never coded before, this will be frustrating from time to time. (In fact, if you’ve done a lot of coding before, it will still be frustrating!)
We’ll be working entirely in the “R” language, developed specifically for statistical computing. This has three main advantages for the sort of work that historians do:
It is easy to download and install through the program RStudio. This makes it easy to do “scripting,” rather than true programming, where you can test your results step by step. It also means that R takes the least time to get from raw data to pretty plots of anything this side of Excel. RStudio also offers a number of features that make it easier to explore data interactively.
It has a set of packages we’ll be using for data analysis. These packages, whose names you will see scattered through this text, are ggplot2, tidyr, dplyr, and the like. These are not core R libraries, but they are widely used and offer the most intellectually coherent approach to data analysis and presentation of any computing framework in existence. That means that even if you don’t use these particular tools in the future, working with them should help you develop a more coherent way of thinking about what data is from the computational side, and what you as a humanist might be able to do with it. These tools are rooted in a long line of software based on making it easy for individuals to manipulate data: read the optional source on the history of database populism to see more. The ways of thinking you get from this will serve you well in thinking about relational databases, structured data for archives, and a welter of other sources.
It is free: both “free as in beer,” and “free as in speech,” in the mantra of the Free Software Foundation. That means that it–like the rest of the peripheral tools we’ll talk about–won’t suddenly become inaccessible if you lose a university affiliation.
Different computer languages serve different purposes. If you have ever taken an introductory computer science course, you might have learned a different language, like python, Java, C, or Lisp. Although computing languages are equivalent in a certain, abstract sense, they each channel you towards thinking in particular ways.
Which of these languages is best? It depends, obviously, on what you want to do. If you only learn a single language, there’s a strong argument that it should be Python, which is a widespread, swiss-army-knife type language that can frequently run quite quickly. But Python generally promotes a kind of thinking about how you can get a task done.
What R–especially tidyverse R–does best is let you abstract back from thinking about programming to thinking about data: exploratory data analysis that operates on a particular base class, the ‘data frame.’ We’ll talk about this more in Chapter 3; the point is that it provides a coherent, basic language for describing any data set in terms of groupings, summary statistics, and visualization.
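To make that concrete, here is a minimal sketch of a data-frame pipeline using dplyr and ggplot2 on R’s built-in mtcars data (the dataset is just a stand-in; nothing about it is specific to this course):

```r
library(dplyr)
library(ggplot2)

# Group a data frame and compute summary statistics per group.
by_cyl <- mtcars %>%
  group_by(cyl) %>%
  summarize(mean_mpg = mean(mpg), n = n())

# Visualize the grouped summary: one bar per cylinder count.
ggplot(by_cyl, aes(x = factor(cyl), y = mean_mpg)) +
  geom_col() +
  labs(x = "Cylinders", y = "Average miles per gallon")
```

The same three moves–group, summarize, visualize–apply to any data frame, whether the rows are cars, novels, or census records.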
The closest analogues to these in other languages are less elegant and less well thought out. Python has a widely used tool called pandas for analyzing data that is fast, powerful, and effective. But it is also more challenging for beginners than it need be. If you Google problems with pandas, you’ll be confronted with a variety of competing approaches;1 R for data science has been a little less confusing.
One thing you can’t do in this course, though, is rely on the out-of-the-box approaches prevalent in many DH programs. ArcGIS or QGIS may be the best way to make maps, and Gephi the best way to do network analysis. But as this is a course in data analysis, I want you to think about the fundamental operations of cartography and network analysis as simply subsets of a broader field, which is hard to see from the confines of a single-purpose tool. All of these things are possible in R. And unlike graphical tools, working in a language saves your workflow. If you make a map with laboriously positioned points in ArcGIS, your operations aren’t open for inspection. In R, though, every step you take and every move you make can be preserved. This is called reproducible research, and it is among the most important contributions you can make when working collaboratively.
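As a tiny illustration of a scripted, reproducible map–assuming ggplot2 and its companion package maps are installed–every step from data to image lives in code that anyone can rerun:

```r
library(ggplot2)

# map_data() (which draws on the 'maps' package) returns country
# outlines as an ordinary data frame of coordinates.
world <- map_data("world")

# Draw the outlines; every choice here is recorded in code, not clicked.
ggplot(world, aes(x = long, y = lat, group = group)) +
  geom_polygon(fill = "grey90", color = "grey40") +
  coord_quickmap()
```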
This course works alongside of an online textbook that I will try to keep up to date with what we’re working on.
The exercises exist as part of that textbook.
In general, I’ll tell you to read a chapter or two each week. These are going to be messy, and there are some weeks they might be almost missing. It’s somewhere between an outline, a textbook, and a reference. But it keeps in one place both the topics you need to cover and the worksets we’ll be using, alongside code that you can use, copy, and paste.
Do not expect the sections to be online before Tuesday around 5:00pm.
Since we’re using R for this course, I’m putting the materials for it online as an R “package.” That means a bundle of code and data that you can install to your local computer and use.
To use this, you’ll need to install R and the RStudio environment that it uses. Some instructions I think should work for this are here.
That package is online at https://github.com/HumanitiesDataAnalysis/HumanitiesDataAnalysis. But in general, there should be no reason to access it there; instead, you will install it within RStudio on your machine or on any machine you work on. (The computers in Data Services at Bobst, for example, should have RStudio on them; you can just pull up this website, run the code below, and have the basic things you need to work with in this class.)
The purpose of this is twofold.
if(!require(remotes)) {install.packages("remotes")}
remotes::install_github("HumanitiesDataAnalysis/HumanitiesDataAnalysis", update = FALSE)
There are a few other R packages associated with this course that we’ll use as we go along, by me and others. They will be installed in similar ways.
In general, you’ll then be able to start working by adding the following text to the beginning of your code (make sure it’s inside an R Markdown code block):
library(HumanitiesDataAnalysis)
library(tidyverse)
Remember to keep this on hand; you will probably need to rerun some of these commands quite often.
This is an unconventional methods course, because we’ll be looking at two very different kinds of methods: literature and code.
You must attend class each week having completed the assigned readings and ready to discuss them. This may be a tiny class; let me know in advance if you are going to be missing.
To help consolidate your programming abilities, there will be weekly problem sets. The first can be completed in any form you like; later ones should be e-mailed to me as R Markdown documents. (This is the format I’ll be sending them to you in.)
They should be completed weekly, and e-mailed to me before the start of class. Problem sets are required but ungraded–I understand that you may not be able to complete them every week. The purpose is to try.
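For reference, a bare-bones R Markdown document looks something like this (the title and prose are placeholders):

````markdown
---
title: "Problem Set"
output: html_document
---

A sentence or two of prose explaining what you did.

```{r}
library(tidyverse)
mtcars %>% count(cyl)
```
````

Prose goes outside the chunks; code goes inside the ```{r} fences, and RStudio's Knit button runs it all and renders the result.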
Once we get our feet wet, I’ll ask you to post results of data explorations online. If I were you, I’d do this on a blog or straight to social media. Also bring them to class. This can be on one of the large unstructured sets I provide, or another data source you work out with me in class.
Later we’ll be exploring a series of specific algorithms. Some we’ll go over in depth in class; others we’ll only touch on obliquely. Based on the data from class you find most interesting (or data of your own), we’ll work to determine which of those algorithms may make sense as a transformation. You’ll then write up a short version of the exploration for the course blog, present it briefly in class, and then revise your post in response to comments.
If you are taking this under the guise of a research seminar with your own materials, you should produce a multifaceted analysis with a reflective, methodological take on the data you bring to the class. This could either take the form of an explicit journal article for a digital humanities audience (I would mirror those in Digital Scholarship in the Humanities or Cultural Analytics) or a 10–20 page methodological appendix to a larger work (such as a dissertation) giving the details of an analysis that may take only a few pages in a more traditional work. You’ll consult with me over the semester about how best to integrate your sources with materials from class.
If you are taking it as a readings course, you still should create something. As we move into the later weeks of the semester, you should figure out which of the various data sets we’ve used may be particularly interesting and find a way to build out on the techniques and strategies to create something novel. Most likely, this will be an experiment along the lines of the “Quantitative Formalism” pamphlet we read later. In it, you will take advantage of a programming environment to combine some of the various methods and strategies we’ve learned, or to build out some new ones. Appropriate products might include a large-format print map, a set of blog posts exploring generic distances, or a poster proposal.
This is an advanced graduate seminar; I hope that the syllabus will change in response to your own interests and readings.
This flexibility may cause problems of “versioning”: what version of the syllabus should you believe? So for the record, the priority for what to do consists of:
Grading in this course is thorny. As graduate students, you should be starting to get a sense of what is important to you; I’m not going to quiz you on the parts of books that you find interesting.
Grading is based on how fully you do the following:
Notes: mostly we’ll be reading articles in this course available online. None are required for purchase. If you have difficulty obtaining any texts, please let me know as soon as possible. In week 1, you’ll read my spiel about what humanists need to understand when they read CS. My answer is, in general: you need to know what they did, but not how they did it. I’ve put some CS papers in this syllabus to expand your thinking about what’s possible. You should absolutely, positively, not aim to understand the process in a CS paper. As a rule, if you see a fancy equation in an article not written by a humanist, you can probably skip the whole section for the time being.
Introductions
Due Mon, Jan 24: Install R and RStudio on your computer. RStudio is a wrapper program around the R language that we’ll be using for almost every assignment.
What is (could be) Humanities Data Analysis?
Readings
Online text
Due Mon, Jan 31: Choose two datasets to discuss in class that are relevant to your research interests, so far as you’re able to find them.
One should be something that you can actually download, almost certainly in the form of a CSV or Excel File.
The other should be something that you know exists, but that you might not be fully able to work with yet.
For both of them, fill out the online spreadsheet. The goal here is to reduce this to a tabular dataset. Describe what each of the columns in this dataset would be.
Do not describe the dataset as a whole aside from the columns–see if you can capture it in the individual elements.
Due Mon, Jan 31: Try to finish the exercises for “Working in a Programming Language,” installing R and Rstudio.
Information %>% Data
Readings
Online text
Class agenda
Data Visualization
Readings
Online text
Practicum for next class
ggplot2
Related texts not to read
Due Wed, Feb 16: Counting things
No class: President’s Day
Counting, grouping, and accounting for how only things that get counted count.
description: A huge amount of work is just about finding interesting things to count. Often, sophisticated work can just be figuring out how to count something new. Here we look a little bit at how you can simply count something.
Readings
Online text
Related texts not to read
Practicum for next class
Making data work together
Readings
Online text
Practicum for next class
No class: Spring Break
Text as Data, 1
Readings
Practicum for next class: “Texts as Data” exercises.
Due Fri, Mar 25: Place on the course Slack two ggplot visualizations resulting from a join between two different datasets. Try to be goofy with one and serious with the other. You may use text fields if you want.
Text as Data, 2
Readings
Online text for this class session
Class agenda
Due Fri, Apr 01: Place on the course Slack two ggplot visualizations resulting from a join between two different datasets. Try to be goofy with one and serious with the other. You may use text fields if you want.
Space as Data
Readings
Online text for this class
Due Wed, Apr 06: Free exercise: use some bag-of-words method on texts of your own choosing and explore comparisons between subsets using PMI or Dunning. These can be full-text, XML, or–if you prefer–wordcounts for books from the HathiTrust as described in the online text. Post as images or tables to the Slack channel #getting-text-files.
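To sketch what a PMI comparison looks like in code–with invented counts standing in for whatever texts you choose–the measure is just the log ratio of a word’s rate in the subset to its rate in the whole corpus:

```r
# Hypothetical counts, for illustration only.
word_in_subset <- 50       # occurrences of the word in the subset
subset_total   <- 10000    # total tokens in the subset
word_in_corpus <- 120      # occurrences of the word in the whole corpus
corpus_total   <- 100000   # total tokens in the whole corpus

# Pointwise mutual information: how overrepresented is the word here?
pmi <- log2((word_in_subset / subset_total) /
            (word_in_corpus / corpus_total))
pmi  # positive values mean overrepresentation in the subset
```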
Dogs as Data
description: I think we need a little reboot, so we’ll focus on dogs for a little bit. Claim a possible question in the Slack as described there. It’s OK if you can’t fully realize what you want to do, but you must try something: post your questions and your broken code.
Readings
Due Mon, Apr 11: Download a shapefile or geojson from the Internet, read it into R, and make a map that you are confident no one has made before. Post in Slack.
Due Mon, Apr 11: Identify data/datasets you’ll be working with for the rest of the class
Supervised Learning and Predictive Models
note: From this point on, the weekly readings and topics are about specific applications of algorithms to different types of problems. To this point, everything we’ve done has been foundational–from here on out, it’s more about specific applications that you can pursue if you want, but don’t necessarily need to.
Class agenda
Readings
Online text: Classification.
Clustering, topic modeling, and unsupervised approaches
Readings
In class agenda
Due Mon, Apr 25: due
Due Mon, Apr 25: text
The Embedding Strategy and representation learning.
description: Modern machine learning requires data, but that data doesn’t just look like an XML or TEI representation. Instead, a particular trick for turning items into strings of numbers–the embedding strategy–has emerged as the dominant way for computers to represent information to themselves.
Readings
Assignment for this class
Online text
Going deep
Readings
Class agenda
I am indebted to a variety of people for contributions to this class. Those whose syllabi I have taken readings, ideas, and (in one case) a unit title from include Andrew Goldstone, Johanna Drucker, Lev Manovich, Jason Heppler, and Ted Underwood.
I’ve leaned especially heavily on Ryan Cordell’s 2017 offering of a version of this course at Northeastern University. Thanks also to the graduate students who took it in 2015 and 2019 at that institution.
I also gratefully acknowledge Andrew Goldstone’s contribution to the syllabus template.
Allison, Sarah, Ryan Heuser, Matthew L. Jockers, Franco Moretti, and Michael Witmore. “Quantitative Formalism: An Experiment (Stanford Literary Lab, Pamphlet 1).” Stanford: Stanford Literary Lab, January 15, 2011.
Blevins, C. “Space, Nation, and the Triumph of Region: A View of the World from Houston.” Journal of American History 101, no. 1 (2014): 122–47. https://doi.org/10.1093/jahist/jau184.
Daston, Lorraine, and Peter Galison. Objectivity. New York; Cambridge, Mass.: Zone Books; distributed by the MIT Press, 2007.
Drucker, Johanna. “Humanities Approaches to Graphical Display.” Digital Humanities Quarterly 5, no. 1 (2011). http://www.digitalhumanities.org/dhq/vol/5/1/000091/000091.html.
LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. “Deep Learning.” Nature 521, no. 7553 (May 2015): 436–44. https://doi.org/10.1038/nature14539.
Logan, Trevon D., and John M. Parman. “The National Rise in Residential Segregation.” The Journal of Economic History 77, no. 1 (March 2017): 127–70. https://doi.org/10.1017/S0022050717000079.
Michel, Jean-Baptiste, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K Gray, Joseph P Pickett, Dale Hoiberg, et al. “Quantitative Analysis of Culture Using Millions of Digitized Books.” Science (New York, N.Y.) 331, no. 6014 (January 14, 2011): 176–82. https://doi.org/10.1126/science.1199644.
Rosenberg, Daniel. “Data Before the Fact.” In Raw Data Is an Oxymoron, edited by Lisa Gitelman. Cambridge: MIT Press, 2013.
Tukey, John W. Exploratory Data Analysis. Addison-Wesley Series in Behavioral Science. Reading, Mass: Addison-Wesley Pub. Co, 1977.
Underwood, Ted, David Bamman, and Sabrina Lee. “The Transformation of Gender in English-Language Fiction.” Journal of Cultural Analytics, 2018. https://doi.org/10.22148/16.019.
Unsworth, John. “Knowledge Representation in Humanities Computing,” 2001. http://www.people.virginia.edu/~jmu2m/KR/KRinHC.html.
Witmore, Michael. “Text: A Massively Addressable Object,” December 31, 2010. http://winedarksea.org/?p=926.
Ten years ago, experts tended to pontificate that python was better than R because it had a small standard library, cleaner syntax, and promoted a single way to do things effectively. One of the great ironies of modern data science is that, for programming with data, the situation has almost completely reversed; the pandas library presents a↩︎