Although I’ve given up on historically professing myself, I still have a number of automated scripts for analyzing the state of the historical profession hanging around. Since a number of people have asked for updates, it seems worth doing. As a reminder, I’m scraping H-Net for listings. When I’ve looked at job ads from the American Historical Association’s website, they seem roughly comparable.

Blaming the humanities fields for their recent travails can seem as sensible as blaming polar bears for not cultivating new crops as the Arctic warms. It’s not just that it places the blame for a crisis in the fundamentally wrong place; it’s that it
It’s coming up on a year since I last taught graduate students in the humanities.

Last week we released a big data visualization in collaboration with the Berens Lab at the University of Tübingen. It presents a rich, new interface for exploring an extremely large textual collection.

Happy WebGPU Day Apr 06 2023

Yesterday was a big day for the Web: Chrome just shipped WebGPU without flags in the Beta for Version 113. Someone on Nomic’s GPT4All discord asked me to ELI5 what this means, so I’m going to cross-post it here—it’s more important than you’d think for both visualization and ML people. (thread)

This is a Twitter thread from March 14 that I’m cross-posting here. Nothing massively original below. It went viral because I was one of the first to extract the ridiculous paragraph below from the GPT-4 release, and because it expresses some widely shared concerns.

Marymount majors Mar 03 2023

Marymount–a small Catholic university in Arlington, Virginia–has recently been in the news for a draconian plan to eliminate a number of majors, ostensibly to better meet student demand. I recently learned the university leadership has been circulating one of my charts to justify the decision, so I thought I’d chime in on the context a bit. My understanding of the situation, primarily informed by the coverage in ARLNow, is that this seems like a bad plan, so I thought I’d take a quick look at the university’s situation.

I sure don’t fully understand how large language models work, but in that I’m not alone. Still, in the discourse over the last week about the Bing/Sydney chatbot, there’s one pretty basic category error I’ve noticed a lot of people making: thinking that there’s some entity you’re talking to when you chat with a chatbot. Blake Lemoine, the Google employee who torched his career over the misguided belief that a Google chatbot was sentient, was the first but surely not the last of what will be an increasing number of people thinking that they’ve talked to a ghost in the machine.

I attended the American Historical Association’s conference last week, possibly for the last time now that I’ve given up history professorin’. Since then, the collapse of hiring prospects in history has been on my mind more. See Erin Bartram, Kathryn Otrofsky and Daniel Bessner on the way that this AHA was haunted by a sense of terminal decline in the history profession. I was motivated to look a bit at something I’ve thought about several times over the years: what happens to people after receiving a PhD in history?

Hello again, RSS Jan 01 2023

The collapse of Twitter under Elon Musk over the last few months feels, in my corner of the universe, like something potentially a little more germinal; unlike in the various Facebook exoduses of the 2010s, I see people grasping towards different models of the architecture of the Web. Mastodon itself (I’ve ended up at @benmschmidt@vis.social for the time being) seems so obviously imperfect that its imperfections are almost a selling point; it’s so hard to imagine social media staying on a Rails application for the next decade that using it feels like a bet on the future, because everyone now knows they need to be prepared to migrate again.

New Directions Oct 26 2022

I’m excited to finally share some news: I’ve resigned my position on the NYU faculty and started working full time as Vice President of Information Design at Nomic, a startup helping people explore, visualize, and interact with massive vector datasets in their browser.

When you teach programming skills to people with the goal that they’ll be able to use them, the most important obligation is not to waste their time or make things seem more complicated than they are. This should be obvious. But when I’m helping humanists decide what workshops to take, reviewing introductory materials for classes, or browsing tutorials to adapt for teaching, I see the same violation of the principle again and again. Introductory tutorials waste enormous amounts of time vainly covering ways of accomplishing tasks that not only have absolutely no use for beginners, but which will confuse learners by making them

It’s not very hard to get individual texts in digital form. But working with grad students in the humanities looking for large sets of texts to do analysis across, I find that larger corpora are so hodgepodge as to be almost completely unusable. For humanists and ordinary people to work with large textual collections, they need to be distributed in ways that are actually accessible, not just open access.

I’ve never done the “Day of DH” tradition where people explain what, exactly, it means to have a job in digital humanities. But today looks to be a pretty DH-full day, so I think, in these last days of Twitter, I’ll give it a shot. (thread)

A Rose for Ruby Feb 27 2022

There are programming languages that people use for money, and programming languages people use for love. There are Weekend at Bernie’s/Jeremy Bentham corpses that you prop up for the cash, and there are “Rose for Emily” corpses you sleep with every night for decades because it’s too painful to admit that the best version of your life you ever glimpsed is not going to happen.

I’ve been spending more time in the last year exploring modern web stacks, and have started evangelizing for SvelteKit, which is a new-ish entry into the often-mystifying world of web frameworks. As of today, I’ve migrated this personal web site from Hugo, which I’ve been using the last couple years, to SvelteKit. Let me know if you encounter any broken links, unexpected behavior, accessibility issues, etc. I figured here I’d give a brief explanation of why SvelteKit, and how I did a Hugo-to-SvelteKit migration.

Scott Enderle is one of the rare people whose Twitter pages I frequently visit, apropos of nothing, just to read in reverse. A few months ago, I realized he had at some point changed his profile to include the two words “increasingly stealthy.” He had told me he had cancer months earlier, warning that he might occasionally drop out of communication on a project we were working on. I didn’t then parse out all the other details of the page—that he had replaced his Twitter mugshot with a photo of a tree reaching to the sky, that the last retweet was my friend Johanna introducing a journal issue about “interpretive difficulty”—the problems literary scholars, for all their struggles to make sense, simply can’t solve. I only knew—and immediately stuffed down the knowledge—that things must have gotten worse.

This article in the New Yorker about the end of genre prompts me to share a theory I’ve had for a year or so: that models at Spotify, Netflix, etc., are most likely not just removing artificial silos that old media companies imposed on us, but actively destroying genre without much pushback. I’m curious what you think.

I’ve been yammering online about the distinctions between different entities in the landscape of digital publishing and access, especially for digital scholarship on text. So I’ve collected everything I’ve learned over the last 10 years into one handy-to-use chart on a 10-year-old meme. The big points here are:

I mentioned earlier that I’ve been doing some work on the old Bookworm project as I see that there’s nothing else that occupies quite the same spot in the world of public-facing, nonconsumptive text tools.

I’ve recently been getting pretty far into the weeds about what the future of data programming is going to look like. I use pandas and dplyr in python and R respectively. But I’m starting to see the shape of something that’s interesting coming down the pike. I’ve been working on a project that involves scatterplot visualizations at a massive scale–up to 1 billion points sent to the browser. In doing this, two things have become clear:

Bookworm Caching Mar 07 2021

I used to blog everything that I did about a project like Bookworm, but have got out of the habit. There are some useful changes coming through the pipeline, so I thought I’d try to keep track of them, partly to update on some of the more widely used installations and partly

I last looked at the H-Net job numbers about a month ago.

Since then, the news isn’t exactly good, but it’s also probably as good as anyone could expect. For most of September and October, history jobs were at about 25% of their average for the 2010s; this was slightly worse than we’re seeing in the approximate numbers in–for instance–science jobs, where new job openings are at about 30% of their normal levels (Thanks to Dylan Ruediger at the AHA for passing along that link.)

History Jobs Update Oct 01 2020

Out of a train-wreck curiosity about what’s been happening to the historical profession, I’ve been watching the numbers on tenure-track hiring as posted on H-Net, one of the major venues for listing history jobs.

[Update 10-2: switching to US and Canada only. An earlier version of this included other countries, even though I said it didn’t.]

Circle Packing Sep 01 2020

I’ve been doing a lot of my data exploration lately on Observable Notebooks, which is–sort of–a Javascript version of Jupyter notebooks that automatically runs all the code inline. Married with Vega-Lite or D3, it provides a way to make data exploration editable and shareable in a way that R and python data code simply can’t be; and since it’s all HTML, you can do more interesting things.

Every year, I run the numbers to see how college degrees are changing. The Department of Education released the figures for 2019 this summer; these and next year’s are probably the least important that we’ll ever see, since they capture the weird period as the 2008 recession’s shakeout was wrapping up but before COVID-19 upended everything once again. But for completism, it’s worth seeing how things changed.

Ranking Graduate Programs

While I was choosing graduate programs back in 2005, I decided to come up with my own ranking system. I had been reading about the Google PageRank algorithm, which essentially imagines the web as a bunch of random browsing sessions that rank pages based on the likelihood that you–after clicking around at random for a few years–will end up on any given page. It occurred to me that you could model graduate school rankings the same way. It’s essentially a four-step process:
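The four steps themselves aren’t reproduced in this excerpt, but here’s a minimal sketch of the PageRank-style idea, assuming you have placement data of the form “program A’s PhDs get hired at program B”; the program names and counts below are made up purely for illustration.

```python
# A minimal sketch of the PageRank-style idea, not the original four-step recipe:
# treat PhD placements as links between programs, then rank programs by where a
# random "career walk" tends to end up. Names and counts are hypothetical.
import numpy as np

# placements[a][b] = number of PhDs from program a hired by program b (made up)
placements = {
    "Alpha U": {"Beta U": 3, "Gamma U": 1},
    "Beta U":  {"Alpha U": 2, "Gamma U": 2},
    "Gamma U": {"Alpha U": 1, "Beta U": 1},
}

programs = sorted(placements)
n = len(programs)
index = {p: i for i, p in enumerate(programs)}

# Column-stochastic transition matrix: column j says where program j's PhDs go.
T = np.zeros((n, n))
for src, dests in placements.items():
    total = sum(dests.values())
    for dst, count in dests.items():
        T[index[dst], index[src]] = count / total

# Power iteration with damping, exactly as in PageRank.
damping = 0.85
rank = np.full(n, 1 / n)
for _ in range(100):
    rank = (1 - damping) / n + damping * T @ rank

for program, score in sorted(zip(programs, rank), key=lambda x: -x[1]):
    print(f"{program}: {score:.3f}")
```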

As I often do, I’m going to pull away from various forms of Internet reading/engagement through Lent. This year, this brings to mind one of my favorite stray observations about digital libraries that I’ve never posted anywhere.

As part of the 2016 Republican Primary, Jeb! Bush released a website enabling exploration of e-mails related to his official accounts as governor of Florida in the early 2000s. This whole sentence has an antiquity to it; the idea of pre-emptive disclosure (in large part to contrast with his presumed general election opponent, Hillary Clinton) seems hopelessly antique. And at the time, it was criticized for accidentally disclosing all sorts of personal information, both stories and Social Security Numbers. It did not make Jeb! president. Anyhow, back then I downloaded Jeb!’s e-mails–and Hillary’s–to think about what sort of stuff historians will do with these records in the future.

(This is a talk from a January 2019 panel at the annual meeting of the American Historical Association. You probably need to know, to read it, that the MLA conference was simultaneously taking place about 20 blocks north.)

Disciplinary lessons

Web Migration Jun 30 2019

Since 2010, I’ve done most of my web hosting the way that the Internet was built to facilitate: from a computer under the desk in my office. This worked extremely well for me, and made it possible to rapidly prototype a lot of websites serving large amounts of data which could then stay up indefinitely; I have a curmudgeonly resistance to cloud servers, although I have used them a bit in the last few years (mostly for course websites where I wanted to keep student information separate from the big stew).

Some news: in September, I’ll be starting a new job as Director of Digital Humanities at NYU. There’s a wide variety of exciting work going on across the Faculty of Arts and Sciences, which is where my work will be based; and the university as a whole has an amazing array of programs that might be called “Digital Humanities” at another university, as well as an exciting new center for Data Science. I’ll be helping the humanities better use all the advantages offered in this landscape. I’ll also be teaching as a clinical associate professor in the history department.

Critical Inquiry has posted an article by Nan Da offering a critique of some subset of digital humanities that she calls “Computational Literary Studies,” or CLS. The premise of the article is to demonstrate the poverty of the field by showing that the new structure of CLS is easily dismantled by the master’s own tools. It appears to have succeeded enough at gaining attention that it clearly does some kind of work far out of proportion to the merits of the article itself.

I wrote this year’s report on history majors for the American Historical Association’s magazine, Perspectives on History; it takes a medium-term view of the significant hit the history major has taken since the 2008 financial crisis. You can read it here.

As part of the Creating Data project, I’ve been doing a lot of work lately with interactive scatterplots. The most interesting of them is this one about the full Hathi collection. But I’ve posted a few more I want to link to from here:

I have a new article on dimensionality reduction on massive digital libraries this month. Because it’s a technique with applications beyond the specific tasks outlined there, I want to link to a few things here.

New Site Oct 21 2018

I’m switching this site over from Wordpress to Hugo, which makes it easier for me to maintain.

It may also confuse the RSS feed a bit. This should hopefully be a one-time occurrence.

I have a new article in the Atlantic about declining numbers for humanities majors.

I put up a new post at Sapping Attention about how bad the decline in humanities majors has been since 2013. In short, it’s been bad enough to make me recant earlier statements of mine about the long-term health of the humanities discipline.

This is some real inside baseball; I think only two or three people will be interested in this post. But I’m hoping to get one of them to act out or criticize a quick idea. This started as a comment on Scott Enderle’s blog, but then I realized that Andrew Goldstone doesn’t have comments for the parts pertaining to him… Anyway.

I’ve gotten a couple e-mails this week from people asking advice about what sort of computers they should buy for digital humanities research. That makes me think there aren’t enough resources online for this, so I’m posting my general advice here. (For some other solid perspectives, see here.) For keyword optimization I’m calling this post “digital humanities.” But, obviously, I really mean the subset that is humanities computing, what I tend to call humanities data analysis. [Edit: To be clear, ] Moreover, the guidelines here are specifically tailored for text analysis; if you are working with images, you’ll have somewhat different needs (in particular, you may need a better graphics card). If you do GIS, god help you. I don’t do any serious social network analysis, but I think the guidelines below should work relatively well with Gephi.

Practically everyone in Digital Humanities has been posting increasingly epistemological reflections on Matt Jockers’ Syuzhet package since Annie Swafford posted a set of critiques of its assumptions. I’ve been drafting and redrafting one myself. One of the major reasons I haven’t is that the obligatory list of links keeps growing. Suffice it to say that this here is not a broad methodological disputation, but rather a single idea crystallized after reading Scott Enderle on “sine waves of sentiment.” I’ll say what this all means for the epistemology of the Digital Humanities in a different post, to the extent that that’s helpful.

Rate My Professor Feb 06 2015

Just some quick FAQs on my professor evaluations visualization: adding new ones to the front, so start with 1 if you want the important ones.

-3 (addition): The largest and in many ways most interesting confound on this data is the gender of the reviewer. This is not available in the set, and there is strong reason to think that men tend to have more men in their classes and women more women. A lot of this effect is solved by breaking down by discipline, where faculty and student gender breakdowns are probably similar; but even within disciplines, I think the effect exists. (Because more women teach at women’s colleges, because men teach subjects like military history that male students tend to take more often, etc.) Some results may be entirely due to this phenomenon (for instance, the overuse of “the” in reviews of male professors). But even if it were possible to adjust for this, it would only be partially justified. If women are reviewed differently because a different sort of student takes their courses, the fact of the difference in their evaluations remains.

I promised Matt Jockers I’d put together a slightly longer explanation of the weird constraints I’ve imposed on myself for topic models in the Bookworm system, like those I used to look at the breakdown of typical TV show episode structures. So here they are.

I’ve been thinking a little more about how to work with the topic modeling extension I recently built for bookworm. (I’m curious if any of those running installations want to try it on their own corpus.) With the movie corpus, it is most interesting split across genre; but there are definite temporal dimensions as well. As I’ve said before, I have issues with the widespread practice of just plotting trends over time; and indeed, for the movie model I ran, nothing particularly interesting pops out. (I invite you, of course, to tell me how it is interesting.)

Lately I’ve been seeing how deeply we could integrate topic models into the underlying Bookworm architecture.

My own chief interest in this, because I tend to be a little wary of topic models in general, is in the possibility for Bookworm to act as a diagnostic tool internally for topic models. I don’t think simply plotting a topic’s description, absent any analysis of the underlying token composition of topics, is all that responsible; Bookworm offers a platform for actually accessing those counts and testing them against metadata.

This is a post about several different things, but maybe it’s got something for everyone. It starts with 1) some thoughts on why we want comparisons between seasons of the Simpsons, hits on 2) some previews of some yet-more-interesting Bookworm browsers out there, then 3) digs into some meaty comparisons about what changes about the Simpsons over time, before finally 4) talking about the internal story structure of the Simpsons and what these tools can tell us about narrative formalism, and maybe why I’d care.

Like many technically inclined historians (for instance, Caleb McDaniel, Jason Heppler, and Lincoln Mullen) I find that I’ve increasingly been using the plain-text format Markdown for almost all of my writing.

I thought it would be worth documenting the difficulty (or lack thereof) in building a Bookworm on a small corpus: I’ve been reading too much lately about the Simpsons thanks to the FX marathon, so figured I’d spend a couple hours making it possible to check for changing language in the longest-running TV show of all time.

Here’s a very technical, but kind of fun, problem: what’s the optimal order for a list of geographical elements, like the states of the USA?

If you’re just here from the future, and don’t care about the details, here’s my favorite answer right now:

["HI","AK","WA","OR","CA","AZ","NM","CO","WY","UT","NV","ID","MT","ND","SD","NE","KS","IA","MN","MO","OH","MI","IN","IL","WI","OK","AR","TX","LA","MS","AL","TN","KY","GA","FL","SC","WV","NC","VA","MD","DE","PA","NJ","NY","CT","RI","MA","NH","VT","ME"]

String distance measurements are useful for cleaning up the sort of messy data that comes from multiple sources.

There are a bunch of string distance algorithms, which usually rely on some form of calculation about the similarity of characters. But in real life, characters are rarely the relevant units: you want a distance measure that penalizes changes to the most information-laden parts of the text more heavily than changes to the parts that are filler.
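The post doesn’t pin down an implementation, but here’s a minimal sketch of one way to do this: a token-level Levenshtein distance where each edit is weighted by an IDF-style score, so changing a rare, informative token costs more than changing filler. The corpus and record strings below are invented for illustration.

```python
# A minimal sketch of an information-weighted edit distance: Levenshtein over
# tokens, with edits weighted by an IDF-style score so that changing a rare,
# informative token costs more than changing filler. Corpus is made up.
import math
from collections import Counter

corpus = [
    "university of nebraska press",
    "university of chicago press",
    "oxford university press",
    "harvard university press",
]
doc_freq = Counter(tok for doc in corpus for tok in set(doc.split()))
n_docs = len(corpus)

def weight(token):
    """IDF-style weight: common tokens ('university', 'press') cost little to edit."""
    return math.log((n_docs + 1) / (doc_freq.get(token, 0) + 1)) + 1.0

def weighted_distance(a, b):
    """Token-level Levenshtein where insert/delete/substitute costs use weight()."""
    a, b = a.split(), b.split()
    d = [[0.0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        d[i][0] = d[i - 1][0] + weight(a[i - 1])
    for j in range(1, len(b) + 1):
        d[0][j] = d[0][j - 1] + weight(b[j - 1])
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = 0.0 if a[i - 1] == b[j - 1] else max(weight(a[i - 1]), weight(b[j - 1]))
            d[i][j] = min(
                d[i - 1][j] + weight(a[i - 1]),  # delete a token from a
                d[i][j - 1] + weight(b[j - 1]),  # insert a token from b
                d[i - 1][j - 1] + sub,           # substitute (free if equal)
            )
    return d[-1][-1]

# Dropping the filler word "press" is cheaper than swapping the informative one:
print(weighted_distance("university of nebraska press", "university of nebraska"))
print(weighted_distance("university of nebraska press", "university of chicago press"))
```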