I am Vice President of Information Design at Nomic, where I work on new interfaces for interpreting and visualizing embedding models. For several years before that, I was a professor in the history departments at Northeastern and then NYU, where I worked with and led digital humanities groups deploying new approaches to data analysis and data visualization to help people think about the past. I have also written about higher education (teaching evaluations and humanities policy), narrative anachronism and plot structure, and political history.
I live in Manhattan.
Blaming the humanities fields for their travails recently can seem as sensible as blaming polar bears for not cultivating new crops as the arctic warms. It’s not just that it places the blame for a crisis in the fundamentally wrong place; it’s that it
It’s coming up on a year since I last taught graduate students in the humanities.
Last week we released a big data visualization in collaboration with the Berens Lab at the University of Tübingen. It presents a rich, new interface for exploring an extremely large textual collection.
Yesterday was a big day for the Web: Chrome just shipped WebGPU without flags in the Beta for Version 113. Someone on Nomic’s GPT4All discord asked me to ELI5 what this means, so I’m going to cross-post it here—it’s more important than you’d think for both visualization and ML people.
This is a Twitter thread from March 14 that I’m cross-posting here. Nothing massively original below. It went viral because I was one of the first to extract the ridiculous paragraph below from the release of GPT-4, and because it expresses some widely shared concerns.
Recently, Marymount, a small Catholic university in Arlington, Virginia, has been in the news for a draconian plan to eliminate a number of majors, ostensibly to better meet student demand. I’ve since learned the university leadership has been circulating one of my charts to justify the decision, so I thought I’d chime in on the context a bit. My understanding of the situation, primarily informed by the coverage in ARLNow, is that this seems like a bad plan,
I sure don’t fully understand how large language models work, but I’m not alone in that. In the discourse over the last week about the Bing/Sydney chatbot, though, there’s one pretty basic category error I’ve noticed a lot of people making: thinking that there’s some entity you’re talking to when you chat with a chatbot. Blake Lemoine, the Google employee who torched his career over the misguided belief that a Google chatbot was sentient, was the first but surely not the last of what will be an increasing number of people who think they’ve talked to a ghost in the machine.
I attended the American Historical Association’s conference last week, possibly for the last time now that I’ve given up history professoring. Since then, the collapse of hiring prospects in history has been on my mind. See Erin Bartram, Kathryn Otrofsky, and Daniel Bessner on the way this AHA was haunted by a sense of terminal decline in the history profession. It motivated me to look a bit at something I’ve thought about several times over the years: what happens to people after they receive a PhD in history?
The collapse of Twitter under Elon Musk over the last few months feels, in my corner of the universe, like something potentially a little more germinal; unlike in the various Facebook exoduses of the 2010s, I see people grasping toward different models for the architecture of the Web. Mastodon itself (I’ve ended up at @firstname.lastname@example.org for the time being) seems so obviously imperfect that its imperfections are a selling point; it’s so hard to imagine social media staying on a Rails application for the next decade that using it feels like a bet on the future, because everyone now knows they need to be prepared to migrate again.