Ben Schmidt
I am Vice President of Information Design at Nomic, where I work on new interfaces for interpreting and visualizing embedding models. For several years before that, I was a professor in the history departments at Northeastern and then NYU, where I worked with and led digital humanities groups deploying new approaches to thinking about the past through data analysis and data visualization. I have also written about higher education (teaching evaluations and humanities policy), narrative anachronism and plot structure, and political history.
I live in Montclair, NJ and work in Manhattan.
For a third-person bio or photo, click here.
Recent Blog Posts
Although I’ve given up on historically professing myself, I still have a number of automated scripts for analyzing the state of the historical profession hanging around. Since a number of people have asked for updates, it seems worth running them again. As a reminder, I’m scraping H-Net for listings; when I’ve compared those against job ads from the American Historical Association’s website, the numbers seem roughly comparable.
Blaming the humanities fields for their recent travails can seem as sensible as blaming polar bears for failing to cultivate new crops as the Arctic warms. It’s not just that it places the blame for a crisis in fundamentally the wrong place; it’s that it…
It’s coming up on a year since I last taught graduate students in the humanities.
Last week we released a big data visualization in collaboration with the Berens Lab at the University of Tübingen. It presents a rich new interface for exploring an extremely large textual collection.
Yesterday was a big day for the Web: Chrome just shipped WebGPU without flags in the Beta for Version 113. Someone on Nomic’s GPT4All Discord asked me to ELI5 what this means, so I’m going to cross-post it here; it’s more important than you’d think for both visualization and ML people. (thread)
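(A rough illustration of my own, not from the post: “without flags” means a stock Chrome 113 page can now detect and initialize WebGPU directly, with no chrome://flags toggle needed. The sketch below assumes TypeScript with the @webgpu/types declarations installed.)

```typescript
// Minimal WebGPU feature check: this is what now succeeds in stock
// Chrome 113+ without any flags enabled.
async function probeWebGPU(): Promise<void> {
  // navigator.gpu is the WebGPU entry point; it is absent in browsers
  // that don't support the API.
  if (!("gpu" in navigator)) {
    console.log("WebGPU unavailable: navigator.gpu is not exposed.");
    return;
  }
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    console.log("WebGPU exposed, but no suitable GPU adapter found.");
    return;
  }
  // A GPUDevice is the handle used to build compute and render pipelines.
  const device = await adapter.requestDevice();
  console.log("WebGPU device ready:", device.label);
}

probeWebGPU();
```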
This is a Twitter thread from March 14 that I’m cross-posting here. Nothing massively original below. It went viral because I was one of the first to extract the ridiculous paragraph below from the release of GPT-4, and because it expresses some widely shared concerns.
Recently, Marymount, a small Catholic university in Arlington, Virginia, has been in the news for a draconian plan to eliminate a number of majors, ostensibly to better meet student demand. I learned that the university leadership has been circulating one of my charts to justify the decision, so I thought I’d chime in with some context. My understanding of the situation, primarily informed by the coverage in ARLNow, is that this seems like a bad plan.1
I sure don’t fully understand how large language models work, but in that I’m not alone. Still, in the discourse over the last week about the Bing/Sydney chatbot, there’s one pretty basic category error I’ve noticed a lot of people making: thinking that there’s some entity you’re talking to when you chat with a chatbot. Blake Lemoine, the Google employee who torched his career over the misguided belief that a Google chatbot was sentient, was the first, but surely not the last, of an increasing number of people who think they’ve talked to a ghost in the machine.1