February Newsletter - One More Day

Categories: newsletter
Author: Jilly MacKay
Published: February 29, 2024

Work thoughts

I’m going to leave this post as-is for reference, but I have since had a chat with StudentVoice.ai, and they’ve gone a long way towards reassuring me: what they do would be better termed a classification model, and they very firmly tell users that they should still read their free-text data. So I feel a lot more comfortable about our use of them now (and I think they want to update their website a bit!)

AI has come up at work a lot recently. A colleague (whose suggestion surprised me!) proposed we use it to do a literature analysis, and apparently we at Edinburgh are using AI to analyse the free-text comments from student feedback.

I feel deeply uncomfortable about all of this. I have written about how I think we should make more use of open and reusable approaches to student feedback in higher ed, and the use of a black-box algorithm is a horrendous step in the wrong direction. The StudentVoice.ai site talks about using machine learning models (which always brings me back to my question of whether a trained logistic regression counts as AI) to ‘analyse’ what feedback is saying. Their marketing material contains this highlight:

Consistent

By applying our models across text from all of your surveys, you can make direct comparisons between your NSS, module evaluation, pulse and student experience surveys

But I can’t see how that can be the case. If a model is fitted to the data, then the same model is not being applied to different datasets. If it’s not fitted to the data, then what are its specifications? How will it pick up something novel to a particular context?
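The distinction I’m drawing can be sketched in a few lines of R. This is a toy illustration of the two options, not StudentVoice.ai’s actual pipeline (which I have no visibility into): refitting a model on each survey gives you different coefficients each time, while applying one pre-trained model everywhere is at least consistent, but can only ever score the patterns it was trained on.

```r
# Toy data: a single word-count feature vs a "negative comment" label,
# with the two surveys drawn from different distributions.
set.seed(42)
survey_a <- data.frame(n_neg_words = rpois(200, 2))
survey_a$negative <- rbinom(200, 1, plogis(survey_a$n_neg_words - 2))
survey_b <- data.frame(n_neg_words = rpois(200, 4))
survey_b$negative <- rbinom(200, 1, plogis(survey_b$n_neg_words - 2))

# Option 1: refit per survey. The fitted coefficients (and hence any
# "themes" derived from them) differ between surveys.
fit_a <- glm(negative ~ n_neg_words, family = binomial, data = survey_a)
fit_b <- glm(negative ~ n_neg_words, family = binomial, data = survey_b)

# Option 2: one pre-trained model applied to both. The scores are
# directly comparable across surveys, but the model can never flag
# anything outside the categories it was originally trained on.
scores_a <- predict(fit_a, newdata = survey_a, type = "response")
scores_b <- predict(fit_a, newdata = survey_b, type = "response")
```

You can’t have it both ways: either the model is the same everywhere (option 2, “consistent” but blind to novelty), or it adapts to each dataset (option 1, responsive but not directly comparable).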

I had a College QA meeting this week where someone highlighted that they thought students saw themselves in an ‘us vs them’ relationship with staff against the wider University, which is exactly what a large research project I conducted into the NSS showed several years ago. I would greatly welcome StudentVoice.ai, or any of its competitors, redoing our analysis on that data to see what it comes up with.

You’ll have seen, I’m sure, Glasgow’s latest chocolate-lacking, AI-fuelled drama. I do wonder when we’ll get an AI coming up with some kind of hallucinated theme from this student data. How much money will we spend pursuing it?

Blog Updates

I’ve decided not to renew Fluffy Sciences this year; it doesn’t have as much relevance to my professional identity any more. I’ve ported a few favourite posts over.

Stuff I found:

A warning about ChatGPT from an EDI perspective, on Mastodon here

This was fun: I really enjoyed this challenge on Mastodon.

My answer was posted in reply, but for funsies let’s put it here too:

library(tidyverse)

# The six-stripe pride flag palette
pride_pal <- c("#E40303",
               "#FF8C00",
               "#FFED00",
               "#008026",
               "#24408E",
               "#732982")

# Six identical segments, one per stripe
fabdat <- tibble(x = c(1, 1, 1, 1, 1, 1),
                 y = c(1, 1, 1, 1, 1, 1),
                 fill = c("1", "2", "3",
                          "4", "5", "6"))

# Stack the segments into a single bar, colour each by the palette,
# and strip all axes and legends
fabdat |>
  ggplot(aes(x, y, fill = fill)) +
  geom_col() +
  scale_fill_manual(values = pride_pal) +
  theme_void() +
  theme(legend.position = "none")