Encouraging women studying in STEM: why I want universities to publish open data
I almost failed statistics last semester.
The experience reignited old insecurities about being bad at maths. But it also made me realise just how little I actually knew about the unit I was taking: how people had performed in the past; how women were faring in it; whether others were struggling with the content or the teaching approach. Any of this could have helped put my insecurities in perspective.
Without any of this information, I filled the vacuum with a nagging sense of self-doubt.
Universities, cough up your data
This year I’ve been balancing work alongside a master’s in applied data analytics - a mix of computer science, statistics and social research units. My experiences so far have me thinking about whether universities should be compelled to make public - or at least share with students - certain kinds of data. Data like:
- The performance of past unit cohorts: median and mean marks, broken down by gender and by tutor. This could help people, especially women, understand their individual performance in context. It could also help identify where in STEM women are, on average, underperforming or dropping out of units, and figure out why (see the sketch after this list).
- Aggregated results from student feedback on unit content and the learning experience. We fill out these surveys for every unit, but the information they gather about teaching quality is never made available. Knowing how past students have rated a unit - even at a high level - can help students make informed decisions about which units to enrol in.
- Basic contextual data about cohort composition: non-personal data like the total number of students enrolled in a unit, the gender breakdown, and the split between domestic and international students. This kind of information helps make sense of cohort averages and trends.
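To make the first of these concrete, here’s a minimal sketch of the kind of aggregation I have in mind, in Python with pandas. Everything in it is hypothetical - the table, the column names, the suppression threshold are all invented for illustration. The point is that publishing medians and means by gender and tutor, with small groups suppressed for privacy, is only a few lines of code once the data exists:

```python
import pandas as pd

# Hypothetical, de-identified unit records. All column names and values
# are invented for illustration; a real table would come from the university.
records = pd.DataFrame({
    "tutor": ["A", "A", "B", "B", "B", "C"],
    "gender": ["F", "M", "F", "F", "M", "F"],
    "final_mark": [62, 71, 48, 55, 67, 80],
})

# Suppress small groups before publishing, so no individual is identifiable.
# A real threshold would be larger (e.g. 10); 2 keeps this toy demo non-empty.
MIN_CELL = 2

summary = (
    records.groupby(["tutor", "gender"])["final_mark"]
    .agg(students="count", median_mark="median", mean_mark="mean")
    .reset_index()
)

# Publish only the cells large enough to protect privacy.
print(summary[summary["students"] >= MIN_CELL])
```

The suppression threshold is the key design choice here: it trades a little completeness for protection against re-identifying individual students.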
I’ve already had arguments with academic friends about why this couldn’t work. Because academic performance is measured in terms of publications and research money, not teaching quality. Because all of the major university rankings focus primarily on research, and so universities aren’t ultimately very interested in teaching. Because publishing data about poor-quality units would make lecturers feel bad. Because one university couldn’t do it alone; all universities would need to.
I’ve thought a lot about these reasons for not publishing data, and I think the arguments in favour of publishing are stronger.
To start with, students are paying more for university degrees every year; teaching quality should arguably be more of a concern. Greater visibility of unit quality and performance could drive more focus on teaching outcomes, and even increase competition between and within universities to attract students. It would give universities a way to distinguish themselves other than through research rankings.
Without open data, it’s also hard to get a handle on questions like whether women are underperforming in STEM subjects in Australia, whether they’re more likely to drop out of STEM units, and why. The data can be collated, but it isn’t reflected in the summary statistics published by bodies like the Department of Education or Universities Australia, and universities don’t individually report on it either.
There’s not much incentive for universities to share data about the performance of women as part of student cohorts, or to share data about student performance and teaching quality generally. It could expose weaknesses. It could expose systemic issues with certain teachers or units. But as US artist Mimi Onuoha has said, for every dataset where there’s an impetus for someone not to collect it, there’s a group of people who would benefit from its presence.
Weeding out bad STEM experiences
My results so far in my master’s have been good. I’m sitting on 90% for one applied research unit, and received 78% for my relational database design unit. But when I got to statistics, I flopped.
My statistics experience ticked a lot of the STEM stereotypes. Of seven tutors, one was female. The unit coordinator was male. Throughout the unit, it wasn’t uncommon for my tutor and the lecturer to say things like, “if you’re not getting this yet, this subject may not be for you” and “if you’re finding this difficult, you will find every unit that follows it difficult”. The implication being: if you’re struggling, it’s because you’re ‘not good’ - not because of the teaching approach.
Research has shown that women are less likely to be confident about their maths and science ability, and that they’re more likely to drop out of STEM subjects because of it.
On multiple occasions I saw women crying during lectures. Once, when I went to see the unit coordinator about a particularly thorny practice exam problem, a woman waiting outside asked me anxiously how I was going, and confessed that she was dropping out. When I failed the mid-semester exam, I cried too. My poor performance was evidence that I was bad at maths.
What I didn’t know then was that I was one of about 350 people who failed the mid-semester exam. It felt like it was just me. We were told the median mark (54%) and the average mark (53%), but I had no sense of scale - no sense that I was one of over 700 people enrolled in the unit.
After results were distributed, rumours also started flying around about how much ‘harder’ the unit had become since this particular coordinator had taken it over. How marks were always poor, and were ultimately heavily scaled. These were only rumours, but they reduced my sense that I was uniquely terrible at statistics.
In parallel, on Twitter, a range of brilliant people working in web development, data science and senior leadership positions shared their own experiences of failing maths and of repeating units at university (sometimes more than once!).
The more information I had to put my own mid-semester exam result in perspective, the less I felt like I was innately bad at maths. My confidence improved. I worked doubly hard in preparation for the final exam and passed the unit comfortably.
Open data about past and present cohort experiences - their results, their feedback on unit content and tutors - isn’t just good for improving teaching quality. It can help people put their own performance in context, and see that often their struggle is a shared one. It can help to expose structural and cultural barriers to participation.
I had a bad experience with one particular unit, but I also know lots of lecturers and unit teams who are passionate about improving learning outcomes (my relational database unit, run through the computer science faculty, was great by contrast!). They use data like student feedback to improve unit content and delivery methods. They’re developing tools to help identify students who might be struggling and provide better support.
The problem is, these kinds of practices aren’t evenly distributed. And in the subject areas where we say we care most about increasing diversity (like maths and science), our expectations of teaching standards should be high. Greater transparency can only help improve teaching across the board and give students better-informed choices.