Not quite finding ada: some thoughts on ethics, gender and the humanization of chatbots

Over the past month I’ve spoken at a number of events where I have explicitly called for more critical and ethical discussion around the use of data and the implementation of any “digital” system, not only in universities but throughout the education sector.

So I need to walk that walk and not just spout rhetoric. This week at the ALT Scotland meeting, there was a presentation about a digital assistant (chatbot) system which has been developed by Bolton College. Aftab Hussain and his team have developed a pretty impressive system that allows staff and students to access a range of data (information) about their timetables, exams, and where to find out about services. You can read more about their work here.

This is all “exciting stuff” and seeing and hearing the real-time responses was pretty impressive. The college are very lucky to have people like Aftab and his team who are able to develop this kind of system in-house. I don’t think many colleges, or universities for that matter, have a team of developers who can do, or have the time to do, this type of work. This post is not criticising their work; it is just sharing some wider questions and thoughts that it raised for me around the development and implementation of systems like this in education.

Firstly, this system has been called ada. Throughout the presentation the system was referred to as “she”. Humanization coupled with classic gender-bias stereotyping of the helpful, subservient, “user friendly” female. The humanization of such systems troubles me. The more it was referred to as “she”, the more agitated I got. Ada is not a person; it is a system linking APIs and processing data from multiple sources.

This led to questions around ethics and the who, where, what and when of any data processing. And I was glad that ethics were highlighted in the presentation. But wouldn’t you know it, this is all GDPR compliant. Well, most of it is, apart from some fuzziness around the use of voice-activated systems like Alexa (hello, Google, you data-loving monster).

I am increasingly seeing GDPR around institutional systems treated both as an assumption of privacy and data protection for users and as a great get-out excuse for not doing things.

I wonder if all the users of digital assistants really understand the implications of where their data is going or how it is being used. Whilst data may be anonymized, I kind of suspect that, in this case, Wolfram Alpha will be able to use patterns of queries to develop more (biased) algorithms.

So whilst I can see the benefits of not having to trawl around websites trying to find information about bursaries, timetables etc., and I know that many students don’t want to, or perhaps don’t know who to, ask for help, I have to say I was impressed by what must be a pretty robust institutional data architecture. But I couldn’t help wondering: who is making the decisions about what data is added to the system? The low-hanging fruit (haven’t used that phrase in a while) is all there, but what next?

Whilst I was at the loo after the session, I noticed that there were free sanitary products – what a great idea. Sadly we have period poverty in this country, and having access to free sanitary products in colleges is wonderful. Asking where to find free sanitary products could be quite embarrassing on several levels for lots of women. Wouldn’t it be great for that to be included in a digital assistant? I wonder how many typical (and by typical, I mean male) developers would think of adding that to the system, or highlighting it as a key feature? Hello, Caroline Criado-Perez’s Invisible Women: Exposing Data Bias in a World Designed for Men.

Broadening understanding of digital assistants – what they really are, what they can do and what they could or shouldn’t do – is, I feel, increasingly an area where education should be taking the lead. I can’t help thinking that there is an opportunity for educational developers and researchers to work with central teams like the one in Bolton to develop an approach similar to research ethics applications for this kind of work. It’s not enough just to wave GDPR and check that data box.

Surely that would help to broaden understandings of terms such as “risk”. In ethics applications you have to be explicit about any risks to your subjects. Sharing data in this day and age is a huge risk.

Again, during the presentation it was highlighted that staff could ask the system to show them students “at risk” in their courses. Risk in this sense was based, I presume, on assessment activity and VLE data – so students at risk of failing. But there are a lot of other nuances of risk (including mental health) that our current student data doesn’t, and quite probably shouldn’t ever be able to, indicate.
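To illustrate just how crude that kind of proxy can be, here is a minimal sketch of an “at risk” rule built only from assessment and VLE activity data. This is entirely my own illustrative assumption of how such a flag might work, not Bolton’s actual implementation, and the record fields and thresholds are invented:

```python
from dataclasses import dataclass


@dataclass
class StudentRecord:
    """A toy record of the kind of data a VLE might hold about a student."""
    student_id: str
    assessments_submitted: int
    assessments_due: int
    vle_logins_last_30_days: int


def at_risk(record: StudentRecord,
            min_submission_rate: float = 0.5,
            min_logins: int = 5) -> bool:
    """Flag a student as 'at risk' of failing.

    Deliberately crude: it only sees missed assessments and low VLE
    activity, and knows nothing about health, caring responsibilities,
    finances or anything else going on in a student's life.
    """
    submission_rate = (
        record.assessments_submitted / record.assessments_due
        if record.assessments_due
        else 1.0
    )
    return (submission_rate < min_submission_rate
            or record.vle_logins_last_30_days < min_logins)


# A student who has submitted everything but rarely logs in gets flagged,
# while one quietly struggling with their mental health may not be.
print(at_risk(StudentRecord("s123", assessments_submitted=4,
                            assessments_due=4, vle_logins_last_30_days=2)))
```

Anything that doesn’t show up in those two columns of data is simply invisible to a rule like this.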

I also heard the phrase “calm technology” for the first time. Calm or controlling? We will drip-feed you the data we think you need, and lull you into acceptance . . . and when you hear “I’m sorry, I don’t have that information, I’m sorry I can’t answer your question” we will send you a video to divert your attention to something we think you might like, based on the “calm” experiences of 6 million other users. We will do as much as we possibly can to divert you from speaking to an actual person . . . Sorry, I might have got carried away there, but there is more than a hint of “unexpected item in the bagging area” about all of this.

So, whilst I can see the appeal of digital assistants, I really think we need to have some wider discussions and debates about just what they are, and who is involved in developing and evaluating them.  
