I sometimes use a wheelchair and would appreciate the extra efficiency of getting help without having to find, or be seen by, staff. It would also be good if the AI could be trained to spot broken lifts (e.g. if no one has used one for an unusual number of hours), which are a nightmare when a single broken lift makes the station totally inaccessible, and if TfL shared that information with the likes of Google Maps. AI is not going anywhere, and the benefits are so strong that it seems obvious to implement this and refine the use cases and training over time, whilst ensuring reasonable safeguards remain in place for privacy and the like. A fascinating read, but I am disappointed the PAF was not mentioned.
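For what it's worth, the "unusually long idle" idea could be as simple as comparing the gap since a lift's last detected use against its typical gap for that time of day. A minimal sketch of that heuristic; all names, thresholds and data here are my own illustration, not anything TfL actually does:

```python
from datetime import datetime

# Hypothetical heuristic: flag a lift as possibly broken when it has been
# idle far longer than is typical for that station and time of day.
# typical_gap_hours would come from historical usage data.

def lift_possibly_broken(last_use: datetime, now: datetime,
                         typical_gap_hours: float,
                         factor: float = 4.0) -> bool:
    """Flag the lift if its idle time exceeds `factor` times the usual gap."""
    idle_hours = (now - last_use).total_seconds() / 3600.0
    return idle_hours > factor * typical_gap_hours

# Midday at a busy station: a rider roughly every 15 minutes is normal,
# so three hours of silence looks suspicious.
now = datetime(2024, 3, 1, 12, 0)
print(lift_possibly_broken(datetime(2024, 3, 1, 9, 0), now, 0.25))   # True
print(lift_possibly_broken(datetime(2024, 3, 1, 11, 45), now, 0.25)) # False
```

The per-hour baseline matters: comparing against a single daily average would either miss daytime outages or constantly false-alarm overnight.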
I've often wanted this in Morrisons: for the cameras to detect that there's a customer repeatedly circling the same 3 aisles, scanning the shelves and getting steadily more agitated, then send a member of staff over to ask what it is that I can't find.
Though if the system could predict the specific item that I was looking for, that'd probably be going too far …
I'm sure I remember reading a long time ago that TfL had a policy of not letting graffitied trains leave the depot. That's TfL putting broken windows theory into practice: as soon as there's vandalism, act immediately to discourage copycats. I'm not sure whether this is the main reason graffiti on the Tube has become so rare. Remember this chap? https://en.wikipedia.org/wiki/Daniel_Halpin
I’m more optimistic on the “terrifying” part, I think. Some of the problems are technical (e.g. bias) and are (broadly) solvable as long as we’re aware of the risk and agree it needs mitigation. Others are issues of law (giving us a “better” means of doing something illegal does not actually make it legal) or of societal norms (which are convertible into law, in principle). I have a basic faith, however unjustified, in our ability to address those issues rather than treating them as intractable and banning the tech outright. Perhaps we might even start paying more attention to them *because* of the tech…
You mention that "the AI was programmed to alert staff if a person was sat on a bench for longer than ten minutes or if they were in the ticket hall for longer than 15 minutes". Given that it's a train station, doesn't that mean it will be flagging up people waiting for trains (or waiting to pick up someone arriving by train)? After a while, an alert that frequent will simply get ignored.
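To make the false-positive worry concrete, the rule as quoted is just two fixed thresholds, so any ordinary wait trips it. A sketch; only the ten- and 15-minute limits come from the article, while the function name and location labels are my own framing:

```python
# The two dwell-time limits quoted in the article; everything else here
# (function name, location labels) is illustrative.
BENCH_LIMIT_MIN = 10
TICKET_HALL_LIMIT_MIN = 15

def should_alert(location: str, dwell_minutes: float) -> bool:
    """Apply the quoted rule: alert on a long dwell time in either zone."""
    limits = {"bench": BENCH_LIMIT_MIN, "ticket_hall": TICKET_HALL_LIMIT_MIN}
    limit = limits.get(location)
    return limit is not None and dwell_minutes > limit

# Someone standing in the ticket hall for 20 minutes to meet an arriving
# passenger already triggers an alert:
print(should_alert("ticket_hall", 20))  # True
print(should_alert("bench", 8))         # False
```

A fixed threshold like this has no way to distinguish "waiting for a delayed train" from anything genuinely concerning, which is presumably why the alerts would pile up.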
Also, on the efficiency point: on the face of it this would lead to an increase in efficiency for the public, but only if staffing levels are maintained. If it's used to reduce staff levels, because management feel the AI is doing part of the work, then it could actually lead to a decrease in service.
It was noticeable that all the advantages relied on staff actually responding (e.g. to the fare dodging), and management have already made it pretty clear over the last decade that they would really like to reduce the number of people working at stations. So, is there a danger that the technology is only used to reduce the number of people "needed" to provide a minimum service?
Having recently spent some time in hospital, I found myself wondering whether some kind of panopticon AI surveillance system would be beneficial. Obviously the privacy implications are extreme, but staffing was so thin that I strongly suspect mistakes of varying degrees of severity were being made and not caught. I imagine something that could note the basic "admin and maintenance events" (when staff attended, however minor the reason; drug rounds; blood tests; whether the water needs topping up) and could then go further and perform basic observations such as "asleep", "awake" or "in distress". The apparent success of the TfL trial, using old cameras and local processing, suggests this might not actually be too hard to do. And, in a similar vein to the trial, it might turn out to be a case of "if you won't or can't pay for the necessary number of human staff, then you have little choice if and when a cheap AI system becomes available and can fill in (some of) the gaps"...
I felt slightly worried, even in the optimistic part, about the intrusion of AI. Whilst I recognise and welcome the advances the technology makes possible, it surely makes station staff's jobs more stressful, as they constantly need to be on alert and will be pulled up for not being on alert. As someone who works in a manager-type role and struggles to focus for more than four or five hours of work a day, I would worry about the quality of work for the staff, particularly the loss of autonomy, and how this would affect them.
I hadn't realised this would all be done with existing camera hardware (even if it is ageing somewhat).
Can we train every traffic camera in the country to recognise littering from cars? It could send an alert to a Litter Task Force somewhere, who could review the footage and issue a fine.
Regarding the comment about AI ticket barriers: this has already been researched.
https://www.ianvisits.co.uk/articles/the-future-of-rail-travel-will-be-ticketless-22068/
This is fascinating - thank you.
Indeed, the AI ticket barriers might work if you discount the pensioners who, perhaps, do not link their payment methods.
We Must Dissent.