Pwyllgor yr Economi, Masnach a Materion Gwledig

Economy, Trade, and Rural Affairs Committee

05/12/2024

Aelodau'r Pwyllgor a oedd yn bresennol

Committee Members in Attendance

Alun Davies Yn dirprwyo ar ran Hannah Blythyn am ran o'r cyfarfod
Substitute for Hannah Blythyn for part of the meeting
Hefin David
Jenny Rathbone
Luke Fletcher
Paul Davies Cadeirydd y Pwyllgor
Committee Chair
Samuel Kurtz

Y rhai eraill a oedd yn bresennol

Others in Attendance

Ceri Williams TUC Cymru
Wales TUC
Felix Milbank Ffederasiwn Busnesau Bach
Federation of Small Businesses
Gian Marco Currado Cyfarwyddwr, Materion Gwledig, Llywodraeth Cymru
Director, Rural Affairs, Welsh Government
Huw Irranca-Davies Y Dirprwy Brif Weinidog ac Ysgrifennydd y Cabinet dros Newid Hinsawdd a Materion Gwledig
Deputy First Minister and Cabinet Secretary for Climate Change and Rural Affairs
Klaire Tanner CreuTech
CreuTech
Matt Buckley United Tech and Allied Workers
United Tech and Allied Workers
Matt Davies Ada Lovelace Institute
Ada Lovelace Institute
Paul Teather AMPLYFI
AMPLYFI
Yr Athro Alun Preece Prifysgol Caerdydd
Cardiff University
Yr Athro Lina Dencik Goldsmiths, Prifysgol Llundain a Data Justice Lab
Goldsmiths, University of London and Data Justice Lab
Yr Athro Rossi Setchi Prifysgol Caerdydd
Cardiff University
Richard Irvine Prif Swyddog Milfeddygol, Llywodraeth Cymru
Chief Veterinary Officer, Welsh Government

Swyddogion y Senedd a oedd yn bresennol

Senedd Officials in Attendance

Aled Evans Cynghorydd Cyfreithiol
Legal Adviser
Elfyn Henderson Ymchwilydd
Researcher
Gareth Howells Cynghorydd Cyfreithiol
Legal Adviser
Lucy Morgan Ymchwilydd
Researcher
Madelaine Phillips Ymchwilydd
Researcher
Nicole Haylor-Mott Dirprwy Glerc
Deputy Clerk
Rachael Davies Ail Glerc
Second Clerk
Robert Donovan Clerc
Clerk
Sara Moran Ymchwilydd
Researcher

Cofnodir y trafodion yn yr iaith y llefarwyd hwy ynddi yn y pwyllgor. Yn ogystal, cynhwysir trawsgrifiad o’r cyfieithu ar y pryd. Lle mae cyfranwyr wedi darparu cywiriadau i’w tystiolaeth, nodir y rheini yn y trawsgrifiad.

The proceedings are reported in the language in which they were spoken in the committee. In addition, a transcription of the simultaneous interpretation is included. Where contributors have supplied corrections to their evidence, these are noted in the transcript.

Cyfarfu’r pwyllgor yn y Senedd a thrwy gynhadledd fideo.

Dechreuodd y cyfarfod am 09:32.

The committee met in the Senedd and by video-conference.

The meeting began at 09:32.

1. Cyflwyniadau, ymddiheuriadau, dirprwyon a datgan buddiannau
1. Introductions, apologies, substitutions and declarations of interest

Croeso, bawb, i'r cyfarfod hwn o Bwyllgor yr Economi, Masnach a Materion Gwledig. Mae Hannah Blythyn wedi anfon ei hymddiheuriadau ar gyfer y cyfarfod heddiw, a bydd Alun Davies yn dirprwyo ar gyfer eitem 6. A oes yna unrhyw fuddiannau hoffai Aelodau eu datgan o gwbl? Na.

A very warm welcome to you all to this meeting of the Senedd's Economy, Trade and Rural Affairs Committee. Hannah Blythyn has sent apologies for today's meeting, and Alun Davies will substitute for item 6. Are there any declarations of interest that Members have? No.

2. Papurau i’w nodi
2. Papers to note

Symudwn ni ymlaen, felly, i eitem 2, sef papurau i'w nodi. Mae yna chwe phapur i'w nodi. O ran eitem 2.1, bydd y pwyllgor yn ystyried drafft cychwynnol ei adroddiad ar y memorandwm cydsyniad deddfwriaethol yn y sesiwn breifat. Ond oes yna unrhyw faterion yn codi o'r papurau yma o gwbl? Na.

We will move on, therefore, to item 2, papers to note. We have six papers to note. In relation to item 2.1, the committee will consider an initial draft of its report on the legislative consent memorandum in private session. But are there any issues arising from these papers to note? No.

3. Deallusrwydd Artiffisial ac Economi Cymru - Panel 1 - Arbenigwyr annibynnol
3. AI and the Welsh Economy - Panel 1 - Independent experts

Symudwn ni ymlaen, felly, i eitem 3, a dyma'r panel cyntaf ar ymchwiliad undydd y pwyllgor i ddeallusrwydd artiffisial ac economi Cymru. Gaf i felly estyn croeso cynnes i'n tystion ni heddiw i'r sesiwn yma? Cyn ein bod ni yn symud yn syth i gwestiynau, gaf i ofyn iddyn nhw gyflwyno eu hunain i'r record? Efallai gallaf i ddechrau gyda'r Athro Alun Preece.

We will therefore move on to item 3, and it's our first panel in our one-day inquiry into artificial intelligence and the Welsh economy. May I therefore extend a very warm welcome to our witnesses today to this session? Before we move to questions, could I ask them to introduce themselves for the record? Perhaps I could start with Professor Alun Preece.

Diolch, Paul. I'm Professor Alun Preece from Cardiff University. Here today I'm really representing the Hartree Centre Cardiff Hub, on which I'm sure I'll say a bit more as the discussion goes on, but I'm really pleased to receive the invitation, and thank you for giving me this opportunity. 

Hello. Good morning. Bore da. I am Rossi Setchi. I am a professor in high-value manufacturing at the School of Engineering of Cardiff University. I am the director of the research Centre for Artificial Intelligence, Robotics and Human-machine Systems, which is a collaboration of engineering with computer science and psychology. Thank you.

Hello. Bore da. My name is Matt Davies. I'm economic and social policy lead at the Ada Lovelace Institute. We're an independent research institute based in London and in Brussels, and our mission is to make data and AI work for people in society. So, we have a big focus on carrying out research into these systems and their impacts, particularly in the public sector and in how Governments can respond to them with regulation and other measures.

Thank you very much indeed for those introductions, and perhaps I can kick off this session with just a few questions. What could be the economic impacts for Wales of increased adoption of AI technology, and what could AI help us do better, in your view? Who wants to start? Professor Preece.

In a word, I would say 'productivity'. I feel very, very strongly that we are not talking about human replacement, at least not for the foreseeable future. It's really about what I'd prefer to call intelligence amplification, or intelligence augmentation, and a lot of that boils down to allowing people to do more with the resources that they have, rather than be put out of a job. So, yes, productivity.

09:35

Yes, the same answer from me. In fact, there is some data coming from McKinsey, for example. They've studied this aspect. And we know what productivity levels could be, or what increase in productivity levels could be expected. It's different for different sectors, of course. We need to consider the specifics here in Wales, and plan accordingly. But productivity is my answer as well. Yes.

Yes, I definitely echo that. It's not about replacement, it's about augmentation and potentially other changes to the structure of work. Some of those might be negative, and we can maybe touch on that later. I'd also echo that there are potential productivity benefits. In our research, which again has focused, in this regard, mostly on the public sector in the UK as a whole, we definitely heard optimism from public sector leaders and procurers about this—the opportunity to enhance tasks such as knowledge management, document analysis and so on. I would strike a note of caution about a lot of the estimates that we see in the media, for example, about cost savings or productivity benefits. I think some of the studies we see have flawed methodological approaches, and a lot of them are limited, as Rossi alluded to, by the lack of specificity about use cases. And I think above all we need to see this as a long-term challenge. It's not going to be an easy or quick win.

And, of course, AI is a broad term, but recently there's been an emphasis on generative AI. And, of course, you've just mentioned productivity. How does this form of AI help to increase productivity, and are there any other forms of AI that the committee should actually be aware of?

That could be a very long discussion. Certainly, some of us at this end of the table have been working in AI for many years. The hype around generative AI is certainly stifling activity in other areas of AI. I would say that we need a more diverse research and development ecosystem in this country so that we get the benefits from all AI. But the point about generative AI is that these models are astonishing in their capabilities. Their capabilities are not well understood. But, as we've heard, it's not just about content generation, it's also about finding information, extracting information from documents. These very old knowledge management problems of allowing people to get the right information—the information they need, in a timely way—are being addressed in a new fashion by these technologies. But we do have to be cautious, because I don't believe anyone really understands how these models are working.

Yes, so the current hype is around deep machine learning and generative AI, but in manufacturing, we see a lot of examples of AI being used for different processes, not only dealing with customers and analysing documents. Over the last 20 or 25 years, there have been a lot of improvements in predictive maintenance, for example, using data from sensors for condition monitoring, quality assurance, quality control, and also design optimisation and inspection. In fact, manufacturing companies know a lot about different forms of AI.

I've been working a lot in symbolic reasoning. In fact, I have to say that some scientists believe that the hype now connected with deep learning et cetera—this is not true AI, because these algorithms can learn very quickly. It's very impressive. They can complete your sentences, but they use statistics, rather than formal methods, and Alun here has a lot of experience in formal methods. AI is very strong when you can trace the reasoning. You can make the reasoning transparent, you can justify the decisions. I think what the current methods need is more transparency—when I say 'current methods', I mean generative AI. As Alun said, not many actually understand how these models work. And this is a problem for manufacturing, which I represent, and many other sectors, because businesses have to justify decisions. So, what we need is transparency, interpretability and explainability, and the hype associated with the latest wave does not provide answers for several processes, including decision making, for example. For safety-critical applications, this is essential.

09:40

Yes, I'd echo those comments on the need for greater diversity or for investment in different avenues of research, and also the need for greater transparency. I think, in particular, I'd like to speak to some of the distinctive features of the economy of generative AI, because this is a really concentrated market that we're seeing dominated both in terms of the development of the leading models and also supply chains by a small number of companies. And I think this has some quite significant implications.

Firstly, it could lead to a small number of companies potentially steering or homogenising the development of these technologies, which, as we've heard, is what's happening. But I think it also affects this transparency point, because information about these systems—about, for example, OpenAI's GPT-4, or the leading model, Claude from Anthropic—fundamental information about whether they're safe, whether they're effective for the purposes for which they're sold or provided to businesses, isn't really accessible. There's some limited testing that has been done by the UK AI Safety Institute, but it doesn't have a statutory underpinning. The UK Government has promised to bring that in; we're still waiting on that.

But, really, I think what we need to give businesses in manufacturing and other sectors, particularly those doing safety-critical work, is the assurance that these systems are actually fit for purpose—that there's a proper statutory regime for the pre-market testing, like we see in other safety-critical sectors, like pharmaceuticals, car manufacturing and so on.

Yes, and on that point, what other sectors are most likely to actually be impacted by the development and adoption of AI over the coming years, do you think?

Hartree Centre Cardiff Hub was set up a couple of years ago specifically to support small and medium-sized businesses. We do have sectors that we focus on, but we're open to SMEs in all sectors. But in particular, we're seeing terrific interest—creative, medtech, fintech, certainly security and cyber security, broadly defined, and companies working in, however you call it, the net-zero, sustainability and environment sector. I would struggle—I was thinking on my way in here—to come up with a sector where there aren't really interesting levels of engagement and ideas around this technology.

We tend to work with companies that have got past the playing-with-ChatGPT stage and are starting to think seriously—from their knowledge and know-how and what they've seen of the technology—about how they could use it to be more productive: smart, specific, targeted ideas across all these sectors. Now, I would struggle to name a sector that won't be impacted. And I am talking specifically about genAI before I even get started on all those other forms of AI. But that's where the interest is at the moment, and I think it's our job, really, to help these companies get something good out of genAI and, maybe, steer that technology a little tiny bit.

Yes, genAI can help in any process that interfaces with people—communication, written and oral communication—so, summarising content, generating customer-facing content, trying to help with customer operations. So, this is quite productive. In marketing and sales, we do expect improvements in those sectors, also software engineering, because the product development cycles will be accelerated, not only because genAI can produce code for you, or you can easily find the code you need, but also because it can quickly test your code, so the cycle will be accelerated. Research and development as well, but it's unrealistic to expect the same quick—or not so quick—achievements that have been seen in life sciences, for example, and the reason is, in life sciences, there is a concerted effort to provide open-access data, a really large database, and this explains the advances in genomics or protein structures, which are really impressive. In manufacturing, I fear that sharing of data is very problematic, so we need to work with companies and their data. I would say, always keep your data and algorithms in-house, otherwise the problem becomes one of security.

09:45

Okay. Before I bring in Luke Fletcher, I know Jenny Rathbone would like to come in on this point. Jenny.

I just wanted to follow up with Professor Setchi, because it seems to me that one of the big risks here is that it makes companies that deliver services even less likely to want to speak to a human being, because, at the moment, all my constituents are driven mad by services that never want to respond. It's always just 'press 1', or something like that, and it seems to me that this could be used as a golden excuse for never actually listening to what the consumer has to say about the particular problems that they may have and that won't fit into a box. So, how are we going to avoid it being even more alienating for people—just the ordinary citizen?

I share your concern completely. But I think the new type of technology is slightly better than the initial deployment of automated telephone services, if you remember that, 10 or 15 years ago. But I do understand. I think the approach should be, rather than automating the whole operation, that we can automate elements of the role, so that those elements and those functions quickly analyse the main concerns in what the customer says, for example, quickly bringing in additional resources, quickly reviewing the history of interactions with that customer. That information can be collated very quickly and this is how the person there can be supported. So, I think the technology should be helping the human being—

—the human talking with the customer. But, sometimes, I don't want to necessarily have a long discussion with a person who sometimes asks me about hobbies and wants to sell me something else. Sometimes, I actually want a very quick and focused answer. So, I think we need to reconsider how to develop more inclusive human-centred approaches, and help the employee, rather than completely eliminate that role. I think the roles will have to be decomposed, just redesigned, so that the person there has a higher level of satisfaction in what they do. So, there is an opportunity—

09:50

I'm very conscious of time, because, obviously, we've got quite a few other areas we want to cover, but, Matt Davies, you wanted to come in on this point.

Just very quickly on that, because I think it's really important, and it's something we definitely see in our research. There's a lot of interest in the use of, particularly, genAI systems for communications and engagement with the public. Also, in our public attitude research, we've seen this as a key concern the public have in terms of the reduction in human contact, in human oversight and human discretion in decision making. I think that last point is particularly acute, because the alienation and the psychological effects are a problem. But also, critically, these systems might not just be involved at the front end; they might also be involved in quite important decisions about customers or about service users. So, there are various proposals about ring-fencing certain business operations, to build on what Rossi was saying. But, critically, also, we have existing protections in UK law, through the UK general data protection regulation, which provides a right to human oversight and scrutiny of automated decisions. So, we think those need to be defended and, ideally, extended further for a world and economy in which we're going to see more widespread use of these practices.

Diolch, Gadeirydd. I'm really grateful for your time this morning—it's the evidence session that I've been looking forward to for some time now. Just touching on some of the evidence provided by Professor Setchi, you made reference to conversations that are happening within the scientific community at the moment around the potential for an AI Fukushima event. I think that made a number of us stand up. What do we mean exactly by that? What are the risks associated with AI that could result in an event like that?

Sorry, can you repeat—? Is it about Fukushima?

Okay. I'm just quoting one of the 2024 Nobel prize winners, Demis Hassabis. So, I was in the meeting—I was one of the attendees at that session—organised by the Royal Society. The whole meeting was primarily focused on life sciences, and this is what he said, so I felt that it was my duty to repeat it exactly, because it's very, very serious and we have to prepare for that.

What this may mean, well, I don't know. I think we need to go through that risk mitigation exercise and imagine the impossible—a world where the internet is down, and data is used not in the way it was supposed to be used, and algorithms give us misleading information. People can give us misleading information, and we don't have strong enough barriers for that, but we can prepare for that sort of event. I think we need to do this exercise, all of us, with social scientists, and imagine what this may look like. I think we have to do that due diligence. So, I don't have an answer; I'm just quoting someone who has larger access to information, I would say, someone from DeepMind.

So, it's essentially a cautionary warning for us when we look to develop this technology, and you're saying that it needs further investigation. Am I right in saying that, yes?

Obviously, there are perceptions among the wider public and concern among the wider public around the use of AI. I have a particular interest in the use of AI and how it might affect workers' rights and the wider workforce. When we consider, then, that sort of warning, is it fair to say that perhaps the technology isn't ready to be rolled out across sectors in a broader way? And this is open to all the panel as well. It would be really interesting just to understand your thoughts there, because I think there's a balance here, isn't there; it's about how we ensure that we're developing this technology so that we're getting the maximum potential from it, but, at the same time, guarding against some of those things that—[Inaudible.]

09:55

Yes. Thanks. So, firstly, I think there's something of a kind of irony or contradiction in the founder of DeepMind talking about this potential Fukushima event if AI is integrated into safety critical infrastructure, because this is precisely what companies like DeepMind are pushing for and pitching for. I also think it's really important that we bear in mind that we're not talking about some hypothetical event in the future; there are harms and risks occurring from AI all the time now. The Member talked about workers' rights. We can see all sorts of challenges associated with the degradation of work quality, algorithmic management and so on. We can see issues with—. I've talked already about the use of AI systems to make key decisions about whether someone can receive a loan, for example, or a mortgage, and the risk of bias and discrimination in that process. We can also think about the kind of systemic risks of mass misinformation on democratic institutions; I'm sure we've all seen that. And really critically, as well, I think it's important that we don't forget the harms in the supply chain. Particularly, the generative AI systems take huge amounts of computational power to train, and that's hugely costly in terms of energy and water, so I think that's something that we need to factor in when we look at Wales's or the UK's climate and environmental targets: is mass roll-out of these technologies compatible there?

Finally, as to whether these technologies are fit or mature enough for widespread roll-out, I do think that that's, ultimately, a sector-by-sector case. And the UK Government, both under the previous administration and now the Labour Government at Westminster, is taking a largely sectoral approach to regulating these systems. We think there's a lot of sense in that, because, as we've already heard, sectors across the economy are adopting these technologies. In some cases, there are really robust, empowered regulators with the proper systems to assess. Let's take medical technologies, for example. The Medicines and Healthcare products Regulatory Agency has been doing a lot of work to embed an understanding of AI into its processes. On the other hand, you have other sectors—education's a really good example—where there is very limited oversight or scrutiny. So, really, I'd say it's a case-by-case basis, and regulators across the board need to be empowered with greater resources and greater powers to request information from companies and properly scrutinise these systems.

So, I suppose that's how you would look to mitigate some of the risks around this: greater powers for regulators and so on. I think the perception, unfortunately, though, of regulators isn't a positive one—not necessarily within this particular field, but in other fields, I think, there's a wider perception from the public that, if we were to say that we mitigate these risks by putting regulators in place, that might not convince people that this is the way forward. I'm just thinking now about water, and the regulator of water or the regulator of energy—trust in those regulators at the moment is at an all-time low. So, I take the point that, if we have a regulator in place, they have to have the power, they have to have the teeth, to actually enforce the regulations put in place.

The key thing here is that we need to bring people with us. So, when we talk about some of these risks with AI, how do you see that affecting public trust? Because, if we don't bring the public with us, it's going to be very hard to roll out some of these things, I think. I think Jenny touched on it in her last question.

I think, fundamentally, we need better education, the broadest possible education, about what AI actually is, what it isn't, what it can do, what it can't do. It has to start at the earliest levels of education and go right up to lifelong learning, and that needs to be kept alive and refreshed, because a lot of these problems come from that lack of awareness, people accepting the science fiction hype, which has always been a problem specific to AI—the robots are coming, and all that kind of fantasy and fiction—and separating that from the reality of this technology, which, when you boil it down, looks very much like, I would say, what's been happening with big data and the cloud and these other digital transformations for years.

GDPR takes you some of the way. Now, I completely accept what was being said there about regulation. We all love GDPR and our privacy impact assessments and all of that sort of thing, but it's a solid approach to grappling with these issues and making people think about these issues. We see it slightly differently, I think, with the SMEs that we're working with, because they're small—they can be one person, sole trader, two people, they don't have HR departments, they don't have data offices. They still need to grapple with these issues, but they do so in a different way, which I don't think we fully understand. I don't think there's as much new, when you—. If you educate people fully as to what the technology is and what it isn't, there isn't as much new as people might think.

10:00

If I can add, educating people, yes, I agree completely with that, but addressing their concerns directly is what we need to do. I read very carefully the Office for National Statistics survey last year about public perceptions, and people are concerned about use of their personal data without consent, they're concerned about cyber crime, fake news. These are the top concerns. They're also concerned about a loss of human interaction, but they also mention regulation. So, there is this need for more regulation, but this needs to be done very carefully, and we need to start by addressing the biggest concerns. So, it's about education, but also showing how the measures, which, hopefully, will be introduced, directly address those concerns.

Diolch, Cadeirydd. I'll admit, I use AI. I use AI in my office already, and, exactly as Professor Preece has said, it hasn't replaced my staff, it's improved productivity. And I'm a big user of it—of ChatGPT and Gemini, as two examples, integrated with Canva for graphic design. But I know I'm still only scratching the surface of what AI can offer in terms of improved productivity. But how well placed are Wales and the UK to really benefit from these potential opportunities around AI, and quickly, so that we're at the forefront of this? Matt, I'll start with you.

Thanks. It's a really good question. I don't want to sound like I'm banging on about regulation and the challenges, but I really do think it comes down to that, because some of the use cases that you're talking about within your office—the supervised use of these systems for quite specific tasks—are relatively low risk, but, for a lot of businesses, as we've heard, there might be more risky or safety critical applications, so it's really key that we actually get the regulation right. To the point we've been touching on before, about is regulation enough, I think what I'd say, based on the research we've done into UK public attitudes to AI, is that people really want to see strong, independent regulation, and they also want better agency—

So, when you say that people want to see that, is that the lay person on the street, or is that those who are looking to implement AI into their business? Because if a lay person is talking about—. They just want to make sure their security is safe, that their private details aren't being leaked, but surely it's a high-end issue of making sure that—. Because a lot of people might not realise a company that they go into is using AI, because the end product is exactly the same; it's just the productivity arm of it that's improved.

Of course. I'd say the two things are linked, because, as we know, you need widespread public trust in these systems for businesses to legitimately use them in the knowledge that they're not going to lose customers or see a decline in service quality, for example. So, I think it's really important that those two things aren't seen as in tension. As I've mentioned, we can think of all sorts of other sectors where the products are used across the economy, they're considered drivers of innovation and growth. Again, pharmaceuticals is a really good example, where you actually have very strong both pre- and post-deployment regulation, to make sure that—. These are really complex technologies, and most people who use them—let's say you're taking a drug, or even a lot of professionals prescribing the use of a drug—they don't know what's actually going on under the hood, as it were, or all the details of how it was developed. But it's really important that we have that assurance.

10:05

Well, I think we are in a good position. For example, the 'Digital strategy for Wales', which was introduced four years ago, had a very positive influence, I think, on public services. And you not only improve the services per se, but you also educate the people using those services. So, I see that improvement deployed across the whole of Wales. This is a very positive way. Also, there is a lot of investment in Welsh universities, a lot of support structures, like my institute and your institutes, which are there directly to help. So, I think we are in a good position overall.

I'm going to be positive too. I use the word ‘ecosystem’ a lot, because it's got a lot of moving parts and it's really messy and chaotic and it's trying to hit a fast-moving target with AI at the moment. But, yes, where to start? We meet a lot of often quite small, start-upy companies with terrific ideas that need a little bit of guidance about where to go next—post being a ChatGPT power user, where next? So, a lot of really good ideas that can—. They'll be employing people. I hope they'll be employing some of our graduates, across the skills.

I mean, okay, I'll start with the university. So, we've got our data science academy in the last few years, turning out hundreds of graduates every year in data skills. Before that, we had the national software academy, which was doing computer science degree-level education, but aimed specifically at the needs of industry. Those two units work very closely together. Going down to further education, I was up in Y Coleg Merthyr yesterday at an event organised by the University of South Wales/Cardiff capital region, deliberately going out to the FE colleges: great workplace-based experience going on there.

Schools, it's hard, the curriculum is probably not agile enough. But we've got to be smart—like I said earlier, that education piece across all levels to make sure the workplace is ready for this.

Fab. So, sticking with you, Professor Preece, if I may, SMEs and the Hartree Centre SME Hubs, how are you supporting? What is it that you're helping with SMEs? And can you outline the steps that that takes? If someone comes to you, what do they get?

The Hartree national centre, which was set up some years ago in Daresbury, between Manchester and Liverpool, couldn't reach SMEs very far from there, so it set up the three regional hubs: we're one, there's one in the north-east, and there's one in Northern Ireland. Our specific mission is not about education. The national Hartree centre does EXPLAIN; that's the name of their programme. We do EXPLORE, which is the next step, which is bespoke, locally accessible support, under the minimal financial assistance subsidy rules. So, we're funded through UK Research and Innovation's Science and Technology Facilities Council. What we offer is initially a 12-hour assist, which is a deep conversation, maybe a demo, having understood where the company is at and where it believes it wants to go next, providing some advice, and then a one- to three-month sprint project. It's quite seed-type funding. It doesn't take them very far. After that, it's reliant on other programmes: so, Innovate UK would be one, knowledge transfer partnerships. It doesn't interface brilliantly well. There are sectors that have good next steps, but that's a little bit patchy.

So, just to explore that ever so slightly further, is that around, when we talk about AI, generative AI, or is that around automation of production lines or all encompassing?

We didn't set it up to be specific. Do you know, ChatGPT hadn't happened when we wrote the original proposal? We are responding to the interest in the sector and it's become about genAI, content generation, bots to automate—sorry—customer interaction, retrieval-augmented generation, pulling information out of documents, semantic search. This has all happened since we got the proposal accepted and this is where the interest is at the moment. Eighteen months from now, I hope the team, I believe the team, has a deep base of AI skills, because it'll probably all look different, but at the moment we're responding to the demand. 

10:10

That's interesting. And then just a final point, in terms of barriers: would regulation be seen as the barrier, as Matt outlined? Would all three of you agree with that?

A fintech SME is well used to working within the sector-specific regulations; ditto medtech, transport, manufacturing. They absolutely know more about that side of things than we do, plus the general data protection regulation and others. I don't want to sound complacent, but everyone is very well aware, of course, of the EU AI Act if that's their trading future, and of what the UK might evolve through the action plan. But there is awareness—they are watching, and they want their businesses to thrive, so they have to.

Diolch, Cadeirydd. We've heard an awful lot from Luke about the ethical and risk dimensions. Before I ask about legislation, is there anything else the panel wants to say about the ethical issues around AI and any potential high-level ethical problems? 

If I could just come back in on this point about what the solution is, because I really don't want to sound like regulation is the only answer. I mentioned our opinion research with the UK public. People want strong independent regulators but they also want agency. They want to be able to have a say over how these systems are used, how they affect their lives, and they want human oversight. And obviously, as we've heard—we've talked a bit about the GDPR—there are some rights providing this in law. But they're very costly and time-consuming to exercise.

We've commissioned independent legal research, which has shown that, actually, in a lot of sectors—. And finance, it's funny you mentioned, is one of the better sectors for exercising these rights, because there's an ombudsman. But largely, across the board, it's really difficult to exercise these rights in relation to AI. So, we'd like to see measures taken to actually empower people, whether that's exploring an AI ombudsman or looking at other ways to help people exercise their collective agency. I think you're hearing from TUC Cymru later on. The TUC and its sister bodies have been doing great work looking at how trade unions and other related bodies can help people exercise their rights in relation to AI in the workplace, and elsewhere. 

I think looking into those mechanisms as a complement to and part of the regulatory system is really important to empower people, and, speaking to the point before, ensure they're comfortable with these systems so that they can be rolled out comprehensively. 

One of the things that was raised during an evidence session I gave as a Senedd commissioner to the Finance Committee was the use of AI in politics, and the fact that fake accounts of meetings could be used and all the risks associated with that—that it could be used to present politicians as saying things they hadn't said. Is there any way that that can be regulated, given the wild west of social media? 

We're all going to be saying things we didn't say, which neatly brings me on to the UK Government's position and their AI action plan. Do you think the Welsh Government can dovetail with that? Is there anything particularly in it that is useful to a devolved legislature? 

I guess I can say something on this. We're still waiting to see what's exactly in the action plan, obviously. As I mentioned, the UK Government has said on a number of occasions it wants to bring in regulation on the developers of the leading foundation models, and we think this is an important step. As my other remarks have probably made clear, we don't think that's enough. There needs to be a more comprehensive framework, like we see in other sectors, in order to give people the right assurance. So, I think one role that devolved administrations can play is pushing for that and trying to complement it. 

I think one thing that the UK Government is looking to progress as well is this agenda of transparency in the use of AI in Government with the algorithmic transparency recording standard. We think this is really important, because, as I say, there's just not enough evidence about how these systems are being used in practice, how effective they are, what sorts of interventions are actually working and should be pursued further. We actually have much less information on that when it comes to AI than we would in other sorts of public sector interventions, precisely because there is, at present, no mandatory obligation to record and assess. So, progressing that—it's supposed to become mandatory but it's slow progress—will be really important. The Welsh Government has signed up to that; I think it could take a leading role there in actually helping to ensure that these records are actually kept. And that will directly benefit AI in the public sector, but it should also help to build up this body of evidence for how these systems are working that, ideally, the private sector will be well placed to build on as well.

10:15

I often come back to what is distinctive about Wales, about how we do things in Wales, and try to amplify—. We have a border with England, and trying to stop development—. Having one side of the border with one set of regulations and one side of the border with another set of regulations is problematic. But what's distinctive? And one of the things that has emerged recently as being distinctive is the Well-being of Future Generations (Wales) Act 2015 and the way Wales approaches that—that policies need to be mindful of impact on the future generations. And I think that's respected—that is seen as a strong position and one that could hopefully amplify our voice.

Another thing, of course, is the bilingual nature of the country. We have a lot of conversations around Welsh language technology. Some of our SMEs are in sectors where their services, their bots, or whatever, have to be delivered bilingually. So, we talk a lot with folks across the country, from the great unit up in Bangor, colleagues in Cardiff. That community, particularly the early career people who are grappling with that and understanding the state of the models, talking to other countries with similar—'issues' is not quite the right word, but in a similar situation, for example New Zealand and the Māori language. And just grappling with that and being on the front foot—that not every language is a majority language, and languages coexist and need to serve the people, across different—. Actually, it's culture and language. So, I think these are features that we have that could be used to amplify. I honestly don't know the extent to which the Welsh Government is involved in the UK action plan.

We'll be trying to work that out as a committee. Can I just come back on one thing? Have I got time?

On the global issues, you've got a divide now between the EU, which has an artificial intelligence Act, and then the incoming Trump administration, which is all about deregulation. How is that going to impact us in Wales? Because, obviously, as you've said, this is a global project, which crosses national boundaries.

One thing I'd say is that there may be this bifurcation, and then, obviously, the UK Government is kind of taking a middle path, but, actually, a lot of businesses, including SMEs—in Wales, and in the UK more broadly—will actually be looking very closely at the AI Act, as we've heard, because they might be selling into European markets, or other markets that might adopt similar standards. So, that's a potential risk for us, right? If our standards are too different, you've got this double compliance burden.

And then I think the other thing I'd say is the Trump administration stance is obviously really worrying—as it is on many areas. But we've seen a lot of progress in the last couple of years towards developing more robust standards for evaluating and testing these systems, again, particularly the high-end generative AI models that are being sold by these big Silicon Valley companies. The US has been working very closely in collaboration with the UK AI Safety Institute, and the US institute is now at risk. So, I think there's a potential challenge that this whole agenda will be set back, which is worrying.

Thank you. We talk a lot about things like food security and energy security, and behind all that, obviously, there is a huge role for AI. I'm just looking at the role of both Welsh and UK Governments in trying to protect the interests of these relatively small countries, in global terms, and particularly Wales, with 3 million plus citizens, and how we protect the engineers and the innovators of the future from just being swallowed up by these global interests, from which they can't break out. What would you want to see the Welsh and the UK Governments prioritising, to enable us to continue to innovate in Wales rather than just having to be a sideshow for somebody on the other side of the world?

10:20

That's such a good question. I think a form of intelligent sustainable innovation. I'm thinking now of some of the things that are happening in agriculture, agritech, which I don't think we hear as much about as we probably should, about the transformations that are going on there, but to be mindful that the farmers—. Well, we've heard quite a lot about the farmers recently. But taking everyone forward, and listening, talking, understanding, educating again, and just, yes—. Sorry, this sounds very wishy-washy, but I think that's what we've got to do.

Clearly, there are huge benefits in trying to ensure that soil health is maintained and improved, and that we use AI in science to analyse scans of individuals, where the machine is possibly more accurate than the human. But I think it's more about how do we produce a framework that will enable the innovators of the future to not have all their good work stolen, basically. Matt, do you want to go first, and then I'll come to Professor Setchi?

I'm happy to. I think, ultimately, this is about how we keep talent and IP that's produced here, here, and I do think it's a big challenge. The sorts of public interest use cases you're talking about, yes, some of them, the big companies do pay attention to, but for various reasons, they might think it's problematic to be directing that agenda. And I don't think it's purely just a risk of talent flight or of IP being bought up. We could think of companies being bought up wholesale as well. Famously, DeepMind was a British success story until it was then bought by Google. And we see a lot of this pattern, where SMEs and academic projects are driving innovation, which is then hoovered up by these big companies. I think more robust competition regulation, merger control, things like that, both from the Secretary of State at the UK level, and from the CMA, which has been doing great work and really ought to be supported in that, will help. But I think we also need to look at why, for example, people who are educated in fantastic Welsh and British academic institutions are then going to work in Silicon Valley and not staying here and working on those sorts of use cases. A lot of the time, it's because those big companies have the resources, they have the data that will be interesting to work with, they have access to huge amounts of compute. So, I think we really need to be thinking about how we can mobilise UK public resources to support researcher access to compute, researcher access to data. We have really interesting and valuable public sector data sets in the UK, so we need to be thinking about how we can make those accessible and usable for public interest projects.

My area is manufacturing, so I think a lot about public value manufacturing. One of the best examples is a business that develops software for Welsh translation, looking at databases and expanding the data sets, et cetera. But another good example is from farming, and it comes from India. In India, the Government has provided farmers with apps where they can have on those apps world-class knowledge about the local conditions, and they can be advised when to put seeds into the ground, when to harvest. Delivering that world-class knowledge to everyone is something very impressive, is something that can be actually accelerated by policy developers. So, public value manufacturing, for me, is a mechanism to bring together all the issues related to sustainability, prosperity or efficiency, but also well-being and how the society feels and lives their lives. So, in that, I see that bringing these aspects together is a good framework to focus on projects that make sense here for the people in Wales. If our innovators engage with long-term projects like that, I think there is a future for that concept and for bringing wealth to Wales and people staying in Wales. So, that would be my suggestion: a public value economy.

10:25

Okay. I'm not sure that Silicon Valley's going to want to steal our Welsh translation, because there isn't that much demand, except in Patagonia and places like that. But I think—. These are all really big concepts. Just trying to focus it down, what do you want to see the Welsh Government prioritise in its impending short-term review of AI and the economy? We're a small country, with a small Government, so what should we be looking for to make sure that they are doing what they are able to deliver on?

Yes. Business leaders, if they feel supported, if they have a good ecosystem in front of them, will be keen to work towards achieving their goals. I think what we need is practical help, practical advice, a place where they can go to talk about ideas. So, I think we don't need to reinvent new solutions; we can look at solutions that worked well in 3D printing. If you remember, 15 or 20 years ago, the hype was around 3D printing and how everything would be produced on 3D printers at home. Well, that didn't happen, but some of that innovative thinking translates now into new materials and new products, and this is good. But what we created at that time were those maker hubs, so people can go with their computer-aided designs and try their designs on 3D printers available for use. This is practical help, testing initial ideas at almost ideation stage.

Okay. But doesn't that help already exist at both the Hartree centre and the robotics research centre?

My centre in particular is for—. It was Objective 1 funding; it's for relatively early TRLs—technology readiness levels—fundamental research, we call it. I think business leaders need more practical help, but not just training in how to create a website—I hear a lot about these examples—but help as to how they can translate their ideas into something tangible to start with.

Just very, very quickly. I mentioned the ecosystem earlier. So, yes, there are elements like the Hartree centre who are looking to do 55 SMEs over our three years, which isn't many. I think we do have a good model as part of that ecosystem on the elements that Rossi's talked about. There are lots of good elements in the ecosystem: look for the gaps and join them up. Recently, I've been looking quite closely at Media Cymru. So, Media Cymru, serving the creative sector through their Strength in Places award, have joined up seed funding, development funding, scale-up funding, mirroring the pathways of companies. That seems to be a good model. There's a lot of money in it, but it seems well thought out and joined up.

I mentioned earlier the gap between where we leave a company—. If they're lucky, there's an Innovate UK programme that they can step on to. But, if there isn't, they're going into a very competitive pool to win Innovate UK funding. Then it's not often right for them to go down the KTP route. So, there are gaps, and I would say look for those gaps, ask companies—not us, ask companies—and try to make things a bit more joined up.

10:30

Very, very quickly, I think we need to separate this into two areas. There's the upstream development stuff, and that touches on a lot of what I was talking about earlier in terms of computing data and the national environment across the UK. I think, ultimately, the spending power you need to really address that and try and get more AI development here, in the ways that we want it, is something that has to be addressed at UK level, because it's too big.

I think, in terms of deployment and adoption, there's a lot that the Welsh Government could be focusing on. We've already talked—. I'd echo all of that. I think we've talked about the need for greater transparency across the public sector and modelling best practice there. And I would also say—. We've talked a bit about the future generations Act, thinking about other distinctively Welsh legislative contributions, like the Social Partnership and Public Procurement (Wales) Act 2023 as well, thinking about where AI fits into that picture and how that can help to bring about a more responsible paradigm for deployment.

Okay. So, just my last question, because I can see we're running out of time. You talk about non-regulatory tools. We're about to have a bus re-regulation Bill to enable us to manage the bus services to maximise social value, joining up the dots. So, is this something where AI is going to really help what is quite a complex picture? Because, obviously, bus routes—. I can just about envisage what we do in Cardiff, but, clearly, for the whole of Wales, it's a complex picture. How would we apply artificial intelligence to help us ensure that the public investment in buses is done to the best of social value?

I'd say that's something that needs to be decided on in consultation with service users, with other stakeholders, with workers, and I think vehicles like the social partnership approach can help to bring those voices in. It's not something that—. I've not looked at that piece of legislation.

No, I know you haven't, but it's really just how you'd use AI to—. Clearly, we have to talk to stakeholders, but, at the end of the day, it's about having that whole picture in your brain without it blowing up.

It's interdisciplinary, because there will be experts in business, sustainable transport, transportation. But, yes, algorithms people, mathematicians, computer scientists—I can put you in touch with a few people in maths and computer science in Cardiff who live and breathe that stuff.

Thank you, Jenny. I'm afraid time has beaten us, so our session has come to an end. Thank you very much indeed for being with us today. Obviously, your evidence will be very useful to us as a committee. A copy of today's transcript will be sent to you in due course, so, if there are any issues with that, then please let us know. But, once again, thank you for being with us today.

We'll now take a short break to prepare for the next session.

Gohiriwyd y cyfarfod rhwng 10:33 a 10:42.

The meeting adjourned between 10:33 and 10:42.

10:40
4. Deallusrwydd Artiffisial ac Economi Cymru - Panel 2 - Busnes
4. AI and the Welsh Economy - Panel 2 - Business

Wel, croeso nôl i gyfarfod o Bwyllgor yr Economi, Masnach a Materion Gwledig. Fe symudwn ni ymlaen nawr i eitem 4 ar ein hagenda. Dyma ail banel ymchwiliad un dydd y pwyllgor i ddeallusrwydd artiffisial ac economi Cymru. Gaf i groesawu ein tystion i'r sesiwn yma? Cyn ein bod ni yn symud yn syth i gwestiynau, gaf i ofyn iddyn nhw gyflwyno'u hunain i'r record? Efallai gallaf ddechrau gyda Klaire Tanner. 

Welcome back to this meeting of the Economy, Trade and Rural Affairs Committee. We will move now to item 4 on our agenda. This is the second panel of the committee's one-day inquiry into AI and the Welsh economy. And may I welcome our witnesses to this session? Before we move to questions, can I ask them to introduce themselves for the record? Perhaps I can start with Klaire Tanner. 

Helo. Fi yw Klaire Tanner, a dwi efo cwmni o'r enw CreuTech, sy'n dysgu pobl AI ac XR.

Hi. I'm Klaire Tanner, and I work with a company called CreuTech, which teaches people AI and XR. 

Bore da. Fi yw Felix Milbank. Dwi'n gweithio i'r Ffederasiwn Busnesau Bach yma yng Nghymru.

Good morning. I'm Felix Milbank. I work for the Federation of Small Businesses in Wales.

Bore da. I'm Paul Teather. I'm the chief executive of AMPLYFI.

Thank you very much indeed for those introductions. Perhaps I can kick off the session with a few questions. Now, we've received written evidence from the FSB and AMPLYFI, which highlights that businesses in Wales are already using AI technology. Could you describe for the committee what some of the uses are for us? Who would like to kick off with that? Paul?

Sorry, do you want me to go first?

Yes. So, I think it's not just everyone in Wales—everyone in the world is using AI now extensively, and there's some recent evidence—very recent evidence—that's been shared by McKinsey, which now shows that about 75 per cent of people around the world who are working in professional capacities are using AI in the workplace, in their personal lives, and in some cases both. So, I don't think that this is about how it's been adopted in Wales, I think it's actually how the world is changing, and I think we're now beyond the initial hype cycle of generative AI.

My personal view is that generative AI has catalysed the adoption of AI more widely across enterprise, initially at a relatively individual level. It's at maturity level 1, which is, essentially, where individuals are using it for personal productivity, and very high penetrations of that, across all age ranges. I think that what we're starting to see emerge is maturity level 2, which is where people are starting to adopt the technology into work flows to drive productivity into process, and that's delivered some immediate benefits, which we can talk more about. And I think we're starting to see maturity level 3 emerge, which is where systems get redesigned on the premise of what the technology can do in the future. And I think that is a worldwide phenomenon and it's happening just as materially in Wales as it's happening everywhere else.

10:45

Yes, well, I think, from our own data from our own research, we know that around 20 per cent of SMEs in the UK are currently using AI, either generative tools or other machine learning systems. And from our perspective, it actually provides a very exciting opportunity, and I think, as Paul alluded to, predominantly around productivity levels.

On an individual level myself, I tend to use generative AI most days, whether that's to help me skim over a 120-page document that I just need a quick brief on, so that I can go into a meeting and adequately inform colleagues on where some of the main points might be in a report, or whether that might be to quickly get some up-to-date information that, usually, might not be so easily signposted on web search engines such as Google.

So, I think, from our perspective, in knowing that SMEs make up the majority of businesses in Wales, it provides a very exciting opportunity for predominantly productivity and also potential growth. And for us, what we'd like to see is Welsh Government, in conjunction with the UK Government, looking at the opportunities a lot more acutely, like other developed countries have done, to ensure that our firms here in the UK, particularly in Wales, remain ahead of the curve or try and at least now keep up with the developments that other nations and other businesses have managed to go through in other countries.

I work with a lot of companies—I know a lot of companies in M-SParc and Tramshed at the moment are building, not only using AI to help their own workload, but they're actually building their own platforms, AI platforms, and that's something that they're building to be able to ship out globally as well. So, there's a lot of scope for economy when it comes to global economy coming out of Wales as well. And Bangor University—they're doing a lot with language as well when it comes to AI, so making sure that the Welsh language is kept up to date with this big AI revolution that's happening and making sure that our language doesn't fall behind in that as well.

And are there any particular sectors where the adoption of AI is greater, and why do you think that is?

It's probably more likely to be seen at the moment in financial services, probably with consultancies and tech innovation companies as well, for the very reason that it's able to fit in very much with the needs of tech firms. I know that we'll probably come on to business support later, but just amongst our own membership, we know that, particularly, professional services and manufacturing are where we've seen the largest amount of take-up, particularly around things such as research and development and ways to accelerate that further.

I think, from my point of view, what I've seen within my own workshops—I've seen a lot from the food and drink industry come to me for help with data analysing, so getting the information on how many people come in, what sort of times of year, things like that, and being able to better analyse the data. I'm seeing a lot from language as well. Fintech, quite a bit, is using AI; financial as well. Creative industries—there's a lot of—. There's a bit of a big boom at the moment with the creative industries in Wales that I've noticed, so a lot of them are using generative AI to help them develop ideas quicker. Yes, to be honest with you, when I'm putting on workshops, I'm seeing people from all sorts of walks of life wanting to use it.

Yes. And what are some of the benefits and some of the challenges that businesses are facing and experiencing as a result of using AI technology, do you think?

What I'm seeing is that there are not enough places where people can find out the right information. There are not enough places where people can learn how to use it and bring that into their businesses, but it's also making sure that they're finding the right places that are giving them the right information that is ethical in its use as well.

10:50

From my point of view, I come back to the fact that it's not particularly industry-specific who's using this technology. I'd say anyone who is working in a knowledge-intensive organisation or industry is probably using this technology already; anyone who works with a lot of knowledge capital is using this already. We're seeing that across all sectors. I think some industries are more progressive about adopting the technology into workflow, and I think you start to see that particularly with the professional services firms, where they have a huge amount of people costs, and the productivity benefits of doing this are substantial. So, our customers tell us that they've achieved north of 80 per cent productivity benefit by embedding our technology in their workflows. That's an extraordinary amount, and that can either deliver you greater margin or it can allow you to do more with the same capacity to avoid adding cost, and we're seeing both of those business cases apply.

I think there's also an opportunity to harvest a lot of information, synthesise it, and have it augmented for humans to consume in a way that leaves humans in the loop, but humans empowered by a lot more knowledge than they would otherwise have been able to be empowered by, and that's a recurring theme that we're seeing across our customers.

And I suppose just to echo that in some regard, I mean, again, just looking at the data here for our own members: 24 per cent of them say that they're currently adopting AI predominantly to improve products and services, with 40 per cent saying that they're adopting it in order to work more efficiently. But at the same time, we're also getting 37 per cent of our members who are saying that one of their biggest challenges is actually how bigger firms are able to adopt AI a lot more effectively and far quicker as well. So, we know that there is an unlevel playing field at the moment in the economy, with small firms less able to adopt the same technology that bigger firms are currently using.

I think if we are able to use it, to take a case example, at a very micro level—there are some coffee shops in the US at the moment who are using AI tools in order to determine how and when they need to restock their shelves, or how to restock, to bring in additional stock. Now, those tools can help with administrative tasks, and can be a huge cost-saving asset to businesses. But with SMEs in particular, we just know that that ability to commit to adopting those types of tools—they're not in the same space, whereas for a local high-street coffee shop, having your till, say, match up with stock inventory and using AI tools, then, to just automatically place an order when it's necessary, would be massively beneficial, and could potentially unlock additional growth by allowing that coffee shop, in particular, to focus on essentially those front-facing roles and getting customers through the door.

You hear it a lot coming from Aberystwyth as well when it comes to agriculture and farming, using AI image recognition on drones to be able to see certain things and pick up certain things that are going on in the fields as well. So, it really does go right through to everywhere.

Just building on a couple of those points, one of the things I'd also highlight, as well as the productivity benefits, is that we are now seeing margin benefits appear. Profits are increasing by more than 40 per cent in a number of organisations. I think there's a bit of a lag between the technology being adopted by organisations for productivity and effectiveness, and for that to impact the commercial model and the pricing, because as the customer organisations start to become aware of the gains that supplier organisations are making, I think we'll start to see that normalise a little bit, so that's one dynamic that I think the committee should be aware of.

The second thing I would say is, I talked earlier about the three different levels of maturity, and I talked about the fact that, right now, lots of organisations are applying AI to workflows. One of the limitations of that is the trustworthiness of the technology, so what we're seeing, and I echo the points of the panellists, is adoption in places like financial services. However, there is a lot of caution around the level of faithfulness that comes with the outputs of these models. Therefore, we're seeing organisations applying the technology to workflows where it doesn't matter if the answer is wrong, which is massively limiting. So, there's an addressable market constraint there. Until such time as you can trust the AI, you can't apply it to all workflows, you can only apply it to the ones where it doesn't matter if it gets it wrong.

10:55

Diolch, Cadeirydd. We've touched in this session already, and in our previous session, on the uptake by businesses of this technology, so if I could come to the FSB, just in the context of SMEs, in the evidence that you submitted to the committee in preparation for this session you mentioned that 20 per cent of SMEs are now using AI. What are some of the barriers that have been presented to these SMEs that have adopted that technology and how might we be able to move some of those barriers? Is, actually, the fact that only 20 per cent are using AI an indication that those barriers are quite significant as well?

I think there's a lack of knowledge about how to adopt some of the AI tools that are out there on the market, so education is very, very important, and there is a responsibility for our higher education and further education institutions to now look at what the demands on the economy are. So, we know that, particularly amongst FSB members, there's an interest in adopting AI tools, but we also know that there's a high level of digital illiteracy in Wales as well, so looking at how our education institutions can now meet the demand of the economy, meet the demand of our firms is really, really key. And that's something that Welsh Government needs to be looking at: where the economy is demanding additional digital literacy skills, and oracy, perhaps, and aid in adopting AI tools, educational institutions should respond to that and also predict potential trends in the future.

The one thing that we know at the moment is that—again, we'll come on to business support later—in terms of where AI tools are currently being used in the Welsh economy, we're still quite far behind other small nations, and that's a huge disadvantage to us. I think many of us here would have seen the article in The Economist yesterday that highlighted some of the challenges that are currently presented for public services here in Wales, particularly with educational standards. It's important now that Government acts responsibly, looks at the potential that's in the Welsh economy and ensures that our future generations are well equipped and in a better position to then go into firms of all sizes to deliver on the growth that our nation needs.

Before you continue, Luke, I know that Jenny wants to come in on this very point.

I just wanted to leap in on this, the digital skills that we're giving our young people. So, the new curriculum is, obviously, focusing on literacy, numeracy and IT literacy, but it hasn't yet reached the people who are current school leavers. Have you any insight into whether that new curriculum is going to deliver the things that you said we're behind on so that we can be more competitive?

I would need to come back to you on the specifics on that—

So, if we compare, say, Wales's educational system with Denmark's, Denmark started its digital strategy back in 2006, and whether you're a private citizen or a business, you're now able to, say, file your tax returns or file your business rate returns through digital services. The—

Fine, okay. I've got that. So, do the Programme for International Student Assessment tests enable us to see where we need to get to, because Estonia is the best-performing European country?

I would need to come back to you on that.

Sorry, Luke, we're having problems hearing you at the moment. Can you try and repeat that again?

I think you mentioned earlier, Felix, that Wales is behind other nations in the adoption of AI within SMEs. Where roughly do we rank, just out of interest?

I'd have to come back to you on that. I wouldn't know off the top of my head.

11:00

Okay. That was just a curiosity of mine. We've discussed some of the risks in our previous session and some of the perceptions around that. From what you're hearing from your members who've adopted AI, do you think that it's reliable enough to be rolled out at scale and pace? Or do your members at least think that it's ready to be rolled out in that way?

I think there are simple AI tools that firms can start to adopt already, things like Copilot. There are already tried-and-tested tools out on the market that can be used by small firms. In our evidence, we referred to Northeastern University, which has been developing AI tools to help freelancers in the US better track their admin and business tasks. The tools that have been developed by the university are being piloted in the economy, and that's the kind of activity that we need to see particularly in Wales. If we know that there's growth potential for SMEs to adopt AI tools, well, how can we make sure that, where the Welsh Government are saying that digital is a priority, because they have a digital strategy, they're also working with the private sector to make sure that digital tools, AI tools, are being tested in the economy, and predominantly with the firms that are going to benefit the most from it. Or, given the very nature of the Welsh economy, small firms make up the majority of businesses in Wales, so let's try it there. I think there's a bit of a mismatch between what Government is saying is a priority and how they're focusing on the Welsh economy, in particular.

In terms of adoption, I can talk from a personal point of view. I would say that 100 per cent of the people in AMPLYFI are using generative AI in our everyday work, and we have been doing so for probably 18 months now. And what I would also say is that software engineers are using it as a matter of course, so, just like when you type an e-mail in Gmail and it prompts you with the opportunity to tab to the end, and it automatically builds the rest of the paragraph for you, the same thing is happening in software engineering. It's increasing the productivity of our team by 40 to 50 per cent, which is extraordinary. Also, you've now got these tools embedded in things like Google Docs and Microsoft Word, so you don't need to go and ask, it will prompt you. You can ask Google Docs to rewrite a section of your document and it will do it for you and you can then accept it or decline it. I find it hard to believe that we haven't got much wider adoption than is perhaps visible at the moment within these organisations. I can't speak to the small organisations, but many large organisations are buying these technologies as a matter of course, and most large enterprises have an enterprise GPT or equivalent subscription for all of their knowledge workers. I think smaller organisations are perhaps leaving it to individual professionals to choose to pay and expense that back, or pay and absorb the cost themselves. Within AMPLYFI, we're making that part of the working infrastructure that we provide all of our people.

From my personal view, I was working in creative technology for about 14 years for other companies, and last year, I went freelance. I wouldn't have been able to have done it at the rate that I did—gone freelance for a year, tested the market and then been able to build a business for myself—if it wasn't for the AI tools I was using. I'm not very good at writing, I'm not very good at articulating myself, but I'm good at my craft. But with ChatGPT and things like that, it meant that I could build my website within two weeks, rather than two months. If I was going to write that myself then that's probably how long it would've taken me. And from that, I did end up in that first year freelancing where I just put all my skill sets out there, but then, what people were interested in hearing was when I was talking about how I was using AI to help me, and that's when I started getting interest from companies and from places like Business Wales. That's when I got the interest from them to show my methods and what I was doing to help me personally, and so I do see a lot of uptake in AI from other companies wanting to know how to do that. So, starting in January, I've got somebody starting to do five hours a week for me. It's somebody who does admin. I haven't gone down the route of trying to use AI to do all of my admin for me. I don't feel like AI itself could do all that admin for me, but I've got somebody who knows admin and who is a professional in admin who is now using AI to help their work flow in a better way. So, I feel like it does empower people and fills those gaps that you need as a one-man band or a small business.

11:05

Thank you. That was very helpful. I'll hand back to you now, Chair, because of time. 

Can I direct my question to AMPLYFI and particularly the AI supply chain and the infrastructure required in Wales to support it? Can you make some comments on that? 

Absolutely. The first thing I'd say is that we're around this table talking about AI as a new and exciting and potentially threatening and disruptive thing, which is absolutely true, but I'd also say that we don't talk about the technology that sits on our laptop or in the technology that we're using to communicate today. Already, AI has been adopted into the technology infrastructure. It is just infrastructure; everybody is using it. When I go into my office building at One Central, when my pass reaches the scanner it knows what floor I'm going to and has organised the lifts to make sure that the most appropriate lift is next to come and pick me up. This is just the way we live. It's in everything we do, you just don't see it. And so, the first thing I'd say is AI is embedded and it's everywhere.

The second thing I'd say is that there are multiple layers of capability that are required to bring all this stuff into the hands of people in a productive and scalable and secure way, and there's a combination of things required. The first is you need the infrastructure. The infrastructure includes data centre capabilities, networking, server, storage. The kind of technologies that AI runs on are really resource intensive, so you need specialised chips, you need graphics processing units. GPUs are very efficient, but they get very hot and they're very power hungry, so you need a lot of that capability. You've probably read in the press that there have been significant challenges with the supply chain globally for GPU technologies and people have been paying an incredible premium to get these things.

Huge investments are being made by the US Government in particular to build onshore manufacturing capability for really high-end GPUs, because they are very dependent, as is most of the world, on Taiwan at the moment. Other parts of the world are making similar investments and investments are being made here in the advanced semiconductor industry in Wales. For me, that's a really foundational part of the supply chain. We have to, I think, from a national security perspective, invest heavily in building GPU capability, and I think that's something that the Welsh Government should look at in terms of the next step with the semiconductor strategy. That's only one component—the infrastructure. 

I would say another piece of the infrastructure that I would like the Welsh Government, perhaps, in co-ordination with the UK Government, to look at is foundational models. Today, we use extensive sets of large language models. We have quite a unique proposition, where we have our own proprietary traditional machine learning capability, which is basically maths. You transform information through data science and we blend that with generative capability and we've become craftspeople around how we use the outputs from the mathematical transformation of data—data science—into research outputs for our customers using generative technologies. And we use a multitude of generative technologies in how we do that and we've become craftspeople at how do we prompt them, how do we control them, how do we make sure that the answers they give back are accurate, how do we fact check everything that we do so that there's no hallucination. 

But all of that requires an infrastructure stack that is really powerful. There are not that many companies in the world that can provide that. The likes of Amazon, Google and Microsoft are the principal ones. I think, if you can find a way to build some core data centre infrastructure, if you can find a way to equip the ecosystem, the supply chain, with advanced GPU semiconductors, if you could find a way to partner with some hyperscalers to build the server storage and the application layer that organisations like ours consume from, and you can find a way to create some independence around the foundational models, I think you can have something that's incredibly powerful and unique in the market today. I don't know of any other region in the world that's got that kind of fully integrated value chain perspective around AI. 

The reason why I think foundational models are so important is that you're probably all aware of, as it's often called, the 'AI arms race', which China and the US have embarked on. Europe is, at the moment, standing on the sidelines wondering how heavily we should regulate, but regulating one tenth of the research and development around this stuff is not regulating anything, right? The reality is that the US invests 20 times more than we do and China invests something like 10 to 15 times more than we do. One of the things that they're investing in is this foundational model capability. Why are they investing in foundational models? It's not so they can complete a sentence in an e-mail for you, it's because, actually, the more broadly and deeply capable you make large language models, the more proximate you make artificial general intelligence.

So, we're now at a place—. You've probably heard of a concept called agentic AI, which is where you have multiple large language models, all operating in a process asynchronously and independently from each other, governed by an orchestrator agent. At the moment, that exists, that technology exists. We use it in some of what we do, and at the moment, those things are hardcoded, so those agents operate up and down a process as defined by a human. Within, I would imagine, 18 months, we'll be at a place where the concept of a meta agent will have arrived, which has the ability to reprogramme all the agents in the system and redefine the process that they operate in. At the moment, if you accept the evidence that says a large language model is human capable in specific domains but not across all domains, but you can build a set of large language models that operates under an intelligent agent that can design the systems and processes that they run within, then what you have is artificial general intelligence. At that point, that's system redesign for the world.

I don't want to go all Elon Musk on this, but it is extremely exciting, and it is extremely dangerous. And you've got the US and China running full tilt towards this solution, this outcome. Why? Because whoever gets there first will define the geopolitics of our planet and possibly other planets over the next 100 years. In the UK and in Wales, we don't have any foundational model capability—none. Germany has it, France has it, the US and China have it. We're not doing anything, and the UK Government needs to really look at this quite hard in terms of critical national infrastructure, and I think the Welsh Government have a role to play in bringing that to their attention. 

11:10

In Wales, we're great when it comes to sustainable energies, and the infrastructure that you're talking about requires a lot of power. I feel like we can utilise our sustainable energies here. And with the landscape, we've got access to the seas, to the waters, when it comes to cooling those machines down as well. 

That's actually part of our proposal—a fully sustainable supply chain for AI. I think you have the opportunity to put some of this infrastructure in north Wales, and take advantage of the natural landscape, the wind, the waves. The Norwegian Government are doing a lot of this. There's a huge amount of investment in AI infrastructure. They did it initially for the crypto boom; they're now doing it for the AI boom. These things are being built in the north of Norway, close to the source of the energy, which in their case is hydro. The by-product of the power requirements of these really dense GPUs is heat, and they use that heat to heat homes. It's a beautiful system, and there's no reason why Wales can't take advantage of the similar geographic benefits that we have. 

11:15

If I can maybe just come in on top of that.

We currently have the UK Government's industrial strategy. This really ought to be the perfect opportunity for UK Government and Welsh Government to come together to look at the opportunities that do exist in Wales. One of the very frustrating things we have experienced over the past few months is the lack of clarity, really, on where Wales will play a role in that. There's an awful lot of nice language, and Ministers are making very nice gestures, but there's no detail: there's no detail on the mission for growth, there's no detail on exactly which industries will benefit most from the industrial strategy here in Wales, and there's no detail regarding whether major bits of national infrastructure will be committed to Wales to potentially support this, as both Paul and Klaire have suggested. And this is really going to hamper Wales's ability to stand out as a very innovative nation over the next 10 years.

If I just come back to where Estonia and Denmark sit in particular, and I suppose Norway: yes, they are independent sovereign nations, but, at the same time, with the levers that we have with devolved governance, there's no reason why we can't be going out onto the international stage and trying to sell Wales, as best as possible, as a place for inward investment. There is an argument to be had here on the need to have larger companies, international companies, starting to headquarter themselves in Wales, to help build up our ecosystem. This is not really a question of big business versus small business: the FSB is clear on the need for SMEs in Wales to play a very significant role in economic growth, but there are benefits that come with large companies headquartering themselves here, and that comes with capital inflow, and that also comes with research and development. And building up that broader ecosystem will help to sell Wales as the place to do that investment, and it will help, hopefully—. It should really be convincing UK Government that Wales is the best place to be doing it. 

I just want to go back to what was being said earlier about the need for humans to be empowered and being in the loop on all this. So, just taking the example of the Brixometer as a way of reading the nutrient density of food. So, when you're going shopping, 'Should I buy this one, the cheap one, or should I buy the more expensive one, which actually is more nourishing?' With Brixometers, you can buy them quite cheaply, or you can pay a lot of money for a better one. Is this the sort of thing that the food and drink industry that you're involved in is talking about? Because you can see the benefits for the agricultural industry to produce nutrient-dense food, rather than the rubbish that is dished up at the moment, in order to maximise profit.

Yes, that's putting the power in the people's hands, isn't it, to be able to make those informed decisions themselves, and I think things like that are great. It impacts mental health, how we eat—it does. It affects a lot. If we have a happy nation, then we have a productive nation.

Totally. So, what's the role of Government? Should Government be giving all citizens a Brixometer and telling them to look after their bodies better, and their minds?

From what I see, there's already a push to help people eat better, and things like this.

But, yes, with something like that, that's putting power in the people's hands, isn't it, to make the decisions themselves. I think that things like that would help, and, again, it would help if people in this country were to develop the AI tools that could do those sorts of things as well. 

Just on your question about the role of Government, we currently have the 'Digital strategy for Wales', and within that there will be a natural discussion around AI. I think Government have a responsibility to start really implementing the most elementary types of digital platforms, and that's not happening. I think there are challenges in terms of the governance structure in Wales, in particular about how digital strategy can be equally rolled out across the country, and how particularly elements of procurement can be improved within that strategy.

But I think if I just come back maybe to food in particular and the opportunities that sit there, in Pembrokeshire we have Câr-y-Môr, a fantastic small business. They've been doing some fantastic work in terms of better understanding the role that seaweed will play in our food chain and food security. In a recent meeting with the UK Government, I emphasised the need to invest in small firms like this that are doing some fantastic R&D work, but actually where there's also scope to better improve access to AI tools to just do data analysis and conversion. Because when our firms have got that information to hand, they can start to go to the international markets and they can better pitch themselves and they can better sell their products.

That's the same thing with Welsh food and drink. Welsh food and drink exports last year were worth almost £1 billion. So there's something to be said on how data analysis through AI can not only help us better understand our supply chains, how our food is grown, how our farmers are coping in particular, but also how we can actually use it as a product to generate wealth here back at home.

11:20

So, when you say that the Welsh Government's not even implementing the most basic platforms, could you use an example—you know, sticking with Câr-y-Môr or whatever? What do you mean? Because I'm afraid I'm not a scientist.

If you want to access your medical records, it's only really through COVID-19 that we actually saw the UK Government in particular excel in terms of building that digital platform, so that people could access their medical records, book appointments, book vaccinations. Since the pandemic, I would struggle to tell you where public services have looked to take the digitalisation of public services any further.

It's very slow. Very, very slow. And again, when you take the language of Government at the moment, it's very nice, it's very enthusiastic, but really what's going on on the ground? We're years behind most mature countries in Europe.

Okay. That's very useful, thank you very much. You've clearly indicated—. Obviously, Paul, you were saying something, if you like, much more strategic, which is around our ability to not be just subject to either the US or Taiwan or China in having systems that will enable us to do our own thing if we don't like what any of those people are offering us. So, what needs to happen there? We've got this semiconductor centre up in Cardiff Gate that's just getting going, but if they haven't got enough energy to power the system, and we're not using combined heat and power, then obviously we're nuts. What needs to happen to ensure that we're not going to be completely just the serfs who are supplicating with some other big operator somewhere else in the globe? 

In all honesty, we're doing a lot of things, we're just not doing them in a joined-up and strategic way. Investing in a semiconductor industry: absolutely the right thing to do. I'm not close enough to it to know what the strategy for that is and where that goes, but I argue strongly it should be towards GPU manufacturing and more advanced capability. If I look at the other areas of the supply chain that I've talked about, we have invested heavily in a digital technology ecosystem through the Development Bank of Wales and through the Cardiff capital region innovation investment capital funding processes. Actually, the data shows that Cardiff and the capital region is outperforming London at the moment in terms of rate of addition of new technology jobs and rate of creation of new technology companies. It's actually one of the fastest growing parts of the country and possibly the fastest growing part of the country right now, from a low base. I do believe there's some data centre investment going on already, but I don't think any of this stuff is joined up, and when I say 'joined up', I mean a holistic Government-led strategy. I think there's a missing piece around foundational models, which I don't think Wales should try and do on its own—I think there's a part for the UK Government to play in terms of critical national infrastructure.

But I would love to see—. There are essentially three pillars that any sector needs to flourish. The first is access to talent, the second is access to capital, and the third is access to market. So, I would like to see, across the AI supply chain that we've articulated, a clear strategy for each layer separately, which is: what is the access to talent, what is the access to funding, and what is the access to market? And I think that we can make Wales a really attractive place for talented people to come. We can build better, even closer working relationships with the prestigious academic institutions here. We can co-create R&D activities, much like the Cambridge technology ecosystem does with the University of Cambridge. We can do those things, that is within our gift.

We have, I think, done a great job of unlocking early stage capital for new companies. I'd argue that maybe we're a little bit too democratic in who gets what, but I think that's a function of Government; you can't—. But I don't think all of these opportunities are created equal, and I'd really like to see us have a high-potential stream within the application community, which we form part of, that says, 'These are the things that could have an overweight impact on the long-term viability and success of the Welsh economy, and we want to put them through a special track that gives them certain advantages', which might include help with export opportunities, and which might include introductions to international growth investors who can bring them from seed into scale-up growth.

And then the third thing, which I think is probably the most critical thing, is one of the limitations that I have found—. I'm not Welsh, but I'm really proud that I've been down here investing not just in AMPLYFI; I've got a portfolio of companies that I've invested in. It's my own capital; I spend a lot of my week here. I spend the vast majority of my working life working on these businesses to try and make them successful. And the big frustration I've always had is with all of these point activities and investments, they're all the right things, but they're not joined up and they're not being brought to bear on the most high-potential opportunities at the point at which they need them. And none of it involves support accessing the market.

What any growth stage or seed-stage company needs is proximity to its customers. It needs short feedback loops to get better, to understand the value it's delivering and how to improve that value. And if we could find a way—and this is also in my supply chain proposal—for us to build a consumption mechanism around the entire supply chain here in Wales, so that there's some kind of incentivisation for Welsh businesses, both public and private sector, to consume from the Welsh AI supply chain, I think that will give us a lot more opportunity to iterate rapidly early in growth-stage businesses, so that they can understand, sharpen their edge, understand the business case for what they're doing, and have a much more flourishing ecosystem.

I know I've said a lot and I apologise, but if I summarise it all, it's that we're doing most of the right things, but doing them, in my opinion, in a way that feels disjointed and not strategic. Doing those things in a more joined-up way and applying them to the highest potential opportunities to have the biggest outcome for the economy and society in Wales over the next two, three generations is what I'd like to see, and I think that's within our gift. And I think there's an advantage that Wales has with its devolved Government, where it can do some of this stuff without, I think, the bureaucracy that necessarily comes with dealing with Westminster.

11:25

You mentioned the public sector. So, we make policy and it's all delivered by local authorities, in the main. So, Warm Wales, a social enterprise, has developed the AI ability to target the worst homes in terms of fuel poverty in any local authority. Only five or six of the 22 are actually engaging in it, and they're being invited to get involved for peanuts, frankly, and they're not doing it. And you can see the benefits—the huge benefits—in terms of both more productivity, better health et cetera. How are we going to change that? How is Welsh Government going to say, 'Wake up, guys. You can make your community much wealthier by doing this'?

11:30

I think there is a divide as well, where you say, yes, the Cardiff capital region is doing really well, but I'm from north Wales, and I volunteer for a youth homeless charity, and I see the struggles that those young people have trying to find work up there. There are small pockets of people, and now, thanks to M-SParc, there are more companies coming together, so they're finding stuff, but it's still really hard. I come down here every week to work, because this is where the work is, even though I live in the north. Travel costs—well, it's about 200 quid a week; that's not including the accommodation for me to stay here. And those young people that I volunteer with, they don't have that kind of money to be able to do this. I don't know, maybe it's transport, you know, lowering the transport costs, more subsidies with the transport, so people—. Because if it is the Cardiff capital region that is doing well and the rest of Wales is in poverty, one of the things we need to make sure of as well is—. I mean, AI is going to bring a lot of wealth to the economy, but it's important that it doesn't just benefit one section; the wealth needs to spread throughout the economy.

Sure. We need a just transition. But we do need all our local authorities involved and focusing on this. So, how are we going to—? Have you got anything further you want to say on this?

There's a need to incentivise local authorities, and this comes back to the comment I made earlier about there being maybe a weakness in the governance structure in Wales in particular. It's often very, very complex, and there's a lack of direction. So, if you're telling us here now that there are local authorities not taking up this pilot or this project, well, then, clearly there's a disconnect between Welsh Government and its ability to effectively bring the local authorities into line with the agenda. You know, the private sector cannot be blamed for that, obviously. There is a role for Government to, for want of a better word, get its house in order, to make sure that, where there are strategies, they are effectively being implemented across Wales and that they are effectively working with the local authorities to do that.

Diolch, Gadeirydd. We took evidence this morning from another panel, and in one of the professors' opening remarks, he said he does not see it as a replacement for humans but, rather, as augmentation. I was just wondering if you would agree with that, in the short term, knowing that, obviously, the future is far more difficult to predict. Paul.

I'm probably not the right person to ask, given what I do for a living.

That's why you came to me first, yes. [Laughter.] So, I'm not in the Elon Musk category of, 'In the future, no-one will work unless they want to.' I am very much in the category right now that it is augmentation, for a bunch of reasons. If I come back to what I said in my opening remarks, around the three levels of maturity, I personally believe we're through level 1 and we're into level 2. Level 2 is about the application of AI to workflows to improve productivity, and at the moment that is resulting in people being able to produce more, with the same resource, and it's also resulting in people being able to make more money by producing the same with less resource. So, we're seeing those two dynamics.

I think the commercial models will evolve, and people will become more aware and understand, and I think there'll be a levelling of that playing field. But I think there's a reason why the applications of our technology are quite limited. I mean, there's, again, a report, which is linked to our submission, from McKinsey, which shows we're now in a position where something in the region of 72 per cent of organisations are using AI in at least one workflow. But, interestingly, almost 10 per cent of companies are using it in five or more, but they're using it in workflows where it's controlled by a human, the output is governed by a human, and the reason they're doing that, frankly, is because you can't necessarily trust the outputs. Some of that is real, and some of that is, I think, paranoia. 

11:35

Yes. Look, I work in this industry, and I think that it's a healthy paranoia. I think we should be concerned about some of these things. I personally don't think you can regulate this, but I think that we can educate people, and we have to do a better job of that than we did with the social media revolution. But my strong feeling is that we will, probably in the next three to five years, be in a place where the trustworthiness will have been solved. We've solved it for our use case; we haven't solved it for all use cases, but I think it will be solved. I think models will be so broadly as well as deeply capable that it won't just be in one specific segment of human knowledge that we achieve a human level of capability; within three to five years, models will be at human level or better across a broad spectrum, which is basically the definition of artificial general intelligence. So, if you solve the trustworthiness and you solve the capability, and you've educated people and removed the paranoia, I think it is inevitable that AI will operate independently and autonomously within certain use cases. It's already capable of doing that in some use cases, but we just don't switch it on.

So, I use AI in my office quite regularly, and I can think of at least three or four work streams where the human input is at either end. So, the human starts the process, brings AI in, and then the final product is proofread—let's call it that—by a human, through that paranoia. The process itself, that productivity gain, is managed by a human at every element. So, I see that quite clearly in my office. It's just interesting, then, taking it further in terms of training—and I know Felix and Jenny talked a lot about skills and training around this—that the feedback I'm getting from users of AI is that there's a new set of skills around prompting AI, and that's the terminology they use, prompting. The skill development will now be in what you ask AI to do, or how you ask AI to do something, rather than in what it is doing. Is that a fair simplification of where we are?

We call that prompt engineering, and we've been doing that for over two years. I think where you'll get to and what you'll find if you use our application is that we do that for the user. So, you can prompt it yourself, but we give you options. We know who you are, we know what your job is, we know what you're interested in, we know the outputs you're trying to create, and we can tell you what we think you should know, and we can invite you to ask these questions and get these outputs. So, absolutely, the way you prompt this stuff is absolutely critical to the output you get, not only in the quality and relevance, but also in the accuracy, and there are lots of really sophisticated ways that you can do that. But I think it's less about—. I think, within a short space of time, it will be less about everybody being educated to be really good at prompting; it will be much more like Google search. It will be just much more natural, and the machine will understand all of the context to be able to do the prompt engineering for you without you seeing it.
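The behind-the-scenes prompt construction described here—where the system injects what it knows about the user so the user needs no prompting skills themselves—can be sketched in a few lines. This is a hypothetical illustration only; the profile fields and wording are invented, not AMPLYFI's implementation:

```python
def build_prompt(question, user_profile):
    """Assemble a context-rich prompt on the user's behalf, so the
    user doesn't need any prompt-engineering expertise themselves."""
    context = [
        f"The user is a {user_profile['job_title']}.",
        f"They are interested in: {', '.join(user_profile['interests'])}.",
        f"Format the answer as {user_profile['output_format']}.",
    ]
    return "\n".join(context) + f"\n\nQuestion: {question}"

# Hypothetical profile, echoing the market-intelligence example
# given elsewhere in this session.
profile = {
    "job_title": "market intelligence analyst",
    "interests": ["sportswear", "supply chains"],
    "output_format": "a short briefing note",
}
print(build_prompt("What changed in the European market this quarter?", profile))
```

Here the 'prompt engineering' lives in the template rather than with the user: the user just asks a question, and the assembled context does the rest, which is the sense in which prompting becomes invisible, like a search box.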

So, the inclusion of the individual, then, the development when the paranoia has moved away, as you've explained, Paul: in what fields, then, do you see that AI just working autonomously without human interaction?

So, there are fields where it's already doing that. It does that in certain areas of defence, although I don't think it gets switched on, for obvious reasons. I'm chair of the board of an AI company that operates in the insurtech space, and we are already automating insurance claims. So, we are automating millions of claims a day. These are usually high volume, low value—things like medical claims, where the machine can take the claim, extract all the information, assess whether it's covered, based on reading your policy—not a standard single policy, but your policy—make a determination of coverage, and will then check to see, if it's a health claim, for example, whether the prescription that you're asking to be paid for is valid against your health conditions. And it can do all of that in a fraction of a second, and it will green-light you for payment without a human touching it.

Now, there are other health processes that we run, which are more augmentation, because if it’s a complex motor claim, or a property claim, then there are multiple asynchronous processes running, and you still need the human at the end of that process, to validate and check and say, 'Yes, it's a big cheque, and I want to make sure that this is all—'. So, that’s an example where humans are involved, typically; sometimes at the front, always at the end. But now the human is being served with all of the information that the machine has used to make its decision. The machine has given a recommendation, a confidence score, around its decision, and highlighted to the human where it’s not sure, and a human can—. So, there are massive productivity benefits, but, my honest belief, within three years—fully automated.
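The routing logic described here—auto-paying high-confidence, low-value claims and queueing everything else for a human, with the machine's recommendation, confidence score and evidence attached—can be sketched as follows. The threshold, the claim fields and the `toy_assess` stand-in are all invented for illustration; the real system's rules are not public:

```python
def toy_assess(claim):
    """Hypothetical model output: (decision, confidence, evidence).
    A real system would read the policy and the claim documents."""
    if claim["type"] == "prescription" and claim["value"] <= 200:
        return "covered", 0.99, ["policy clause matched", "prescription valid"]
    return "covered", 0.70, ["complex claim: multiple documents involved"]

def route_claim(claim, assess, threshold=0.95, max_auto_value=500):
    """Auto-approve only high-confidence, low-value claims; everything
    else goes to a human with the machine's reasoning attached."""
    decision, confidence, evidence = assess(claim)
    if decision == "covered" and confidence >= threshold and claim["value"] <= max_auto_value:
        return {"action": "auto_pay", "evidence": evidence}
    return {
        "action": "human_review",
        "recommendation": decision,
        "confidence": confidence,  # highlights where the machine is unsure
        "evidence": evidence,
    }

print(route_claim({"type": "prescription", "value": 120}, toy_assess))
print(route_claim({"type": "motor", "value": 9000}, toy_assess))
```

Moving to 'fully automated' in this sketch is just a matter of raising `max_auto_value` and lowering `threshold`—which is why the witness can say the capability already exists but is simply not switched on.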

11:40

And do you think—? Sorry, I'm conscious of the time, so this is my final question, just to you, Paul. Do you think, then, in the scheme of this, that a lot of people will notice a difference themselves, the user—I say the user—or the layperson, because so much of this AI is going to be backroom AI? What is the benefit to the individual who is using that? They’re not the business owners, and they’re not going to see the productivity gain, but what is the benefit they will see?

I mean, for me, the whole goal here is for AI to be—. I would love, in three years’ time, for us not to be asking, ‘Did you use AI in that?’; it should just be taken for granted that AI is in everything that we do. In the example that I’ve just given you, of that company, the benefit right now to the user—in this case, the consumer—is that, when you have a stressful event where you need to make a claim—and we’ve all been there, waiting, potentially, weeks, sometimes months, to find out, ‘Am I covered or not?’, while, in the meantime, you’re financially exposed—all of that can be done in seconds. You can have an instantaneous evidential response. So, we’re seeing insurers investing heavily in this technology, principally because of the net promoter score impact—the quality of experience that it gives to their end users, their customers, their consumers. But what we’re also finding is that it takes some of the specialism away from their claims agents. So, instead of having a different set of agents for every line of business, you can have a single central customer experience management team, who are much more focused on helping the customer through the process, with the technology doing some of the technical bits. At AMPLYFI, what we’re seeing—and I gave an example earlier—is an 88 per cent productivity benefit that people are seeing now. They are doing—. One of our customers is a global sportswear manufacturer, with headquarters in Germany. We’re not allowed to say who they are, but there are—

They’re using this for market intelligence, and they have a relatively small team, who are overwhelmed, because they’re a cost centre, not a revenue centre for the company, and so they’re constantly fielding requirements from all over the world, and they can’t get more head count to deal with those. So, their jobs are miserable, and they never satisfy anybody. All of a sudden, they’re telling us, ‘We can do 80 per cent more than we could before, because your machine is doing the heavy lifting.’ Now, our machine doesn’t go, ‘Ping, ping, this is AI.’ We just give you a—. You build your intelligence and our machine will read every piece of content that’s relevant for you, but it knows what you’re interested in, and it knows your job function. So, it synthesises that content, it does a human lifetime’s worth of reading every 20 minutes, and it tells you what you need to know. And if you want, you can then use generative AI to build reports, to analyse data, and all of this stuff, and you can see the transformational impact it has on people’s working practices. And it’s doing all of the stuff that they don’t want to do, because it’s boring, hard work, and, actually, it gives them the capacity to think about more strategic applications of their time without having to go and double or triple the team size. So, I find that is the really exciting—. When you see the unlock for human potential, wow, that’s just—. Well, that’s why I’m passionate about it.

11:45

I was just going to quickly add on that, if there are a couple of minutes.

So, if we just look at investment and development banks, they're using IMS systems at the moment—Nordea in particular; Deutsche Bank are doing the same. There is a much broader question that needs to be asked at the moment about how the Development Bank of Wales can start to adopt AI systems, particularly to improve the way it analyses the data being submitted by SMEs looking for equity finance and loans. At the moment, what IMS systems can do is analyse an awful lot of that data and then improve how an account manager interacts with that business, and whether they can make a better strategic decision on whether that business needs equity finance or additional business support. At the moment, those systems, from what we're aware, just aren't being used in the same capacity as at other development and investment banks around Europe, and it's a huge disadvantage to SMEs in the Welsh economy that decisions on whether or not they can access equity or loans are not being made on a par with how they are for other SMEs around Europe.

Okay. Thank you very much indeed. I'm afraid time has beaten us, so our session has come to an end. Thank you for your evidence this morning. It will be very, very important to us for our inquiry. A copy of today's transcript will be sent to you in due course, so, if there are any issues with that, then please let us know, but, once again, thank you very much. We'll take a very short break to prepare for the next session.

Gohiriwyd y cyfarfod rhwng 11:46 a 11:55.

The meeting adjourned between 11:46 and 11:55.

11:55
5. Deallusrwydd Artiffisial ac Economi Cymru - Panel 3 - Gweithlu
5. AI and the Welsh Economy - Panel 3 - Workforce

Croeso nôl i gyfarfod o Bwyllgor yr Economi, Masnach a Materion Gwledig y Senedd. Symudwn ni ymlaen nawr i eitem 5 ar ein hagenda, a dyma'r trydydd panel, a'r panel olaf, o ymchwiliad undydd y pwyllgor i ddeallusrwydd artiffisial ac economi Cymru. Gaf i groesawu'n tystion i'r sesiwn yma? Cyn ein bod ni yn symud yn syth i gwestiynau, efallai gallaf ofyn iddyn nhw i gyflwyno'u hunain i'r record. Efallai gallaf ddechrau gyda Lina Dencik.

Welcome back to this meeting of the Economy, Trade and Rural Affairs Committee at the Senedd. We will move on to item 5 on our agenda, and this is the third and final panel of the committee's one-day inquiry into AI and the Welsh economy. May I welcome our witnesses to this session? Before we move to questions, could I invite them to introduce themselves for the record? Perhaps I can start with Lina Dencik.

Yes, thanks. Hi, my name is Lina Dencik. I'm a professor at Goldsmiths University of London, where I'm also the university research leader in AI justice, and I'm also the co-founder and co-director of a research initiative called the Data Justice Lab, which started at Cardiff University seven or eight years ago now.

Bore da. Ceri Williams ydw i. Dwi'n swyddog polisi i TUC Cymru. 

Good morning. I'm Ceri Williams. I'm a policy officer for TUC Cymru.

Bore da. My name's Matt Buckley. I'm an AI researcher and here today in my capacity as a committee member for United Tech and Allied Workers.

Thank you very much indeed for those introductions. I'll now bring in Hefin David to ask the first set of questions. Hefin. 

Could I ask for the panel's understanding of the implications of the use of AI in the workplace and whether there are certain sectors that are more affected than others?

Yes, I can. So, I think we see that AI—and that's often bound up, I think, with algorithmic management, not all of which is AI, but I think it's part of what we think of when we talk about algorithmic management—is more prevalent in certain sectors. I think that perhaps some of the others on the panel can give more of an overview of which sectors are particularly impacted, but they tend to be the ones that we're familiar with, like warehouse work; platform labour, obviously, is very driven by algorithmic management. We've also done research into postal work, where it's become very prevalent as well, and also call-centre work is another sector that's well known to be deeply embedded in algorithmic management techniques.

And in terms of its impact, I think it's quite wide-ranging in the sense that it's transforming how management is carried out quite fundamentally, including also, therefore, how workers experience work. And I think that there's been a lot of emphasis on job losses and how it's impacting in that regard; our research has been more focused on quality of work and how it's impacting in that way, and on worker well-being. And I think it has had quite significant implications for, or impacts on, how workers feel about the work that they're carrying out and how they're being asked to do it and their ability to also raise concerns when they have them et cetera.

Diolch. Yes, just to build on Lina's evidence, TUC Cymru conducted research last year with focus groups with workers in seven different sectors, and what we found in terms of algorithmic management confirms what Lina's research says, which is that it is very prevalent in areas like delivery, also in retail. It is being introduced in the civil service. And what we find in those areas—what people told us—is that there is then overbearing surveillance. There are unrealistic targets being set by the management systems. People are being given their targets by AI; they're, effectively, being managed by AI. And there are discriminatory aspects to this as well. We're concerned that the targets being set don't take into account people's age or disabilities. We've had workers in our report telling us that they'd seen people being managed out for failing to meet unrealistic targets. So, algorithmic management is having a very big effect on workers here today in terms of intensifying their work and making it more difficult to challenge decisions. 

Is that typically, to use a cliched phrase, white-collar work, or is that across the board?

Well, I think it's very prevalent, as Lina said, in platform work—so, Uber and companies like that, delivery—but it's also used in logistics. So, other aspects of the delivery trade. But we are—

Yes. We had one person who works in a Government department who explained that they work on casework, quite sensitive casework, and then that is managed through algorithmic management in terms of setting targets, in terms of expectation of work carried out, and they found that it was driving targets unfairly, not taking into account how much thinking time some of the tougher cases took. So, it's being introduced. And also social care: wider research at a European level from Friedrich-Ebert-Stiftung shows that it's being used in social care in Ireland and other places. It probably is being used in Wales too to set people's—. If they're working as domiciliary care workers. So, it's prevalent now and it's having a negative effect, which we've got positive ideas to try and address.

12:00

And then the other big area, if I may, Hefin, is generative AI, and that is having a major effect in the creative industries. That's where AI creates new images or new text or new music, but based on artistic creations that are already out there. So, people from journalists to actors to musicians are very concerned indeed about copyright infringement and about losing income. From their point of view, they feel their work is being stolen by these AI applications, which supposedly create new material but are actually basing it, often without payment, on people's existing work. So, it's a type of automation that's hitting the creative industries in Wales very hard indeed, and, when we spoke to creative workers, they were very concerned indeed about this.

Okay. I did raise earlier the impact on politics as well, and politicians facing the same kind of issues, although not, obviously—. Well, intellectual property, I suppose, issues.

I'll come to Matt in a second, but, Ceri, I just wanted to ask you: what about the impact on—and I'll come back to Lina as well, sorry—what about the impact on job replacement? Are we likely to see a swathe of job losses, and are they likely, then, to be offset by new technology jobs that will be required alongside the roll-out of AI?

Yes, job displacement is certainly something we're concerned about; across the Welsh economy, this is a major concern. It's difficult. There are many different people who've put predictions there, but it is an uncertain future as to which jobs will be lost, which jobs will be created. We have seen in telecoms, which perhaps Matt can speak about a bit more, that BT announced a very substantial round of job losses last year that was linked to their use of AI in places like call centres, but also in terms of Openreach engineers. So, where the new jobs come from is a worry. We can see that productivity is something that could be potentially improved by AI so long as workers are part of the conversation. Another option for the Welsh economy overall is that, if companies and public services are going to be that much more productive with AI, perhaps those benefits should be shared with workers in terms of less intense [Correction: 'shorter'] working hours. But, in terms of where the new jobs are going to come from, that is something that we'd be interested to hear. Certainly, training would be important so that workers are in a good position to take advantage of any new jobs that arise.

Thank you. Before I ask if Lina wants to say anything, I'm going to ask Matt to come in, because he hasn't come in yet.

Sure. I think, for the most part, I'd reiterate what's already been said. I think, speaking about BT as an example, there are 50,000 job losses expected there; 10,000 of those are directly due to AI. And generally, I think what we see is, the more vulnerable the employment, the more likely exploitative use of AI is, in part due to a lesser ability to challenge and resist the implementation of those technologies or to raise issue with them. And that comes both in the form of workers that are more likely to face discrimination, but also those with shorter tenures, those that are not protected by existing legislation, and also workplaces where there is increased surveillance, either due to the nature of the work, or due to regulatory requirements around compliance.

But I would say that the impact it's having is really affecting every industry. We've had, over the last couple of years, major lay-offs at some of the biggest tech firms involved in rolling out and creating the AI that is being used elsewhere.

And is that being offset then? Or do you think that offsetting is not—?

No, it's not being offset; it's simply a matter of lay-offs and decreases in total staff numbers. Even at those major tech firms, despite the involvement that they're having in AI, there are questions around the profitability of that for those firms, and, regardless of the intention, there are fewer workers now there than there were.

12:05

Essentially. And we do see it a lot, unfortunately. We see it even with people involved in labelling data. For example, at TikTok recently, there have been hundreds of workers in content moderation who are potentially losing their jobs, and those workers involved in labelling content, which is then used to train AI. That is also happening at a global scale outside of the UK, but it very much is a matter of taking worker expertise, using it to train AI systems, and then nobody benefits, except the shareholders—certainly not the workers themselves.

Okay. And just to wrap up my section, Chair: Lina, do you want to come in and make any comments on anything we've heard so far?

I think it's pretty comprehensive so far. Maybe just to kind of reiterate the point that I think also, with generative AI, what we're seeing is disparate impact. I think that's really important to recognise, that it's not—. It's very linked to inequality issues, and generative AI is likely also to further that even more. So, sectors, positions, occupations that are likely to be more compatible with generative AI, where it can be integrated into the workplaces, are likely to, obviously, perhaps experience positive benefits from generative AI, whereas those that are less compatible with it are likely not to. So, I think there are questions of inequality all the way across that.

Diolch, Cadeirydd. Good afternoon; it's just gone past 12, so good afternoon to all three of you. Thank you very much for joining us. Just developing on the theme around inequalities with AI, particularly generative AI, I'm just wondering how those are impacting on the workforce. Ceri, we'll start with you.

Yes. When we spoke with workers in the creative industries, which include journalism, they were very concerned about the potential impact. In terms of journalism, for example, if there are AI models that can create news articles or produce information themselves, they are often learning the information that they reproduce from biased sources, so there's a real concern that discrimination could be exacerbated by generative AI. So, that's one big area. Could I talk about inequality in terms of algorithmic management too?

Yes, by all means. Any form of inequality you see, that would be really helpful for us as a committee.

I think it's really important, because I think there are—and I know Lina will want to say more on this, as it's one of her expert areas—. But I think, when we spoke to workers in delivery, they were telling us that the targets had been set for delivery rounds not taking into account if people were older or if they were disabled, and they gave us examples there with people being managed out of their jobs because of their age.

So, could I ask on that, then: would it not be possible to input new data into that algorithm that does take into consideration an individual's age, disability et cetera, to give them a fairer slot?

It's a really good point, and that's the kind of thing that we'd want to see. Working with public sector employers and the Welsh Government, the workforce partnership council has just produced this week—and it's been shared with the committee, I understand—guidance for the use of algorithmic management in the public sector, and one of the key aspects there is we want worker involvement in these systems, so that they can help design the system, they can consider the service people receive, but also that it's worker-centric, so ensuring worker voice in the design and implementation of these applications is really important towards tackling these concerns—well, these more than concerns, the examples that we're already seeing of discrimination.

Yes. I think one of the challenges with AI and inequalities is that the way AI has developed, and the way current AI systems work, is by identifying patterns and measuring coherence with those patterns. Workers who are neurodiverse, for example, or who don't conform to a standard or 'normal' way of thinking—or, rather, to what the training data for an AI system is designed to identify—will score worse. They will not be able to conform to those patterns, and there's a limit to how far the integration of additional data sets can change that. I think there are ongoing questions in the research space, which Lina might be able to speak to as well, around the ethics of that. Should a system be able to identify a worker's disability and see how well they conform to the standard pattern for somebody with that disability? Is that ethical?

12:10

That's what I was just going to say. You flip that on its reverse, and you could have a neurodivergent member of staff, and AI can identify a better methodology for them to work through, rather than an individual who's the boss, who isn't au fait with those types of neurodivergent needs for that workforce. Is that a benefit, then, improving that member of staff's workability and their inclusion within the team?

It really depends on the implementation. I think what we've seen is that there are systems where that can help, but, mostly, what we see is the use of well-being AI systems to avoid training management and to avoid providing education to senior members of staff around expectations and how those systems can be adapted. One of the examples I'll give around the use of AI tracking and surveillance is workers at some of the delivery companies that we represent, where there are expectations from management in terms of meeting certain metrics—

I think a colleague of mine is going to come on to surveillance more broadly.

Sure. I'll speak more about that later.

To pick up on your point about inserting demographic categories as a way to try to correct a system, I think it's important to remember that the majority of AI systems are trained on majority populations. So, even if you include demographic data, they will often pick up patterns based on what we consider proxies for protected categories, for different demographic categories. Even if you do, say, include age as part of it, there will be other aspects that might serve as proxies for age—for example, what kind of browser you use, which is a classic example that's been used in the past, or what kind of e-mail address you have—something that actually serves as a proxy to filter out people for reasons that don't match the definition of a successful or optimal worker. Inequality is very difficult to exclude, and bias is very difficult to exclude—

Sorry, Lina, can I just quickly come back there? Is that in the recruitment process? So, say if someone's got an AOL e-mail address—I'm not discriminating against anybody with an AOL e-mail address domain, but I remember that being big in the 1990s—it would just filter that out, thinking that's someone of a specific age demographic, not what we want for this job. Is that what you're referring to there?

Not necessarily intentionally. It might be the fact that well-performing workers based on historical data have used other types of browsers, for example, and they might also be younger workers, et cetera, and so these other types of data points can be picked up, and because it's based on pattern recognition, as was mentioned, these kinds of data points end up serving as proxies. So, it's not necessarily an intentional aspect of the design, but it is what can happen, and it can have a large-scale impact, excluding entire groups of people based on that kind of filtering.
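A toy example can make this proxy effect concrete. All the data below is invented: age is deliberately left out of the features, yet a crude learner that simply prefers whichever group succeeded more often in historical data will latch onto an email-era flag that happens to correlate with age:

```python
from collections import defaultdict

def learn_preference(rows, feature, label):
    """Crude stand-in for pattern-based screening: prefer whichever
    value of `feature` had the higher historical success rate."""
    totals, hits = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row[feature]] += 1
        hits[row[feature]] += row[label]
    return max(totals, key=lambda v: hits[v] / totals[v])

# Invented history: 'age' is not a feature at all, but in this data
# the modern_email flag happens to correlate with younger applicants.
history = [
    {"modern_email": True,  "hired": 1},
    {"modern_email": True,  "hired": 1},
    {"modern_email": True,  "hired": 0},
    {"modern_email": False, "hired": 1},
    {"modern_email": False, "hired": 0},
    {"modern_email": False, "hired": 0},
]

preferred = learn_preference(history, "modern_email", "hired")
print(preferred)  # the rule favours modern_email=True: an age proxy
```

No one designed this rule to discriminate; the pattern in the historical data did the work, which is exactly why excluding protected attributes from the inputs is not enough on its own.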

That's really helpful. Staying with you, Lina, if I may, I might just take the conversation on to awareness and training within AI and the use of AI, and just what the levels of understanding and awareness among the workforce are at the moment when it comes to AI. Where do we need to improve on that? Is it, as I've asked in a previous panel, around the prompting and how we use AI, or is it just the bare understanding of what AI is and the differences between open source, augmented—? Not augmented, I've forgotten the other terminology, but the different types of AI. 

I think it's tricky, because, obviously, when employers introduce AI systems, they're introducing services, they're actually buying entire services, and, often, they won't have access to how it works, the model or the algorithmic variables that are being used, because that's part of the business of the provider, and so it's not something that's been made clear or open. So, I think it's tricky for them to necessarily get a deeper understanding of how it actually works.

I guess a key aspect would be to understand the risks and perhaps make more explicit demands on the providers of these technologies to make clear what the risks are, but also be responsible and accountable for those risks, so, if there are harmful impacts, that they are responsible for those. Because I think accountability is where it gets lost a little bit with AI in particular, and who is responsible for what. So, I think, definitely at that level, in terms of what might be harmful impacts, there definitely needs to be more awareness and understanding.

But also, perhaps, it's about what are the broader consequences of introducing AI into lots of different workplaces. Even from an environmental point of view, there might be more awareness that needs to be made around that. If more and more sectors of the economy start to become AI driven, that's also plugging into an unsustainable technology that's actually extremely resource intensive, and there might be some responsibilities there, even, at a broader level, too. There are lots of levels at which there should be more awareness, I think.

12:15

I'm really grateful for that. Ceri, I can see you're indicating there, around awareness and training.

Thanks, Sam. When we spoke to workers last year, we found that people understood AI broadly, especially as it affected them, but I think there is a lot of marketing out there, and managers are being sold products that they don't fully understand. So, building on what Lina said, it's really important that workers have a better understanding of this. As TUC Cymru, we're planning courses on AI: understanding broadly how it works and, particularly, the risks, as Lina said. But I think what's really important is the workforce partnership council guidance that we've been working on with public sector employers. It talks about the importance of managers having a good understanding of how this works, and also procurement—that's a key stage—that procurement officers have a good understanding of this. Because there's a bit of a power imbalance now with the huge tech companies, which are promoting these products, a lot of them built around algorithmic management and driving workforces ever harder, so we do need to—. That's a really big role that the Welsh Government could play: improved training around how AI works, what the risks are, what questions to ask, and the importance of having a human responsible for the decisions it takes. So, that's the type of area where I think awareness needs to be driven as a priority.

Forgive me, but I'll just come back. In terms of upskilling the workforce, where would you see that priority lying?

As I said, there's a responsibility on trade unions to help their reps understand the technology more. I think it's a balance, because you don't need to have degree-level knowledge of how it works, but an understanding that it's data driven, about pattern recognition and that it relies on data about workers. These are really important core issues that people need to understand. Often, the user interfaces are relatively straightforward to be trained on, but that's another area, as well—people need to be trained properly on using the packages that they've been introduced to. We'd like all this to be done in social partnership, and there are examples where new packages are being introduced together, and the unions, workers and managers are learning together and able to improve the services. So, yes, I think training around the fundamentals of AI is important, as well as the user interface.

So, it's twofold. It's understanding what AI is and then understanding how to use AI. Matt, forgive me for coming to you last on this topic, but you've got the broad discussion that we've just had to be able to respond to. So, over to you.

I think there needs to be a level of standardisation coming forward in education and training, particularly around topics of AI risk and AI ethics. It can be incredibly difficult to audit some of the risk assessments taking place in companies when they're examining which workers might be more vulnerable to certain uses of AI when there's very little standardisation, when there's very little in terms of a unified approach to how these kinds of assessments should be done, audited or verified. So, that's one aspect.

And then in terms of training around using the technology, I think I'd agree with everything that's already been said. What I would add is that there needs to be training that is generalisable—it needs to ensure that workers are not locked into a particular system, into a particular service provider. Some of the training around things like prompt engineering can do that when you're essentially learning to game a particular system, rather than learning to use AI in a way that can take you forward and that can benefit you in the future.

On that level as well, something we're seeing in the tech industry in some areas is a narrowing out of skills, where there are, for example, opportunities and roles available at the mid and senior levels, but an increasing lack of roles at junior levels. Those are the roles that are most likely to be replaced by AI or where AI is used to enhance human productivity, and so there is a lack of investment in junior workers at early stages in their career. It is reminiscent, somewhat, of situations we've seen with apprenticeships over the last couple of decades. There needs to be someone investing in the future, because it may not be in a certain company's interest to train those workers, but it's certainly in the interest of the economy as a whole.  

12:20

That's very interesting. Thank you, Chair. I think the other questions have been answered in the wider discussion, so I'll hand back to you. 

Thank you, Sam. Perhaps you could explain to us how AI is used in recruitment processes and some of the more pressing issues emerging as a result of the use of AI. Who'd like to go on that? Ceri. 

Just briefly on this one, because it's an interesting one. As unions, we represent workers already in post, rather than the ones that are being recruited in, so this is an important place where we need to learn more. But from the wider research, and I'm sure Lina and Matt will have more, there are very real concerns about discrimination in the recruitment process. The data that recruitment applications are learning from is based on who has been successfully recruited in the past, and that could overwhelmingly exclude minoritised groups. So, that's a concern, that there's a risk of intrinsic bias.

And then, a type of technology that we believe should be banned altogether, and it's on the EU's list of banned technology, is emotion recognition. That is a very risky tool to be used in interviews or anywhere else in the workplace, because of the many faults with it—emotions are conveyed differently in different cultures, and there are questions about how effectively it recognises the emotions of older people. So, that's one type of technology that we think should be ruled out altogether, as the EU has said. In the workforce partnership council guidance, we've asked for ruling it out to be considered. So, yes, there are very many risks, but, as it is an emerging area, I think there's a need for a lot more research on this.

I think it really comes back to what's already been said about pattern recognition and how vulnerable workers are much more likely to be discriminated against by some of these systems. I think it's an interesting one in recruitment in particular, because it's often an area where the use of AI systems and automated systems is prolific, but there are questions about who it really benefits at all, whether it benefits companies or employees and applicants. It often becomes a situation where employees are expected to game the system in order to get a job. Then, it becomes more important to include the right words in your application than to actually be able to demonstrate skills. I think there are situations where a lot of companies are being sold HR packages that include AI recruitment systems that are not necessarily to their benefit. And so, in addition to what's already been said, it really is an area where I think there's an expectation of reducing cost, but there is a question as to whether it's to the long-term benefit of anyone. 

We've done quite a bit of research on automated hiring systems. It's something that obviously exploded during the pandemic. It was already on an uptick before then, but really came into its own during the pandemic, and it is now very widely used—in part, because it's seen as a way to overcome issues of bias and discrimination that have been long-standing in hiring processes. These providers often sell themselves as being able to resolve issues of unconscious bias, et cetera. So, actually, for that reason, I think, it is seen as a solution by employers and widely used for that reason too. But, of course, it comes with, as has been raised, issues of its own, and across the entire hiring funnel.

So, it's used to place job ads and to search for candidates, but it's also used to filter and it's used to assess and profile, as has been said, through various forms, a combination of data sources, but also the use of games and questionnaires, et cetera, as a way of profiling candidates, and also facial recognition technology in interviews as a way of profiling candidates. We talk to providers of these technologies, and I think one of the key aims is to shift what it means to be an ideal candidate away from expertise and jobs and hard skills to cognitive skills, and what they called persona-centric profiling as being a—. For them, it's seen as something that can satisfy issues around efficiency in recruitment and retention and so forth, by emphasising cultural fit more in how these assessments are carried out. But, obviously, there are lots of issues around discrimination with this, particularly for people with disabilities and neurodiversity, but also in terms of filtering out candidates on bases that are very problematic—for example, the speed with which they might complete a game or a questionnaire—or other kinds of issues that really play a significant role in how people perform that we wouldn't normally include in a recruitment and hiring process. 

Another issue I just wanted to raise around this is that a lot of these AI systems for hiring are actually being developed—. We found that a lot of them are coming from the United States, and when they then design for bias, or to resolve bias, they use US discrimination law, which has a statistical definition of discrimination, which, in the UK, is not the case with equality law. So, actually, what we're doing is importing a legal system, or a computational system that's designed under certain legal parameters, that actually doesn't match the one that's in place in the UK. So, there are certainly also some legal questions about the use of these technologies in that area, leaving aside data protection questions, which are also, obviously, very different in the US than they are in the UK and Europe when they're being used.

12:25

And can you tell us how the technology is being used to manage and supervise the existing workforce? Can you give us some examples of that? Ceri.

I think Ceri can speak first on this, and then I can come in.

Thanks, Lina. Thanks, Paul. Yes, this is an area of grave concern for us. There are many examples that we've collated in our research, 'A snapshot of workers in Wales' understanding and experience of AI'. We found that it is very prevalent. I gave the examples before of people working in the delivery industries, and what's a problem there is the surveillance. That's one issue. People are now issued with personal devices that track their movements. We had people working for a delivery firm saying that managers back in the office had a map showing where everybody was, and, if they stopped for more than a set period of time, that created an alert on the system, and one worker said they were called in for having, over the course of two weeks, stopped for a total of 15 minutes. But there might have been a number of reasons for that—speaking to a customer; you know, perfectly legitimate—but they were being challenged to say, 'You weren't working during those times.' Luckily, in that company, it was strongly unionised and they were able to challenge it on the grounds of privacy; that type of tracking shouldn't happen unless it's for health and safety reasons, or concerns around criminality, neither of which applied here. But, in companies that are less unionised, that type of overbearing surveillance is a big concern.

And then, in terms of management by algorithm, it is something that's happening in many industries. So, there's research about how it's being used for social care in Ireland, where people are being given their daily tasks in terms of visiting people in homes, allocated work, and often those targets aren't accurate and people can't meet them in time. So, it means that people are far more stressed and given targets that are unrealistic.

But, back to Sam's points earlier on, I think there would be a way, if these systems were developed with workers, that they could be adapted, the unfair elements could be taken out and the targets could be brought in line with what's realistic. So, worker voice in all of this is really important. With the WPC guidance on the use of algorithmic management, we're hoping that public sector bodies and departments in Wales will take it up and be willing to pilot it. Wales could then be in a good position to be a pioneer in terms of a worker-centric approach to algorithmic management that also helps to bring in the efficiencies we're all seeking in the public sector and to improve services for the public. But, at the moment, the situation is far more challenging. I'm sure Matt and Lina have got more examples. 

12:30

Yes. Just to add on that, what we've found is also the use of tracking devices a fair bit. So, for example, in postal work, it's a prevalent method, but also, in call centres, voice recognition systems are used to assess tone of voice et cetera as part of performance assessments as well, and actually some of this data is tied to bonuses, so directly tied to pay. These kinds of data-driven assessments are being carried out around productivity in particular, but also things like emotional labour, if you like, or performance, and so forth. So, those are also other ways in which we know that it's being used. Yes. I don't know if—.

Yes, I think I would tie this back to the issue of automation bias and overconfidence in automated systems. I think this is what we see quite frequently in some of the examples that come up in the tech industry. Some of the examples I'd give are around call centre workers being tracked using an emotion and sentiment tracking system, where the words they use and the tone of voice they're using are tracked and analysed and then used as a performance metric. There's no validation or verification that these systems work, that they actually result in better outcomes. But, regardless, there is a feeling of, 'This sounds like it should work, so surely it must. Surely someone has verified that this really results in increased productivity.' And obviously, even then, there's a question of: is that productivity increase ethical? We've seen examples of delivery workers being asked why they're going to the toilet, because it's seen as them stopping work and it doesn't count towards their supervisor's metrics. And I think some of this also comes up around a fundamental misunderstanding in some corporate contexts of the right to privacy, where AI is allowed to observe a worker in a situation where a manager never would, and a feeling that as long as a human is not involved in that observation process directly it's not a problem. But that's not how the right to privacy works. It's the right to not be observed and to keep that data private.

I think Ceri’s pointed out that, so far, there's been the ability to challenge some of these deployments, particularly at big firms where there is a heavy unionised presence. But that is not always the case, particularly at smaller firms. And so these protections need to be rolled out not just in unionised, recognised workplaces but in unrecognised ones as well.

And the final point I'd make is around how this software and AI technology makes its way into the workplace. It's not always proactively sought out by organisations. There are cases where it is integrated into existing platforms and systems, where it's an up-sell based on software that companies have already adopted. And so it's seen as an area of growth for some software firms. And there's a need to challenge it at the level of development and integration, as well as relying on companies to have knowledge and audit these systems properly.

Yes. Thank you, Chair. Another area we're particularly concerned about is those systems that have automated decisions around discipline or even firing people. It's analogous in terms of platform work, in terms of Uber delivery—for anybody working for one of those platforms, there's a risk that they won't be given more work, and often they've got no recourse to speak to a human manager about that at all. And were that to be replicated in—. Well, that's terrible for those workers, and we need greater protection there. But if that model were to be rolled out into public service, into wider parts of the Welsh economy, that's something that we would oppose. So, the TUC, we've published a Bill that we’d like to see introduced into the UK Parliament, which puts specific controls around what we call high-risk decisions, which relate to people's health and safety, disciplinary action, or the loss of a job. There shouldn't be automated decisions around those, and, equally important, there should be a human who's responsible, who takes responsibility for those decisions, who can also be challenged on them. We've built that into the WPC guidance for the Welsh public sector, and, at a UK level, we're pushing for that to become national law too.

12:35

So, that's the workforce partnership council. 

Okay, fine. Okay, got it. So, I just want to explore—. We've heard a lot on areas of concern around the application of AI, but I just wondered what the workforce partnership council has done to discuss some of the opportunities in terms of improving the productivity of the job, and also the interest of the job, because a lot of things are quite routine and, if machines can do it, why not, if it enables us to then move on and do more meaningful things. So, I just—. Productivity levels in Wales are lower than in England and the disability employment pay gap is also a cause for concern. So, have you actually discussed the opportunities of making work more interesting and improving, for example, the performance of local authorities, health boards et cetera? 

Yes, thank you for the question, because I do realise we've been concentrating on the risks. Certainly, parts of the WPC guidance document talk about the opportunities that are there in terms of increased productivity and improved public services. And the way we'd see it is that they can be fully realised if workers and unions are part of the process of auditing the existing use of algorithmic management, being part of the procurement process, monitoring it as well, because certainly there are positives that could be there; Sam was saying earlier about designing products that give targets that are more tailored. 

But it's something that, you know, if you take a step back—. I think one of the problems we face is that a lot of this technology is designed by huge tech companies and is mainly designed around getting ever-greater efficiencies from workers by driving them harder and harder. But if we took a step back and thought, 'Well, in Wales, what are the issues that public services face that are related to collating data in a safe way about service users and the workers and the overall management?', what would AI look like if it was ethically designed, worker-centric, and primarily focused as well on what service users need? 

So, we're hoping in the pilot of the WPC guidance in the year ahead that public bodies can work together with researchers to see how this works in terms of auditing current systems. But something TUC Cymru would like to see at some point as well is research on what an ethical AI algorithmic management package might look like, to tackle some of the other knotty problems that we face in terms of public services, but that civil society should be involved, service users, trade unions, and it shouldn't just be driven by the expertise of the four or five global tech companies, who have their own interests in terms of shareholders and are less interested in our problems. There's an opportunity in Wales, perhaps, that we could come forward with some of those answers, working together. 

Okay, but we heard in an earlier session that the Cardiff capital region has actually got a higher performance in terms of developing AI than, for example, London, which is obviously very interesting. Obviously, it's really important to know how the partnership council is understanding the way in which we can make the work that public sector workers do more interesting, because we've all heard about people just not having enough time to do the job they would like to do—and, certainly, processing data, that's important in the health sector, for example, to make sure that we're tracking how well or not somebody is—to enable them to then do the things that AI definitely can't do, which is listen to the human being in front of them and advise them on how to manage the particular complaint that they've got.

12:40

I think that'll be a big part of piloting the guidance, to see where those benefits are. You know, that's why we're keen as unions, we want to see work improved, and if AI can help improve working life, that would be a great thing. We are not against AI, but we are against the current system where there's a power imbalance, really, between the tech firms and everybody else. So, that's what we want to tackle. We're not against AI.

Okay. But would it be fair to say that, at the moment, you've spent a lot of time on worrying about some of the injustices that have been committed with AI, rather than the opportunities that we need to push forward on, to have better public services for all of us?

Well, yes, we'd be keen to work on that, together with—

I think it's important to look at these risks—well, they're not even—. They're things that are happening now. And, yes, we're keen to look at the opportunities, and it's something that we acknowledge as well.

Okay. So, Lina, I think you also mentioned this in your paper as well. Have you seen much evidence of the public sector embracing AI in order to be able to reach more of the needs that, at the moment, are not met?

Yes. So, I think there is a—. I mean, that's a mixed picture, because it depends a bit on who you ask and where they are within the public sector. So, I think, at a managerial level, there is a real enthusiasm for bringing AI in, but it's not necessarily matched by professionals who are doing the day-to-day work in public services, for example, where we see more scepticism and resistance to it.

But I think it's really important just to say that, in a lot of cases, these AI systems are developed without domain expertise, and I think that's one of the key things that is really problematic about the use of, the introduction—. The way that AI, at the moment, is being thought of within the public sector is that, often, AI systems are brought in across the board, across different sectors, without much domain expertise, purely on the advantage of having technical expertise, as it were, and that's overriding, in many cases, domain expertise, and that's where a lot of issues, including around the extent to which it can serve efficiency and make work better, come in. So, I think the issue is around consultation and who counts as an expert and who's being consulted when systems are being designed. What they are being designed for, what they are being optimised for, et cetera, is really very key to this question around efficiency as well, and how they can do the work that they need to do.

So, how do you think we could apply AI to strengthen the foundational economy, so that the profits made remain in this country, in Wales, as opposed to having our lives run by multinationals who really don't give a stuff about the well-being of future generations Act?

I think the problem is that the AI market is not a democratic market, the way that it's being advanced, and particularly with generative AI, it's closing more and more; there are only really a few players that are in that. So, I think it is difficult, and it would need a lot of local investment if it was going to be a more locally driven AI economy that would be pursued. But I think that's a real challenge, considering the resources that it takes to run AI models, both in terms of data processing—who has the data—but also in terms of the computational power that's required, and the infrastructure that's required.

Okay. So, what advice would you give to the Welsh Government to prioritise strengthening, using AI to strengthen the foundational economy, rather than us completely losing control of our destiny?

I mean, I would be interested in questions around the possibilities of creating public infrastructures for AI. I think that's something that needs much more research into the possibilities of doing that, and what would be required for, say, publicly owned AI systems to be in place.

Okay. All right. And, Matt, I mean, I don't know if you're involved in the workforce partnership council—okay. So, is there anything you'd like to say that would enable the workforce to be more engaged in shaping the future, the satisfaction around a job without having to endlessly be doing the same thing that could be done by a machine?

12:45

Yes. I think one aspect is that there need to be greater incentives to develop AI systems that tackle those kinds of rote aspects of work. The reality is that, putting it bluntly, skilled workers are more expensive, and so the incentive at the moment for many companies is to automate out skilled work rather than unskilled work, in some aspects, as long as the technology can provide it. Actually, I think that's been one of the surprising things over the last couple of years, that a lot of the AI systems that have emerged are targeting creative work to a degree that we perhaps didn't expect. I think the typical expectation before the ChatGPT era, if you want to call it that, was that creative work is what would remain once rote and non-interesting work was automated away. And I think what we're seeing is that there is more of an incentive to automate and to use AI to tackle those aspects of work, so there needs to be a level of providing incentive there.

I think I would reiterate what Lina said about systems being too generalised and not involving domain expertise, to a degree being very US-centric, and I think one of the reasons for that—again, I'll put it bluntly—is a level of arrogance in the tech industry that the only questions that matter are technical questions. Matters of ethics, risk, social and cultural questions are not as important, and that's something that I think there is active work to challenge, but until that is done, comprehensively throughout the tech industry—. And unfortunately, with a lot of the tech industry being based in the US, there is a limited degree to which we can challenge that. So, we need to encourage an alternative AI industry that takes into account those questions and that is more empathetic to interdisciplinary understandings.

And the final point I'd address is, I think, a point you raised around the ability of AI to, for example, give employees with disabilities greater access to the workplace and to match productivity statistics. I think there is potential for AI in that area, but again it comes back to an earlier question about the access to education that people have and the access to training that is not often provided. So, the onus is still on workers with disabilities to be able to take advantage of these tools, rather than a comprehensive system that empowers them to use the tools. And, again, there is still a question as to whether or not workers with disabilities should be valued based on their productivity, of course.

Okay. So, we have the well-being of future generations Act. We should be using this in the way that we procure services, and in the way in which we deliver services. Is that going to be sufficient to reap the benefits of AI, as opposed to the things we've spent most of this session on?

Yes, if I may, Matt. I think procurement, and what's in the future generations Act, supplemented by what's in the social partnership Act, does give a way to improve the procurement route for AI—in particular the requirement that wider consultation is taken into account, and that future generations and social issues are taken into account. At TUC Cymru we held a workshop with procurement experts and with Connected by Data, to look at the potential of the legislation we have in Wales to frame the buying of AI products in a more positive way. Welsh Government are working on the revised guidance that is related to the social partnership Act, and we'll be pleased to work with them on that. So, there is certainly potential in Wales for us to define what we require from algorithmic management and other technologies in a more positive way. The potential is there.