
Meeting of the Parliament [Draft]

Meeting date: Tuesday, October 29, 2024



Artificial Intelligence

The Deputy Presiding Officer (Liam McArthur)

The final item of business is a members’ business debate on motion S6M-13331, in the name of Emma Roddick, on recognising the dangers of artificial intelligence. The debate will be concluded without any question being put.

Motion debated,

That the Parliament recognises what it sees as the fast evolution and potential of artificial intelligence (AI); understands that there have been many warnings from those most closely familiar with AI capabilities who say that the regulation of and limits on AI’s use is required to ensure safety; notes reports that Professor Geoffrey Hinton, known as the “godfather of AI”, has warned of the need for greater social security investment when AI is entrusted with roles currently carried out by salaried humans, and the need to consider the implications and threat to humanity of AI being given military roles and resources; further notes with concern the conclusions of researchers who report that AI is already acting independently in unexpected ways; acknowledges, for example, that a recent study from the Massachusetts Institute of Technology (MIT) demonstrated AI models beginning to exhibit deception, apparently becoming aware of and adapting to safety assessments of its operation by deceiving assessors and operating differently while under observation, and notes the belief that, in order to effectively regulate the operation of and reach of AI, including in the Highlands and Islands region, policymakers have a duty to seek to understand how it works, what the dangers are, and how to protect society and vulnerable individuals from harm.

17:43  

Emma Roddick (Highlands and Islands) (SNP)

I have wanted to hold this debate for a very long time, and I feel that I need to start by recognising that artificial intelligence is here. I am not going to stand here and deny that, or say that I want to ban AI. I use AI—my Pixel phone has it built in. I speak to that AI, and I let it choose the music that plays when I am driving. AI is here and it is a tool like any other, and I am broadly fine with that.

However, while I hear a lot in the chamber, and in the media, about AI’s potential to support business, to help with healthcare and to increase productivity, I do not hear an awful lot about the risks. Legislators around the world must decide, and decide fast, how far we are comfortable with AI reaching into particular areas, and what limits need to be placed on it. We should keep in mind that AI is learning from humans, and that includes scraping the depths of the internet for forum comments to learn how to act like a human. If society is racist, AI may be racist, and if we value men over women, AI will learn to do the same.

In order to decide on AI’s limits, we have a duty to understand its capabilities, risks and dangers. We must then tell those who are currently developing AI models where they cannot take their technology, before it is too late.

In my region of the Highlands and Islands, perhaps more than anywhere else in the country, we are encouraged to make use of new technologies, to work remotely and to work smarter, and that requires using AI as an emerging technology. For so many of my constituents, including my own employees, the online world is their workplace, and they—like anyone else—have a right to safety in that workplace. Without regulation, however, AI is not safe.

I will speak to a couple of existential threats. The first is the threat to the workforce. As the use of AI grows in industry, jobs are at risk. We need to get serious about minimum income guarantees, and job and productivity opportunities for humans. Companies will automate if it reduces cost, and the conversation on workers’ rights needs to extend to the implications of AI.

We have not only to answer questions around which jobs should be undertaken by humans—perhaps, as a fairly unpopular “Doctor Who” episode suggested, by setting quotas on the percentage of the organic workforce—but to have conversations about the rights of workers to have a say on decisions that have been delegated to AI. That includes protection from liability and the right to have their accident-at-work claim, or their disciplinary—or even dismissal—case, heard by other humans.

Secondly, the climate impacts of AI are already massive. The nuclear energy business is booming as tech giants buy up reactors on a huge scale in an effort to cut their emissions, which are soaring as servers support 24-hour AI usage. Google’s greenhouse gas emissions rose by 48 per cent between 2019 and 2023, a rise that is attributed in large part to a huge increase in data centre energy consumption.

Earlier this year, the World Economic Forum shared that the power that is required to sustain AI’s rise is doubling roughly every 100 days. To put that in context, it is estimated that, in 2028, AI will use more power than the entire country of Iceland used in 2021. AI is not a sustainable industry, and we should talk about whether it is possible for it to be so, and whether its growth is, therefore, worth the climate cost.

I will move on to disinformation. I am one of many frustrated Google loyalists who currently cannot find things as easily, now that AI has been heavily incorporated into search results. I look at some of the suggestions that come up and are pushed to the top of the page, and I cannot wrap my head around the extreme nonsense that I am reading. I am not sure that, if I were nine years old and doing a school project, I would know that what I was reading from a trusted source might not be true. It is important that people understand the limitations of AI-generated information and understand that it is not fact. It needs to be questioned and scrutinised, and critical thinking must be applied.

There are currently wars being fought around the world, and what is happening on the ground is not the only conflict. Money is being poured into bots, algorithms and AI-generated content to convince people that one side is committing crimes when it is not, or to present convincing propaganda. It is becoming harder and harder for even experts to identify AI-generated images, videos, articles or even people.

Generated content comes with other dangers, too. We have all heard stories of the use of AI in sexual harassment. Taylor Swift, who is probably one of the most powerful women in the world right now, has been a victim of that. AI-generated pornographic images of her were circulated on social media at an incredible speed, and were about as unavoidable as her discography was when she played Murrayfield. That form of sexual violence is now experienced by women and girls around the world in the workplace, on the internet and at school, including many who do not have access to the lawyers and the power that is needed to erase such images from social media.

Before this debate, Zero Tolerance reached out to me and shared some research that shows that non-consensual intimate images constitute 96 per cent of all deepfakes that are found online, and that 99.9 per cent of those images depict women. Such content is popular: one major deepfake sexual abuse website receives 14 million monthly visits. That is terrifying.

Addressing that requires us to address all forms of violence against women and girls, but it can also be mitigated through requiring and facilitating the removal of such content and regulating how algorithms are allowed to promote it.

Privacy fears extend beyond non-consensual imagery. The European Union is leading the way on AI regulation, and the limits on AI use in its artificial intelligence act include banning what it calls “unacceptable risk” uses, such as the cognitive behavioural manipulation of people or vulnerable groups, biometric identification systems such as facial recognition, and Government-run social scoring. One example of the latter involves a Chinese city that uses a system under which residents’ social scores go down for acts such as spreading harmful information on forums. If we do not believe that “Black Mirror”-esque systems that see people punished for non-criminal activity should be allowed, now is the time for us to say so to AI developers and Governments the world over.

AI is already being used in healthcare and other industries. What rights should people have over the way that their data is used? What rights do content creators, artists and musicians have to protect their material from copy and re-creation by a non-entity?

Finally, I point out—as my motion does—that it is not just naysayers who are calling for regulation. AI creators are calling for the same: they are the people who truly understand how fast-moving the technology is.

A recent Massachusetts Institute of Technology study showed that AI models were beginning to exhibit deception, acting differently when under observation by those who were performing tests to ensure that they were not acting beyond expectations. It is already a difficult technology to control and monitor.

Every other week, there are stories of a misguided legislator somewhere in the world trying to limit social media and mobile phone use for young people. That horse has bolted; AI has not—yet—but we are very, very close.

In terms of what we in the chamber can imagine, AI’s capabilities may as well be limitless in theory. It is on us, on the United Kingdom Parliament and on other Parliaments around the world to define its limits in practice.

The Deputy Presiding Officer

We move to the open debate.

17:51  

Brian Whittle (South Scotland) (Con)

I am grateful to Emma Roddick for bringing the debate to the chamber, not only because I have a genuine interest in the subject, but because the growth of AI across society will have profound implications for us all. I take slight issue with her motion, however. I recognise that there are dangers and risks in the use of AI, but they have to be seen and considered in the context of the opportunities that also come with it. Just as there is potential in a utopian vision to ignore the hazards, so too can the doomsday scenarios cause us to miss the benefits.

Technology has long been a driver of human progress, but few technologies have delivered that progress on the scale that artificial intelligence has the potential to do. Not all the changes will happen today or even tomorrow; we are still at the relatively early stage of unlocking AI’s ability. That is to our advantage in enabling us to both maximise our opportunity to gain from AI and protect ourselves from the risks.

I take healthcare as an example. Members may know that, before I became an MSP, I worked in healthcare technology, so I have already seen how previous, far less dramatic technological interventions have helped medics and patients. The opportunities to use AI in healthcare are vast, from analysing huge amounts of patient data to researching public health and new methods of treatment and medical imaging. Software is increasingly able to identify cancer and other conditions, which allows radiologists to process scans more quickly and with greater accuracy.

In the field of drug discovery, which is a costly and often hugely time-consuming endeavour, AI can make a big difference, seeing connections in extremely large data sets that might otherwise be missed, and reducing the time that is taken to create new medication and analyse trial data.

All that is big, bold stuff, but before any of that, there is a huge opportunity for gains in productivity in the NHS simply by automating many of the dull back-office processes that, although they are essential, make poor use of medical staff time. The best uses of AI in the health service are not those that replace people, but those that increase their productivity, give them better information and help them to co-ordinate with colleagues.

The NHS workforce cannot continue to grow indefinitely in an endless race against growing demand. AI and other technologies offer a different path. Instead of asking even more doctors and nurses to do even more work, we can give the doctors and nurses that we already have the thing that they need most: more time in which to treat patients. Both the previous Conservative and the current Labour UK Governments have recognised the potential of technology and the urgent need for big investment in tech in the NHS and the sizeable gains that could come with it. I am interested to see the detail of the UK Government’s proposed digital information and smart data bill, and I hope that the Scottish Government is thinking along similar lines.

However, it is not clear that the Scottish Government recognises the scale of the challenge or the opportunities. Scotland has been a leader in healthcare tech—indeed, many highly successful medical technology companies are based here—but our own NHS is notorious for lagging behind the curve on technological investment. Boards across Scotland use different digital systems to record data in different ways, and they take fundamentally different approaches to the use of technology.

I am not going to call for a single national health board, but there are times when there would be a clear benefit in having all Scotland’s health boards take the same approach at the same time. The deployment of AI and investment in new technologies is one of those times. One of the greatest advantages that Scotland’s health service has is that, like its counterparts in the rest of the UK, it holds vast amounts of population-scale patient data. We need a base tech platform, on which all software technology and AI can sit, that would allow individual health boards to adopt the tech that is specific to their needs while also allowing interoperability and the sharing of patient data.

I could say far more on the subject, but in conclusion, I would argue that we need to have a conversation that is not only about the risks or the benefits of AI, but about how we understand those risks and balance them against the benefits of innovation. Many of the leading figures in the AI sector have expressed concerns about the potential dangers of unchecked AI for our world, but despite those concerns, they continue their work in the field, and they continue to develop new, more powerful AI and refine its abilities.

I agree with Emma Roddick that AI comes with risks, but that is true of almost every new technology. While artificial intelligence may be a uniquely powerful new technology that comes with dangers that we have not encountered before, that is not a reason to bury our heads in the sand. AI is coming, whether we are ready for it or not, and I, for one, would far rather be ready.

The Deputy Presiding Officer

I am conscious that a number of members wish to participate in the debate, so I would be grateful if members could stick roughly to their speaking allocations.

17:56  

Michelle Thomson (Falkirk East) (SNP)

I, too, thank my colleague Emma Roddick for bringing the debate to the chamber, and for her very thoughtful contribution.

While there are a multitude of opportunities with AI, there are clear dangers, and the disruption that AI will cause will put previous revolutions in the shade. In my short remarks, I intend to draw on a few general thoughts from leading AI academics, and explore some implications for professional musicians, as that is a profession that is close to my heart.

First, I draw members’ attention to the letter of 2023 that was signed by more than 30,000 leading brains. The opening paragraph argues:

“Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.”

In a recent interview for Time magazine, Professor Max Tegmark, who is a leading expert in AI from the Massachusetts Institute of Technology, was asked:

“What is the one thing you wish more people understood about AI?”

He responded by saying:

“The pace of change. AI development is progressing so fast that many experts expect it to outsmart humans at most tasks within a few years. People need to understand that if we don’t act now, it will be too late.”

To what extent, therefore, are we in Scotland aware of, and responding to, the threats of AI? I do not think that we are yet, so I welcome this debate.

It is clear that the answer lies in regulation. At present, we see multiple approaches: focusing on diminishing risk, as in the EU; sector-specific approaches; governance and framework principles across various jurisdictions; and even the example of China, which is aligning the opportunities that it sees in AI with its own state interests.

As I said, I will mention some of the threats to musicians from generative AI. It is clear that there is a threat to jobs—for example, for composers. AI is already advanced in writing music for commercial fields such as gaming, particularly background music. It will certainly affect session musicians such as backing singers.

The originality of music could be diminished as AI simply scrapes existing patterns and trends. Conversely, there are also risks to musicians from AI generating music that sounds similar to an artist’s existing work. That could lead to challenges of copyright infringement, all without the legal test bed of precedent on which to draw.

I draw members’ attention to the latest letter that has been pulled together by UK Music and signed by 10,000 musicians, highlighting concerns about the unlicensed use of creative works to train generative AI, and the fact that the regulation that I mentioned is nowhere to be seen.

The real question, however, is arguably a philosophical one. What is music, and to what extent can it be deemed human? Does AI have the potential to diminish our humanity and, if so, in what ways?

For me personally, music has always been the highest form of human expression, and I fear that AI will reduce authenticity and, with it, our human experience. The creation of music involves the human struggle of self-expression based on life experience. Can we feel the depiction of that struggle through the music? I would argue that we can. AI can, arguably, create more “perfect” music, but it is the imperfection that is part of the authenticity, and our humanity. As we disconnect from that imperfection and authenticity, I fear that we may disconnect from ourselves.

AI is here to stay, so the music sector needs to find ways to incorporate it and place authentic human-led music at the heart of any value proposition. Thankfully, musicians are endlessly creative, and I believe that that creativity will ultimately win through.

18:01  

Michael Marra (North East Scotland) (Lab)

I offer my thanks to Emma Roddick for a fine speech and for bringing the debate to the chamber. To be frank, I think that it would be difficult for Parliament to discuss these issues too much, and we could do with much more of that discussion. It is vital that we continue to increase not just our own literacy in this policy area but that of the public and various policy makers. It is important, too, that the Government refreshes its policy approach and updates Parliament in as close to real time as possible, given the scale and urgency of the challenge, because we know that these technologies are already having very significant impacts on people’s lives.

On my better days, I would say that I am an enthusiastic tech optimist. I think that that would be something of a revelation to my wife and children, who bear the brunt of my somewhat overwrought fears around the impacts of mobile phones and social media on the minds and social development of young people. The social effects of ubiquitous misogyny, violence and pornography are played out in what is quite a clear bifurcation in the political identities of men and women. That has been most apparent in recent years in work that has been done in South Korea, but it is now being played out in opinion polling in the United States as that country approaches a moment of significant democratic peril. People are living physically side by side, but they are, intellectually and emotionally, living in different hemispheres. That is driven by algorithms, and by a form of separation by which men and women identify with completely different social sets and see no common ground between them.

Regulation is urgently required, and I believe that it can make a difference. That is partly because I find myself unprepared to be permanently negative about the situation—I think that it would be an entirely self-defeating posture to be so; I would not be a functioning human being, let alone a parliamentarian, were I to be nightly convinced of a Skynet singularity. However, wishing artificial intelligence away would be akin to regretting the invention of the steam engine or the wheel; this technology will not be put away. We must ensure, therefore, that we talk about more than just a broad net benefit and the desperate mitigation of the worst of the risks.

We know about the efficiency—as Brian Whittle highlighted—of AI analysis of magnetic resonance imaging cancer scans. I know that some technology has been developed in Dundee that has been applied in respect of orthodontic adjustments for children’s teeth. AI can both save lives and improve health. However, in some respects, is the inevitable cost of that progress the ultra-efficient targeting of weaponry and killing machines? That is the reality: in my view, the answer is, unfortunately, yes, but that only elevates the human imperative of diplomacy and human contact.

The points that have been made about fake news and its challenges are key. Some time ago, I read a book by Peter Pomerantsev, a University of Edinburgh graduate, called “This Is Not Propaganda”, in which he talked about the bot farms in different parts of the world that are used to interfere in foreign states and influence their democratic processes. We know that Russia does that; we know that it is happening today in the US; and we know that it happened here in 2014. It is a real, current issue that has to be dealt with.

However, economic history tells me that the Luddite fear of mass human redundancy is a repeating pattern, and I am, as yet, unconvinced that technological unemployment is, this time around, any less of a fallacy than in the many previous examples from history—the idea being that this time it will be the ultimate machine.

That said, and as the motion raises, there is a question about the distributional effect of technological advances as a key driver of growth in global inequality. The motion’s reference to investment in social security as a way of enshrining equity is apt. The affordability of safety nets and springboards has been relevant to technological transitions throughout history and is something that we must carefully consider.

18:05  

Patrick Harvie (Glasgow) (Green)

I congratulate and thank Emma Roddick for bringing the motion and for her speech, which properly reflected the balance between opportunity and the threat that some members have emphasised.

We have already had a couple of references to science fiction and, as a lifelong sci-fi fan, I will make the case that those are not flippant. We are dealing with this sudden reality, but fiction writers have been thinking about the issues for centuries and we can learn a lot from them, including some of my favourites, such as E M Forster’s “The Machine Stops” or “Colossus: The Forbin Project”. Even in mainstream science fiction, the “Star Wars” series depicts a society in which people are surrounded by sophisticated and amazing AI, but in which most civilians are just about scratching a wretched living out of the dirt, and it can be read as a critique of Western notions of technological progress. There is a great deal to learn from the way in which fiction writers have understood AI and have explored both the opportunities and threats.

Some threats are unambiguous. Emma Roddick and others have spoken about sexual images as a form of abuse and other members have mentioned the undermining of democracy. We have already seen how disinformation and conspiracy can become the very currency of politics, even at a time when relatively few such tools are available. That can only grow and intensify, which will, in turn, intensify the issues of distrust and disinformation that human beings have been causing perfectly well by ourselves without technology.

There are many other areas of life where there is ambiguity and where both benefits and harmful consequences are likely. It has often been suggested that AI will generate new ways of coming up with drugs or with new molecules and chemicals that we would not have been able to produce otherwise. However, one researcher whom I heard being interviewed on the radio, who had been doing just that, spoke excitedly about the potential benefits and was immediately asked what there was to prevent someone from using the same algorithms to generate new chemical and biological weapons. He paused and did not really answer until, in the end, all he said was, “We just won’t ask it to do that.”

Many jobs will change, but will AI simply change roles or sweep them away? If AI removes the need for humans to do boring or repetitive jobs, it could create whole new categories of work, unleashing new creativity, but that is an economic question more than a technological one. Imagine, a few years from now—and we are only a few short steps away—a six-year-old merely speaking out loud what is in their imagination and, by doing that, creating a whole new computer game and sharing that around the world within moments so that millions can contribute to it. The technology could unleash creativity for video, gaming and coding, just as printing did for mass literacy and the written word. Alternatively, whole categories of creative work could simply be gone, with no opportunities for people to explore careers in those areas.

Human skills, experience and competence could be built with this technology, or they could be massively undermined. In education, if students, teachers and curriculum creators collectively learn new things, we could enrich our education or we could be left with algorithms marking each other’s homework.

In short, there is no way to know yet whether, after thousands of years of human learning, we will become dependent and incapable as a result of this technology and will be left standing in the shadow of algorithms, rather than on the shoulders of giants. We simply do not yet know how to regulate these technologies.

As a species, so far, we have been poor at regulating our inventions, whether in the arms industry or in relation to the disinformation and conspiracies in print media or social media. We do not have a great track record on that.

18:10  

Clare Adamson (Motherwell and Wishaw) (SNP)

I thank Emma Roddick for her very measured and important opening speech and for bringing the topic to the chamber. I also thank my colleagues for their contributions this evening, which have been very interesting.

I have an interest in AI from my previous work as a computer scientist. During my career in computing, I was a British Standards Institution ISO 9001 auditor for my company. The BSI set the rules and the standards by which our company was expected to operate when we were developing software in a range of different economic areas, including healthcare.

Last week, I spent a couple of days of recess back in this place, hosting a delegation from the International Electrotechnical Commission. The commission represents 170 countries and provides a global, neutral and independent standardisation and conformity assessment platform for 30,000 experts. It administers four conformity assessment systems, with members certifying that devices, systems, installations, services and people work as required.

The BSI remains a key member of that international commission. During the summit, it hosted two days of sessions, including sessions on AI technology standards. While we are talking about the subject in here, we sometimes wonder what is happening out there, but such conversations are happening everywhere. The commission delivered a keynote on AI skills and dealt with subjects including ethics and AI, AI in healthcare and care, standards, the future of AI, resilience, the economy and the impact on the environment, many of which we have talked about this evening.

A few weeks ago, I was at the advanced research centre in the University of Glasgow to take part in the Lovelace-Hodgkin symposium on AI. That symposium takes place every year in the university, and it brings together disciplines from across the education sector to discuss AI and its impacts on education. It was really interesting. Part of the work at the very end of the symposium was to use generative AI to produce a picture of some of the topics that had been discussed. The topics of surgery and healthcare had come up, so somebody generated an image of a robotic surgery unit, and all the robots were white. Someone else generated an advanced surgery unit, which had humans in it who were all white. There was a man at the front and then increasingly small women, trailing behind as his support in the operating theatre. That really highlighted the bias in our society that AI can reflect back at us, which we should be wary of.

I also highlight the work of the Scottish AI Alliance and how good it is that our own AI strategy, which was released in 2021, is focused on trustworthy, ethical and inclusive AI. The Scottish AI Alliance is doing a multitude of pieces of work in all sorts of disciplines. It has been in the Parliament, working with the Scottish Futures Forum to discuss the impact of AI, and it has worked with young people in education settings. It has a lot of information out there, including a myth-busting section on its website that covers different areas of artificial intelligence and how we might use it. The website also highlights the issue of perpetuating bias.

I say to members, if we are worried about AI, just wait until quantum computing comes on board. I saw a presentation on it that showed that people could be monitored just by using the wi-fi and radio signals in the room. In a healthcare setting, that is brilliant, as it can monitor breathing and whether a fall has happened. However, just wait until slow horses get a hold of that technology.

18:14  

Pam Gosal (West Scotland) (Con)

I thank Emma Roddick for bringing such an important issue to the chamber, and I thank all the organisations that provided briefings today.

I will start with the positive side of AI before I look at some of the dangers that many members have mentioned. As we all know, artificial intelligence has many benefits. Its uses include the automation of mundane tasks, improved customer service, faster data analysis, reduced human error and the analysis of patient data to help reduce disease. Some 72 per cent of global businesses have adopted AI for at least one business function, and 64 per cent of businesses expect AI to increase productivity. The use of AI has increased substantially in recent years, with the global AI market expected to reach a value of £1.3 trillion by 2030.

However, we are here to talk about the dangers and risks of artificial intelligence. During a debate on AI in the chamber in June 2023, I said:

“We must ensure that AI is developed ethically, with human values at the forefront of its design, and we must address the valid concerns about the displacement of jobs and the potential for bias in AI decision making.”—[Official Report, 1 June 2023; c 80.]

More than a year later, I still have not used any AI at all, and my views have not changed.

Emma Roddick spoke about the Google experience. Just this morning, as I came into work, I was on a call with my sister. My nephew, who is 12 years old, had typed a question into Google—and what happened? An AI answer came up, and they did not know what to do. Mothers phone each other, as parents do, and they talk about how AI has ruined their children’s experience of using Google. AI has positives, but, sometimes, for people who do not understand it and do not have the proper guidance or training, that world is very scary.

I want to talk about how fraudsters use AI to enable fraud. We are all using our Surface devices and, when we are accessing banks, for example, passwords are almost a thing of the past. Everybody is talking about facial recognition and voice recognition, but fraudsters can take advantage of those technologies and create fake documents for banks. We can see, therefore, how many tools fraudsters can use to enable fraud, benefit themselves and hurt us.

AI also creates problems with regard to deepfakes and the spreading of fake news, which we spoke about earlier. Political elections are happening all the time—council elections and elections to the Scottish Parliament and Westminster. We have seen the issue of fake news in America with regard to the then Speaker of the House of Representatives, Nancy Pelosi, and a false narrative that she was an alcoholic and unable to function as the Speaker. Elections are coming up, so there is a big fear that things can go wrong and that people will use AI for the wrong things.

I have made it clear many times in the chamber that the safety of women and girls is an issue that is of the utmost importance to me. Just as AI has many advantages, we cannot ignore its dangers, including the generation of revenge porn, whereby sexually explicit photos and videos are shared without the consent of those pictured. Earlier this year, deepfake pornographic images of Taylor Swift were spread across the internet. The images were viewed 47 million times on X before they were taken down.

However, deepfake revenge porn affects not just famous people and celebrities but ordinary people, particularly women and girls. A 2019 study from the cybersecurity company Deeptrace found that 96 per cent of online deepfake content was revenge porn. The feminist writer Laura Bates said that the use of AI-generated revenge porn is

“a way of saying to any woman: it doesn’t matter who you are, how powerful you are—we can reduce you to a sex object and there’s nothing you can do about it.”

We are running out of time, so I will conclude. There is so much to say on this issue, and I hope that the Government will bring this topic of debate back to the chamber. Although AI has positives, there are a lot of negatives, and we need to look at those seriously to ensure that AI is regulated and controlled properly.

The Deputy Presiding Officer

I call Emma Harper, to be followed by Liam Kerr. You have up to four minutes, Ms Harper.

18:19  

Emma Harper (South Scotland) (SNP)

I welcome the opportunity to speak in the debate, and I congratulate Emma Roddick on securing it. Having listened to what she has described this evening, I value her knowledge and input. I recognise the concerns that she highlighted in her motion, which reflect my own findings regarding misinformation, extreme nonsense, fake news and the use of AI by bad actors to damage reputations, to exploit people and to harass victims, especially in relation to violence against women and girls. We know that that must be addressed, where possible, by regulation and legislation.

However, it will come as no surprise to colleagues across the chamber to hear that I intend to speak about the potential of AI in healthcare, given that I worked in a tech-driven perioperative environment as a registered nurse.

As we have heard, “artificial intelligence” is a broad term, which spans everything from simple decision trees that are akin to flow charts to complex large language models and generative AI, an example of which is ChatGPT. The risks that are posed by each type of AI are different, and it is important that we are careful not to unintentionally tar all AI models with the same brush. The risks with simpler AI and even machine learning are low in comparison with those that are associated with the deep learning that is used by platforms such as ChatGPT.

It is important to note that we have been using AI safely in healthcare since 2010. We introduced AI to replace the second clinician in our double-reader national diabetic retinopathy screening service. We also use AI in dynamic radiotherapy treatment, paediatric cardiology, paediatric growth measurement and the use of radiology for medical image acquisition, including in CT scans. Therefore, it is important that we carefully consider the risks of not implementing AI, as well as the risks and benefits of implementation. A balance needs to be struck, and we must remain cognisant of the fact that overcaution could lead to slower progress in positive healthcare outcomes.

For example, recent evidence from trials of AI to prioritise cases of suspected lung cancer in Scotland indicates that around 600 more people each year might survive the disease as a result of the introduction of AI alongside other measures to optimise the pathway. It is so important that we create a balance and recognise the distinction between different types of AI, and I ask the minister to keep that in mind when it comes to the development of AI policy.

The performance and risks of AI are highly localised to the context in which it is used and deployed. It is impossible to remove all risk in advance of implementation, and it is essential that we proceed to implement AI. The only way to mitigate and manage risk is to understand the risks, and I suggest that that should be done in healthcare through controlled AI. I recently engaged with a healthcare AI expert, who made the point that, in healthcare, the focus is and must continue to be on humans plus AI, not humans versus AI.

I turn to the need for legislation and regulation on the use of AI. To address Emma Roddick’s point about the dangers of AI, legislation, regulation and policy must all help to make AI safer. It is especially important that we focus on the standards that are necessary in implementing the use of AI in public service. For example, the medical device regulations already govern the use of AI in relation to the investigation, diagnosis, treatment, prevention, monitoring, prediction, prognosis and alleviation of disease, injury or disability.

I am conscious of time, Presiding Officer—you told me that I had up to four minutes. I recognise and agree with what Emma Roddick has so effectively described. I highlight the fact that we can and should progress the use of AI, but we need to manage and mitigate any dangers and risks.

18:23  

Liam Kerr (North East Scotland) (Con)

I decided to contribute to this evening’s debate because, when I read the motion, it was a bit like rereading books such as “Brave New World”, “Nineteen Eighty-Four” and “Fahrenheit 451”, which I now suspect are on Patrick Harvie’s bookshelf.

Of course, I recognise the need for caution that has been thrown up by Emma Roddick, Michelle Thomson and others, but I also recognise that, as long ago as 1896, J M Barrie described the printing press as

“either the greatest blessing or the greatest curse of modern times, sometimes one forgets which”.

I recall learning about the fear and loathing that accompanied the invention of the spinning jenny and the first steam locomotives, to which Michael Marra referred.

I suggest that we stand on the brink of a new era, in which artificial intelligence transforms our world in profound ways. It is less a moment of threat and more—as Brian Whittle said—a moment of opportunity. AI is already revolutionising industries. In healthcare, it assists doctors in diagnosing illnesses more accurately and swiftly. Emma Harper’s balanced remarks about that were well made and are worth considering.

AI is also ameliorating issues: for example, as was reported in The Sunday Times last week, the world’s biggest manufacturer of hearing aids, Sonova, is incorporating AI into hearing products to improve them. In agriculture, it helps to optimise crop yields to combat global food shortages.

Of course, Emma Roddick is right that AI can take over repetitive tasks, but that surely frees us to focus on more creative and meaningful work. Patrick Harvie’s balanced comments on that were as fascinating as they were well made. Yes—AI can automate jobs such as data entry and, perhaps, driving, but, arguably, that will help to reduce human error and allow us to pursue new avenues for innovation.

In that sense, I argue that AI is here not to replace us but to amplify what we might achieve. Viewing the issue through a lens of opportunity is what will keep us ahead of the game, much as the Law Society of Scotland sought to do last week when it released a helpful, reasoned and considered guide to how firms should positively respond to AI, and as Linklaters and King’s College London have done by teaming up to provide training for lawyers on generative AI.

Several members have warned about job losses through automation, but Michael Marra’s remarks on that were well made. Here is another thing: let us ensure that education mitigates that issue, because, I suggest, AI democratises access to knowledge and opportunities. As Abertay University, which is one of Scotland’s leading innovators and drivers of AI, told me when I visited recently,

“Embracing AI in higher education offers unprecedented opportunities to enhance teaching, learning, and administrative efficiency, driving innovation in our institutions”,

thereby removing the need for physical classrooms while personalising for the needs of every learner.

AI’s ability to analyse vast amounts of data in mere seconds allows us to solve problems that were once thought to be insurmountable. It is a powerful tool that can enhance our capabilities and help us to build a better, healthier and more sustainable world through opportunities for improved efficiency, productivity and service delivery.

Will the member take an intervention?

Liam Kerr

I will not, because of time. Forgive me.

Emma Roddick is right that the challenges that are posed by the rapidly evolving technology necessitate proactive and comprehensive policy responses to ensure that AI benefits society as a whole. For example, we should create ethical frameworks that guide AI’s development to ensure transparency, fairness and accountability, and we should adopt a thoughtful and deliberate approach in which we remain in control of how AI evolves.

In summary, let us not fear AI but, instead, harness it for the benefit of us all.

18:28  

The Minister for Business (Richard Lochhead)

I thank Emma Roddick for lodging her motion, and members across the chamber for the constructive and thoughtful points that they have made during the debate. I wish that I had more time to respond to them all.

It is clear from listening to members that we all recognise that AI is a life-changing, world-changing, new and fast-moving technology that we cannot uninvent. Therefore, the challenge for politicians and Parliaments across the world is how we navigate this technological revolution and, when necessary, regulate to minimise the threat to people, society and, potentially, in some cases, our world.

Members have raised a variety of threats, both perceived and already visible in real time in the damage that has been caused. We have heard about the threat of job displacement as machines take over certain functions in workplaces; we have heard about privacy issues relating to how data is used or abused; and we have heard about bias and discrimination in language-learning machines. If the data that machines rely on is biased, that bias is amplified by AI in some circumstances, so we have to provide safeguards in that regard.

We have heard about ethical concerns relating to the potential dangers of using AI for surveillance or military operations. We have heard examples of deepfakes and how fake videos are causing massive issues, as they can be used for blackmail or defamation.

Of course, other threats include highly convincing phishing emails and disinformation campaigns. That issue is very topical, given the US election that is just around the corner and the potential threat to democracies.

The key, then, is to navigate and to regulate. We need proper regulation in this country but, at the same time, we need to capture the benefits of AI, which many members have mentioned.

Since the publication in 2021 of “Scotland’s AI Strategy”, the Government has been paying close attention to all the opportunities and risks relating to AI. We published our strategy ahead of the UK, and years before ChatGPT made the news. Our strategy in Scotland aims to make this country a leader in “Trustworthy, Ethical and Inclusive” AI. That is the strapline for our policy, and it is essential to addressing the concerns that are raised in Emma Roddick’s motion.

I whole-heartedly agree with Ms Roddick that the Scottish Parliament needs to engage regularly with this important topic. Michael Marra and others made the point that we cannot debate the issue often enough because of the implications for our society and the future of humankind.

It is crucial that we consider the entire picture of the potential risks and benefits of AI, because that helps us to reach balanced views and decisions on how to make AI work for Scotland’s people, our businesses and our environment. Most experts agree that the type of impending singularity that is highlighted in the motion—that is, machines thinking for themselves without needing humans—is still very far from reality. Although there is rapid advancement in the technology overall, that scenario is still some time away, and we have to keep that in the back of our minds. It is important to note that AI does not act independently; rather, human beings design and operate AI systems.

The motion mentions research by Massachusetts Institute of Technology on AI models that are beginning to exhibit deception. Research of that nature is very much in its early stages. Again, we have to think about whether we want to use terms such as “deception”, which carries the risk of assigning human characteristics to what are, still, machines. There is an important debate to be had on that.

AI is a complex and quickly developing field of science and technology, but we need to remember that it is people who drive the development and use of AI. I believe that we need to be careful in the language that we use so that we do not inadvertently fuel an environment of fear. That is a message that many members across the chamber have echoed. Others, including Liam Kerr, pointed out that people were fearful of past technological developments at the time, but society has progressed and moved on with each technological revolution so far in the history of humankind. We have to bear that in mind, too.

Clare Adamson and others mentioned the Scottish AI Alliance, which leads on delivery of Scotland’s AI strategy. It has been doing valuable work to educate and empower our people and businesses. I urge people to check out its website and all the resources that are available there, and to share them with others. There are programmes on the website, such as “Living with AI” and the “Scottish AI Playbook”, which are helpful for organisations and businesses that want to know about AI and how to use it. Those online resources provide members of the public, as well as businesses, with the right tools and information to better understand what AI is, how they can best use it and how it impacts on them and wider society.

We believe in mitigating the risks that are associated with AI, as well as in seizing the opportunities. That is why we work with experts and our UK Government counterparts to try to stay ahead of those risks. It is for that reason that, earlier this year, I mandated in Scotland the use of the Scottish AI register for the Scottish Government and core agencies, and have encouraged its use by the entire Scottish public sector. We are the first country in the world to mandate such a register, because we recognise the importance of transparency in the public sector and of leading by example when it comes to the safe and effective use of AI.

As Emma Harper said—and this is something that the motion perhaps does not address—it is crucial to highlight that there are equal dangers and risks if people do not utilise AI. There are risks to people’s health, the economy and the environment. Brian Whittle, among others, gave some examples of the wide benefits of using AI. For instance, the radiograph accelerated detection and identification clinical trial—RADICAL—is taking place in Scotland to help to detect lung cancer earlier, and new theatre scheduling technology can cut waiting lists by making operating theatre scheduling more efficient. Other examples include accident prevention, road safety and the improvement of efficiencies in agriculture. A host of innovations are happening in Scotland at the moment in which AI is playing a central role.

The motion, rightly, highlights the importance of regulation. I assure members in the chamber that we are working hard on behalf of the people of Scotland and looking at what regulation can do to protect their interests. Unfortunately, most AI regulation remains within the remit of the UK Government, not the Scottish Government, but we have been encouraged, since the recent election, to see a slight deepening of the understanding that we need to ensure safe and transparent control of AI. We want to see more being done, and we will continue to speak to the UK Government about the issues.

We will do that to ensure that the interests of the people and businesses of Scotland are considered in any future UK AI regulation, such as the Artificial Intelligence (Regulation) Bill, which was mentioned in the King’s speech earlier this year. We hope to have a close dialogue with the UK Government about the impact of AI on Scotland—not just on Scotland’s place in the wider world but, in particular, on our devolved responsibilities, including education and health. It is important that the Scottish Government is closely engaged in the UK Government’s work in taking forward that legislation.

I reiterate my thanks to Ms Roddick for lodging the motion. Scotland can make the most of AI only if we can minimise and control the risks, but it is really important that, at the same time, we take a balanced approach and capture all the benefits. I hope that Parliament will debate the issue time and again in the years ahead.

The Deputy Presiding Officer

Thank you, minister. That concludes the debate. I close this meeting of Parliament.

Meeting closed at 18:36.