Item 4 is the exam diet for 2020 and 2021. In our first evidence session, we will look at the review that was led by Professor Mark Priestley of the 2020 exam diet and at the plans and preparations for the 2021 exam diet. I welcome our witnesses, Professor Mark Priestley and Dr Marina Shapira from the University of Stirling. We will go straight into questions from members, and I ask everyone to keep questions and answers as succinct as possible.
I should have noted earlier that we have apologies from Alex Neil MSP. The first questions are from Daniel Johnson.
Thank you, convener. I begin by thanking Professor Priestley and Dr Shapira for the report. Given the timeframes and the significance of the issues, the report is a very detailed and useful bit of work.
I will focus on pages 20 to 22 of the report, and I will characterise what I think the report sets out. It highlights the fact that the Scottish Qualifications Authority stated in June that it would refer to examination centres following the moderation process to consider whether there was any evidence at a centre level to explain differences. However, the report goes on to say that, in essence, that was abandoned later that month. That was a major change, as it meant that there would not be a moderation process that would ultimately look at whether there was evidence for the original estimate from the examination centre. Could Professor Priestley or Dr Shapira confirm that that is a correct assessment of that change? Do they agree with me that that was a fundamental change in what the grades that were awarded actually graded—the individual or the centre?
I will start on that; if Dr Shapira wishes to chip in as well, that is great.
It has been made absolutely clear to us throughout the inquiry process that the intention was always to have a qualitative element to the moderation, post the submission of estimations at the end of May. The SQA has made it clear that that became impossible because of the sheer scale of what it saw as overestimation. It is a moot point whether that was overestimation or not, or whether we are looking at a different system altogether. We can perhaps return to explore that.
In the SQA's view, what necessitated the change was the scale of the overestimation, which simply made things impractical, given the time constraints and the resources that were available to engage directly with centres.
We think that there is a middle position, and it has been clearly communicated to us by some of the witnesses we spoke to as part of the inquiry that it would have been possible to go back to the local authorities. Although it might not have been possible to deal with anomalies and outliers at an individual level—it was probably not possible, in our view—it would certainly have been feasible to do some statistical analysis of the patterns and trends in the data and then to engage with local authorities to explain and provide rationales for variation at a cohort or subject level within centres. That would have reduced the need for so many post-certification appeals.
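To illustrate the kind of cohort-level screening described above, here is a minimal sketch. The field names, threshold and data are hypothetical; this is not the SQA's or the review team's actual method.

```python
# Hypothetical sketch: flag centre/subject cohorts whose 2020 estimates
# diverge sharply from recent attainment, so that a local authority could
# be asked for a rationale. Field names and the threshold are invented.
import pandas as pd

def flag_cohort_anomalies(estimates, history, threshold=0.15):
    """Return cohorts whose estimated A-C rate differs from the centre's
    historical average A-C rate by more than `threshold`."""
    merged = estimates.merge(history, on=["centre", "subject"])
    merged["gap"] = merged["ac_rate_2020"] - merged["ac_rate_hist"]
    return merged[merged["gap"].abs() > threshold]

estimates = pd.DataFrame({"centre": ["C1", "C1"],
                          "subject": ["Maths", "English"],
                          "ac_rate_2020": [0.82, 0.58]})
history = pd.DataFrame({"centre": ["C1", "C1"],
                        "subject": ["Maths", "English"],
                        "ac_rate_hist": [0.60, 0.55]})
print(flag_cohort_anomalies(estimates, history))  # flags C1 Maths (gap +0.22)
```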
I agree with the committee member that what happened was a profound change, as it effectively changed what was going to be a quantitative and qualitative approach to moderation at a national level to one that was purely quantitative. That meant that the scale of anomalies that were caused by the application of a statistical algorithm was going to be greater, and that in turn meant that there were going to be far greater numbers of appeals post-certification.
Do you want to add anything to that, Dr Shapira?
No—you covered that issue well, so I do not have anything to add at the moment.
Professor Priestley, given that you agree with the proposition that that was a very significant change in approach and in what the methodology fundamentally achieved, could you take us through what communication there was from the SQA to other bodies? I was not aware of any announcements regarding that change, which you have described as “profound”. I do not believe that it was made known to the committee. You have examined communications between the SQA and the Government: what level of communication was there? Did the SQA seek any form of confirmation that it was correct to make that change in approach?
I do not know what discussions there were, and I cannot remember the fine detail, as we saw an awful lot of communication, but my understanding is that the public announcements were made nationally, and there were communications with groups such as the teacher unions, indicating that a qualitative process would take place. It was subsequently announced in June—publicly, I think—that that was no longer possible. I think that that was a matter of public record.
On the methodology, you state that there was a possibility of a middle way through. I had direct experience of this: in one school in my constituency, almost half of the higher grades were downgraded. Was any attempt made to spot and identify outliers?
Although I can understand that the volume of the grades that were affected and the scale of the process might have prevented granular investigation and interrogation of changes, to what degree did the SQA investigate what went on at centres where the changes were significant and whether there were issues with the model that was used? Was such an exercise undertaken? If not, should such an exercise have been undertaken?
We do not believe that such an exercise was undertaken. There was certainly no communication about centre results between the SQA and schools and colleges after the estimates were put in. Organisations such as the Association of Directors of Education in Scotland appealed for a nationally run statistical approach, but ADES’s suggestion that the Government run such an analysis was not taken up.
We think that no attempts were made to do such granular analysis of the grades. Although it was accepted that a large number of anomalies would be caused by the statistical moderation procedure, it was assumed, with some justification, that the post-certification appeals process—the review process—would deal with those. However, of course, that raises, in turn, further questions about the scale of that operation, the impact on the young people who would have to undergo that additional post-certification process and the fact that it was presented and, more important, construed as a post-certification appeals process rather than as being part of the awarding process.
The SQA’s contention to us is that, if the model had been allowed to run to its conclusion, many of those issues would have been picked up but, as I have just indicated, there were plenty of caveats to that.
I will leave my questioning there and allow others to come in.
Thank you. I have a technical question; I should say that it is a long time since I studied statistics at university. One of the concerns was about what was known in the press as “the waterfall effect” but which you describe in your report as “an avalanche effect”. That was to do with the fact that not all estimations were outwith a school’s historical estimates. A move from B to C might have been matched to an historical position. The school might have had too many A bands, but the A bands were not merged with the B bands or the C bands if they were moved down.
Would it have been sound statistically for students who fitted in with the estimation in the historical model to have preserved their position and for only the moderated bands to have been merged into those groups? Would that have been a reasonable model to have followed?
I will defer to Dr Shapira on that one, as she has considerably more expertise in statistics than I do.
I should start by saying that we did not have access to the data or to the statistical algorithm or the code that was used for the algorithm. Therefore, what I am about to say is, to some extent, speculative. It is based on my understanding of what was done, what was described in the technical report and what we have been told.
Based on that, the waterfall effect or the avalanche effect resulted, to an extent, from the strategy that was chosen to apply in the algorithm. I cannot say exactly why that strategy was chosen or why it was not approached slightly differently. It is possible that if, instead of moving entire bands as a result of the moderation, the grades that were downgraded had simply been merged with the lower band grade, the result would have been different. [Inaudible.]—bands were moved down and, as a result of that, the lower band was also moved down. That means that, even if those who were on the lower band were not overestimated, they were penalised and moved further down because of the overestimation on the higher band.
Only with access to the data, the algorithm and the exam modelling could we say whether the result would have been different if it had been done differently.
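The contrast that Dr Shapira describes can be made concrete with a toy model. The sketch below is speculative in the same way as the testimony: the review team did not see the SQA's code, so the band sizes, caps and both strategies are invented purely to show how cascading overflow can penalise a correctly estimated lower band.

```python
# Toy illustration of the "avalanche effect": fitting estimates to a
# historical cap on each band. All numbers are invented.

BANDS = ["A", "B", "C", "D"]

def cascade(estimated, caps):
    """Push any overflow above a band's cap into the band below, so a
    downgrade at A can displace pupils who were correctly estimated at B."""
    adjusted = dict(estimated)
    for upper, lower in zip(BANDS, BANDS[1:]):
        overflow = max(0, adjusted[upper] - caps[upper])
        adjusted[upper] -= overflow
        adjusted[lower] += overflow   # displaced pupils land here...
    return adjusted                   # ...and may overflow again in turn

def merge_down(estimated, caps):
    """Demote only each band's surplus, judged against the original
    estimates, without cascading the displaced pupils any further."""
    adjusted = dict(estimated)
    for upper, lower in zip(BANDS, BANDS[1:]):
        overflow = max(0, estimated[upper] - caps[upper])
        adjusted[upper] -= overflow
        adjusted[lower] += overflow
    return adjusted

estimated = {"A": 40, "B": 30, "C": 20, "D": 10}   # teacher estimates
caps      = {"A": 25, "B": 30, "C": 25, "D": 100}  # historical pattern
print(cascade(estimated, caps))     # {'A': 25, 'B': 30, 'C': 25, 'D': 20}
print(merge_down(estimated, caps))  # {'A': 25, 'B': 45, 'C': 20, 'D': 10}
```

In the cascading version, ten pupils estimated at C end up with a D even though the C estimates were within the historical cap; in the merge version, nobody below the A band is moved.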
09:30
I will follow up on the appeals process. As it is a huge issue, colleagues will also have questions on that.
Could the SQA have better explained the importance of the post-certification review and the process that Professor Priestley referred to? The committee has been copied into a letter from Connect to the SQA that urges it to consider appeals for this year, as many pupils have been left seriously disadvantaged. Constituents have also contacted me about the issue.
Could more have been done to communicate the post-certification review system better? Do you think that it is now too late, or should the SQA take into consideration appeals from students who were held back this year because of the situation?
It is important to distinguish between the post-certification review process that took place in 2020 and the subsequent appeals process, which was applied post-10 August, because they are about different things.
It is evident that the PCR was widely seen as an appeals process, which is normally fairly limited and applies to grades that are being disputed. The PCR is, or should have been, much more about a moderation process of manually adjusting, through qualitative input, those grades that had been misapplied, if you like, through the statistical algorithm. The PCR is part of the moderation process. However, because it was called a post-certification review and there is a long tradition of seeing an appeal as something that happens post-certification, it was inevitable that that would be seen as a separate part of the moderation system, rather than as integral to it.
Perhaps the SQA could have done more, but there is evidence in the communications that we saw that that was clearly understood at the SQA and was communicated. The messaging could have been clearer. Perhaps the terminology could have been a little different—for example, not calling it a “post-certification” review.
However, it is easy to say that with the benefit of hindsight. A lot of the assumptions that were made during the awards process and the pandemic were based on systems that work perfectly well in normal years but which, for various reasons, were found lacking in the face of the restricted timeframes and the ever-changing situation during the pandemic.
As we said in our report, it is important not to attribute blame but to think about how it can be done differently in future. Yes, we could have done things better, but perhaps we could not have done them better given what we knew at the time—that is important to stress.
On the appeals process that was put in place, we indicated in our report that we felt that it was unnecessary and counterproductive to restrict the grounds for appeal to take out what was, effectively, academic judgment. If a student is unhappy with a grade that has been awarded by a school, they can appeal only on the grounds that there was an administrative error, either by the SQA or the school, or that there has been discrimination.
It is not possible—we have seen examples of this—to correct errors that are due to insufficient use of the available evidence or where a mistake in academic judgment was made. My understanding is that there is still a small number of people in that situation—they are fighting vigorously to get those wrongs addressed. It is a small number of students, but a significant problem, in terms of the publicity that is being generated. My view, which is not necessarily the view of the team, is that it would not do any harm to have another look at those cases, because it is about the life chances of individual young people.
That is helpful. Perhaps Dr Shapira would like to come in.
No.
My question is the same. The situation that was just referred to by Professor Priestley has been raised in previous evidence sessions—young people who feel that they were assessed incorrectly and who have been unable to appeal because the SQA will accept appeals from schools only, yet, in essence, their appeals are against their school’s judgment.
I was struck by something that you said in your answer to Daniel Johnson, Professor Priestley. You talked about the impact on young people of the post-certification review process. Will you elaborate on what you meant?
The logic is that the post-certification review is seen widely as an appeal, and that people are having to appeal against a judgment that has already been made. That is quite different from saying that the provisional grade that has been awarded will be looked at on the basis of further evidence.
The psychological impact on young people is significant. There are young people who are waiting on university places, so their exams are high stakes. Suddenly, they find that their grades are not what they expected, and that they will have to go through what they see as an appeals process—they see it that way, regardless of whether it is an integral part of the system. It is a stressful time for those young people.
There are different ways in which we might look at that for the future. One is to ensure that the post-certification review process happens before certification. Given the timeframes, I do not know whether that is possible; this year, it probably was not possible. However, when the estimations were in at the end of May, it perhaps would have been helpful—we stressed this in the report—to have made absolutely clear what the moderation process was going to look like technically, and what its likely impact would be, including its implications for what would be large numbers of young people having to put in appeals, regardless of whether those appeals were going to be expedited by schools, subject to a fast-track process and free of charge. Part of this is about messaging, and another part is about the impact on young people, who expect to receive their results in early August but instead have to wait while results are challenged, even if that is integral to the process.
Why has the SQA been so resistant to the idea of young people being able to appeal their assessment directly?
I think that it boils down to a fear that that would open the floodgates and result in thousands of appeals. However, it is quite feasible for young people to put in appeals that are supported by their school and to not have to wait for the school to agree to that. It is quite possible to specify tight criteria for that. We have not recommended in the report that young people should be able to put in direct appeals; we have recommended that the situation be reviewed and that evidence be taken on it. I do not want to say much more on that, although I have personal views on it.
With your forbearance, convener, I have one more question, which is on a slightly different topic. It is really just a yes/no question, and it relates to Dr Shapira’s previous answer. When we took evidence from Professor Priestley and Dr Shapira in the course of the review, they both said that there was data that they hoped to be able to get from the SQA but which it was unwilling to provide. Was that data ever forthcoming? Were you provided with everything that you needed to carry out the review in full?
I would like to say more than yes or no on that, because I would like to slightly challenge the question. We were not denied information by the SQA; the issue is a little more nuanced than that.
When we were setting up the terms of reference for the review, we said that, if possible, we would like to look at data and the algorithm. Scottish Government officials made clear to us at the outset that that would be investigated but that it was likely to be problematic. Subsequently, as we said to the committee when we met back in August or September, we also explored the processes and resources that would be necessary in order to do that sort of analysis.
One process that was very problematic was ethical approval. We had fast-track ethical approval from the university to do the review, but it would have been much more complicated to get ethical approval for that sort of work. A second issue was that we were working to a short timeframe with quite a small team. As was pointed out at the start, we produced quite a detailed report, and we simply did not have the time and resources to do that analysis as well. Hence, our report recommends that the analysis should happen, and that it should be done over a longer period, because it is important that we understand exactly how the approach works and its implications.
Thank you—that is helpful.
My first question is really just to open up a discussion with the panel. I appreciate that the review was very short and that there were time limitations in turning it round. I thank you for producing it—it is an excellent report. Were there any stakeholders with whom you would have liked to have engaged more in formulating the report? Did you simply run out of time? If you had had the benefit of more time, as academics, could you have expanded the report? In what areas is there still scope for analysis?
When Marina Shapira and I were doing the review, we commented frequently on the fact that we were squeezing a one-year research project into a six-week period. At times, it felt like it sort of took over our lives.
There certainly were groups with which we would have liked to spend more time. For example, in the timeframe, it was not possible to explore the full range of young people’s groups that we would have liked to talk to. We tried to engage with groups representing looked-after young people, for example, and with youth work, so we worked outside schools a little. We engaged with a range of young people’s groups. For example, the Children and Young People’s Commissioner Scotland convened a panel of young people for us, we had representatives from the Scottish Youth Parliament and we talked to the SQA: Where’s Our Say? campaign group. However, we were not able to engage with the full range of groups, simply because some did not respond in time and because of the limited resources that we had for the review.
I would certainly welcome research that looked in more detail at the experiences of young people, because ultimately they were the people who were most affected by the whole experience. However, I am afraid that that would have to be a funded research project; it will not happen without funding. We could learn an awful lot about the whole process from that.
As I alluded to in my answer to Mr Gray, we believe strongly that it is important for independent researchers to look in detail at the algorithm that was used and to explore and evaluate alternative statistical approaches, because there was some merit in the approach that was taken. Researchers could look at how we might do that sort of thing better in the future.
It is also important to consider how the application of a statistical model had particular effects on particular groups—for example, those with demographic or protected characteristics—and to understand for the future how we can avoid such effects. Fundamentally, those are equity issues, as well as being issues of understanding how a technical process works.
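As a concrete example of the kind of equity analysis suggested here, one could compare downgrade rates across a grouping variable such as SIMD quintile. The sketch below uses invented record-level data and field names; the real analysis would need the SQA data that the review team did not have.

```python
# Invented record-level data. Grade letters sort so that a "greater"
# letter is a worse grade (A is best), so awarded > estimate is a downgrade.
import pandas as pd

records = pd.DataFrame({
    "simd_quintile": [1, 1, 1, 5, 5, 5],            # 1 = most deprived
    "estimate":      ["A", "B", "B", "A", "A", "B"],
    "awarded":       ["B", "C", "B", "A", "A", "B"],
})
records["downgraded"] = records["awarded"] > records["estimate"]
print(records.groupby("simd_quintile")["downgraded"].mean())
# quintile 1: 0.67, quintile 5: 0.00 in this toy data; a gap like this is
# what an equity analysis would need to explain or correct for.
```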
09:45
That is helpful, thank you. I agree that it would be interesting to hear about those two areas.
I will lump my next two questions together, to save time. In appendix A, you list the position papers that were submitted to the review. My question is a technical one. Will you explain why only six of the 17 papers that were submitted to the review are publicly available? Was that a request of the organisations that submitted the papers? Did they want their submissions to be kept anonymous, or were they asked to keep them anonymous? I am wondering how much we will get to dig behind the report and the papers that were submitted by the various organisations.
My next question is about the SQA. At various points in the report, there is a substantial amount of criticism of the SQA. You spoke to a lot of stakeholders. What is the general view of how the SQA handled the matter? For example, in the report, you say:
“There is widespread criticism by respondents of SQA for a perceived lack of transparency and a failure to engage”.
The report also says:
“It was extremely disappointing, but not unexpected, that the SQA chose not to engage with any professional organisations”.
In a report of this nature, those comments strike me as worrying. What is the general feeling in relation to how the SQA was perceived throughout the process?
Starting with the position papers, the answer is brief. We treated the review as a research project with data, and we made undertakings of confidentiality. For example, we have not published the transcripts of the focus groups that we had, because we respected people’s confidentiality. We tried to provide safe spaces for people to talk frankly to us. With other reviews, I have seen that there can be a tendency for people to be nervous about speaking in such forums.
In terms of the position papers, we have not deliberately kept them private; we have just indicated where the organisation concerned has already published the paper itself, or where they have not done so. We had no intention of publishing the papers ourselves. We had some brief discussions about whether we could include them as appendices, but some of the organisations had indicated that they would be less frank in their appraisals if that were to happen. We felt that it was important to have appraisals that were honest, as those groups saw it. It is really up to the organisations, not us, whether they publish the papers.
On the perceptions of the SQA, I will preface my answer by saying that there is ample evidence that the SQA acted with complete integrity and professionalism, and worked extremely hard. There is plenty of evidence that it was an all-consuming, 24/7 job for many people in the SQA over the summer. The organisation lacked the resources and, in some ways, the experience to deal with very changed situations. For example, it had to operate new systems and deal with new approaches.
We do not want to be seen to be criticising the SQA’s professionalism, but there are comments from stakeholders that indicate that there is a lack of trust in the SQA across the system. People can make their own judgments on whether that view is merited.
It seems that there is a perception that the SQA lacks transparency. Throughout the review, we got the impression that the SQA was reluctant to share technical details, which might stem from a cultural expectation within the organisation that technical expertise resides with it and not elsewhere. That may work perfectly well in normal years, but in the year of a pandemic, with extraordinary circumstances and extraordinary measures, perhaps there was a need for a more open, collaborative approach, particularly around decision making, where different types of expertise, such as the contextual knowledge that local authorities bring to the table, would have been more appropriate. It may be that the SQA’s technical expertise does not need to be shared openly in normal years. However, this year, it would have been helpful to have had a more transparent approach.
That was evident in a couple of cases. In one case, right up until the technical report was published, there was a reluctance to share the full technical details of the statistical approach that was used. We are not advocating that those details should be published—let us face it: most people would have no interest in or understanding of the full details of statistical approaches. However, at the end of May, it would have been helpful for the SQA to have articulated explicitly that it was going to work through such an approach and that that would have certain implications, including the likelihood of—indeed, the necessity for—the process that involved large numbers of post-certification reviews.
The other example boils down to collaborative working. Our view is that an approach that involved co-construction of solutions, bringing in expertise from different organisations, would have been helpful, even if it might not have been wholly feasible because of factors such as pressured timeframes. We are very much of the view that, for next year, there needs to be collaborative working, because organisations bring different types of expertise to the table. Simply by involving a range of voices—rather than having a particular voice from one organisation making decisions—we could avoid some of the issues that arose this year.
I hope that that is clear and helpful.
It is very helpful—thank you. It is difficult for anyone to answer such a question without giving an opinion, because it is a theoretical question that involves perception.
Your report said:
“SQA has stated to us that there is no regret in respect of the moderation approach used this year”.
Its only regret is that the process was not allowed to run its course because of ministerial intervention, which changed it quite dramatically. Therefore, the SQA’s regret lies in its not having been able to complete the process that it had started, rather than in the fact that there was so much criticism of the process.
However, as you have pointed out, we know that the process led to a situation in which results for schools with high levels of disadvantage were downgraded more than those for schools in more advantaged areas. Were you surprised that the SQA has no regret over that moderation process and its outcomes? What could it have done differently?
One thing that became evident to us was that the SQA saw the process as a technical one. However, as a social researcher, particularly one who works in a relational field such as education, I think it is important that technical processes should also be understood in terms of their social impacts. Perhaps that was not well enough appreciated, and adopting the type of collaborative approach that we have been discussing might have mitigated that to an extent.
I want to follow up Jamie Greene’s point. Given the limitations on time and resources that you have highlighted, and the Government’s response, are you confident that, ahead of next year’s process, the lessons that need to be learned have been learned?
My understanding is that a great deal of work is already being done. Our involvement ended with the publication of the review report but, since then, I have had several conversations about moderation systems. For example, I have had conversations with local authorities. The optimist in me hopes that lessons have been learned.
It is important to stress that, rather than pointing the finger of blame at what was not done correctly last year—in what, let us face it, were difficult if not impossible situations—looking critically at what happened should enable us to do things better this year, given the benefit of the hindsight that we now have and the extended timeframes that are available this year.
However, my fear is that, due in part to the culture of the system, the solutions that will be put in place this year will be bureaucratic, will involve vast workloads for teachers and will be based on quite narrow approaches to assessment. I will give an example of that. I had quite a few conversations with teachers following the publication of the report. The report mentioned “validated assessments”, and it became clear that many teachers understood that to mean the return of NABs—national assessment bank tests.
I hope that, this year, we will see a move away from a reliance on pencil and paper testing, including in exams, and to a more eclectic range of assessment methodologies. We work in universities, and university degrees are awarded through a range of methodologies, not just exams. In fact, many courses do not have exams. I hope that, this year, we see the development of a mixed economy of assessment methods. There is also a strong message here for the Organisation for Economic Co-operation and Development review about the future of qualifications and how they are certificated and awarded.
It is good to be optimistic. The cabinet secretary will be speaking to the committee as well, and it will be interesting to hear about his position on that.
I have a fairly brief question about the local authority role. Your report says that you
“found evidence of highly variable approaches to local moderation”.
You found that at least one local authority adjusted grades directly before they were submitted. Does that imply that, after teachers’ estimates were made, there were actually three potential adjustments—at centre level, local authority level and then by the SQA in the moderation process? Was there authority for that adjustment in the guidance that was issued to centres?
That is an interesting question. It is not surprising that we see variability in practice, because Scotland has 32 local authorities with very different characteristics. For example, some are very large and some are very small, and they have different approaches and levels of resourcing. It is not surprising at all that we see different approaches to this particular problem, because we see different approaches anyway. Some local authorities are much more directive than others.
The implications that you mention are possible. I will ask Marina Shapira to expand on this, but I can say that it is entirely possible that grades were adjusted multiple times instead of simply being awarded by the school and then adjusted using the statistics. We talked to some respondents who feared that, because they had already been rigorous in adjusting grades downwards in response to the local authority input, the grades were being hit with a double whammy. I do not have the statistical background or access to the analysis of the data that would be needed to comment on that, but Marina can say more.
Nothing in the guidance precluded local authority input in that way. The SQA made it clear to us that local authorities were expected to have rigorous moderation procedures in place, although it did not necessarily specify what those were. One recommendation that we have made to local authorities, subsequent to the report, is about the development of a national system that is locally applied. That would give a greater degree of consistency of process across the country, instead of having local solutions that can be variable.
Marina might want to chip in on the question of adjustment.
One of the problems was that the moderation that was undertaken this year was a purely statistical process. Realistically, that kind of process can achieve only one thing: it can try to make the estimates that are produced look more reliable, in the sense that they are more like previous years' grades. That is basically the only thing that can be achieved by using a statistical approach to moderation.
However, making grades look reliable is only one task in the moderation process, and the moderation process is much more than that. The aim of moderation is to ensure that academic standards are appropriate, that robust criteria were used to produce the grades and estimates across all courses at all levels, and that the grades are fair to individuals, in the sense that they reflect what the individuals know and can do. That could not be achieved through statistical moderation, and that is where the role of local authorities is very important. Different local authorities had different moderation processes. That depended on the level of resources and, sometimes, on the overall approach of the local authority. Some local authorities told us that they deliberately did not interfere in the estimation process because they considered that doing so would call into question teachers' expertise and professional judgement.
However, it was clear that, once the statistical moderation had been done, there was an opportunity to go back to the local authorities. Professor Priestley mentioned the possibility of letting the local authority look at the outliers to try to understand the discrepancies that were produced as a result of moderation. That was not done. Our impression was that it could have been done, although not by all local authorities.
Some local authorities have excellent resources and are capable of doing such work. Those local authorities did that work after the grades were published on 4 August, and they quickly produced excellent analyses of the trends and differences between the estimated and moderated grades. They were able to break that down by schools with different characteristics and were quickly able to produce evidence of injustices at the level of entire courses, cohorts or individual students. However, as I said, that was not the case for all local authorities: not all of them have the appropriate resources, and that needs consideration.
The approach was also different across different schools. Some schools had the capacity to carry out such analysis after the publication of the grades. It is important to consider the capacity of local authorities and schools to carry out such moderation, to identify outliers and to decide how to deal with them.
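By way of illustration only, the following is a minimal sketch, in Python and with invented numbers, of the kind of outlier check on the gap between estimated and moderated grades that a well-resourced local authority could have run; the data, threshold and column names are all hypothetical rather than drawn from any authority's actual analysis.

```python
# Illustrative sketch with invented data: flag school/subject cohorts whose
# moderated results diverge sharply from the teachers' estimates.
import pandas as pd

results = pd.DataFrame({
    "school":    ["S1", "S1", "S2", "S2", "S3", "S3"],
    "subject":   ["Maths", "Physics", "Maths", "Physics", "Maths", "Physics"],
    "estimated": [72.0, 68.0, 61.0, 59.0, 80.0, 75.0],  # % of cohort at A-C
    "moderated": [70.0, 66.0, 45.0, 58.0, 79.0, 60.0],
})

# A negative shift means the cohort was downgraded in moderation.
results["shift"] = results["moderated"] - results["estimated"]

# Flag cohorts downgraded by more than 10 percentage points for a
# qualitative look at whether there is a rationale for the variance.
outliers = results[results["shift"] <= -10.0]
print(outliers[["school", "subject", "shift"]])
```

The flagged cohorts are exactly the cases where a qualitative conversation with the school could have established whether there was a rationale for the variance.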
Thank you, both, for your answers.
I have a brief supplementary question, although it is probably more of a request than a question. Professor Priestley mentioned that there were alternative approaches, both statistical and qualitative, that could have been considered. I expect that they are quite technical. Could you get back to the committee with an assessment of what those alternative approaches might have been? I am thinking of those that were not considered by the SQA, noting the options that it considered and that are listed on page 19 of the report.
We are happy to communicate some ideas about that. On the qualitative side, we were referring to a process of sense checking, which would have taken place in June and which, we understand, the SQA had initially planned to do by going back to centres. It was the scale of the estimation discrepancies, and the way in which they did not fit with previous patterns, that made that impossible in the view of the SQA.
However, we are of the view that, for example, the SQA could have talked to local authorities about cohorts, subjects and schools where there was a rationale for the variance. Many schools and local authorities have produced such rationales, saying, for example, that a cohort has done better in physics this year than in previous years because there is a new teacher who has approached the teaching of the subject in a different way.
Of course, that is much more difficult to do at an individual level. The main issue at an individual level relates to students who are high performers in schools that do not normally have high performers. They may have found their grades being adjusted much more because of the historical performance of the centre. Conversely, low-performing students in historically high-performing centres might have found their grades being increased. Those are issues that, presumably, would have had to be picked up by the post-certification review process, because the sort of qualitative sense making that we are talking about at local authority level would not have addressed them. However, as I said, we think that it could feasibly have addressed anomalies and outliers at the cohort and subject level.
Before we move on to the issue of equalities, I have a question about narrowing the attainment gap, which is one of the Scottish Government’s big policy objectives. Many schools, particularly in deprived areas, will have been engaged in work to reduce the attainment gap. Is there a danger that what happened with regard to moderation would have worked against schools that were successful in improving the attainment of young people, particularly in the lower grades? That is, would that improvement have been moderated out because the historical evidence was not there to support the schools and those achievements?
Potentially, yes. However, we are not looking at a sudden rise in attainment; we are looking at a trajectory of gradual change. A statistical model that looked at trends over the previous four years as opposed to averages could, in theory, have picked that up.
Your point raises interesting questions about whether it is appropriate to measure performance year on year in a way that assumes that it will be stable over the years. One of the aims of Scottish policy is to raise attainment, and one could argue that good teaching is about raising attainment. To an extent, the assumption that qualifications will stay the same year on year denies the possibility that good teaching and interventions such as the attainment challenge might actually improve attainment and result in better accreditation over time.
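A minimal sketch, with hypothetical figures, of the distinction being drawn here: a moderation benchmark based on a four-year average flags a steadily improving school as a large outlier, whereas a benchmark extrapolated from the four-year trend does not. This is not the SQA's actual model, only an illustration of the point.

```python
# Illustrative sketch with hypothetical figures: compare a teacher estimate
# against a historical average and against an extrapolated historical trend.
import numpy as np

history = np.array([55.0, 58.0, 61.0, 64.0])  # pass rates over the last 4 years
estimate = 68.0                                # teachers' estimate for this year

# Average-based benchmark: the improving school looks 8.5 points "too high".
benchmark_average = history.mean()             # 59.5

# Trend-based benchmark: fit a line to the 4 years and extrapolate 1 year.
slope, intercept = np.polyfit(np.arange(len(history)), history, deg=1)
benchmark_trend = intercept + slope * len(history)  # 67.0

print(f"vs average: {estimate - benchmark_average:+.1f} points")
print(f"vs trend:   {estimate - benchmark_trend:+.1f} points")
```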
I thank both of our witnesses for conducting the review. Professor Priestley, I am sorry that some unfounded accusations have been made with regard to your professionalism during the process. Frankly, I think that some members of the Parliament owe you an apology.
The review highlights the view of the Equality and Human Rights Commission that there was a lack of capacity and experience in the SQA in relation to equalities issues, and, specifically, with regard to complying with the public sector equality duty. That is alarming but, unfortunately, it is not surprising. Will you expand on that and say whether the SQA perceived that to be a weakness in its capacities or whether its internal culture did not identify the issue as a major problem?
We have taken evidence from organisations that have a strong interest in equalities, such as the Equality and Human Rights Commission and the Children and Young People’s Commissioner Scotland. Those groups are highly critical of the SQA for what they perceive as a failure to attend to equalities issues.
I would qualify that to an extent. As you suggest, part of it is a cultural issue. The SQA primarily sees its role as technical. We had conversations with people in the organisation and one comment, which did not go in the report but which sticks in my mind, was that the SQA’s job is not to close the attainment gap but to issue qualifications and awards.
That notion that the SQA is an awarding body with a technical role could potentially get in the way of thinking about equalities. To take that in isolation would be unfair on the SQA, because there is ample evidence that it takes its equalities duties seriously. For example, at the start of the process, it ran substantial training on unconscious bias. An equalities impact assessment and children’s rights impact assessment were planned, if not in place, from the outset.
I suppose that our critique is that the impacts on particular groups of some elements of the process were not necessarily considered, because those were technical issues. For example, more work could have been done to look at the impact of applying a statistical algorithm on particular protected groups and on demographic characteristics such as social class. That did not happen, for a range of reasons, some of which are related to procedures that are perfectly acceptable in normal years but are perhaps more dubious when an exceptional process is put in place.
For example, working with the Scottish Government, it would have been possible to run some statistical analysis of the extent of the model’s impact on various groups. That did not happen, partly because the SQA, as it said to us, did not have a legal basis for working with that data, but also because the Scottish Government, which holds data on protected characteristics at an individual level, felt that it was unable to be involved in the process.
Various procedures and ways of doing things perhaps needed to be looked at a little differently this year, given the likely impact of the process. Our view is that, although those might have been considered—they were certainly raised and pointed out to the SQA—there was not sufficient action to mitigate the issues, or at least to understand them more fully.
On internal scrutiny, did you see any evidence of a culture in the SQA’s board of scrutinising decisions from an equalities perspective? You have said that the culture in the organisation is one that largely views its responsibilities as technical. If that is the case, it should really be the board’s responsibility to ensure that equalities duties are being met.
I am conscious of the time, so I will roll that up with a second question. First, did the board see ensuring equalities as part of its role? Secondly, given that the equalities impact assessment and children’s rights impact assessment were completed and presented to the board so late in the day, was there any possibility that the outcomes of those assessments could have altered the process?
My understanding is that the board discussed those issues from an early stage in the process, but we have to put that in the context of what boards do, and what they can do. Necessarily, a board looks only at high-level messages. I am involved in the curriculum and assessment board for the Scottish Government, and it does not get into the sort of detail that is perhaps necessary. The board may send a steer and discuss issues at a high level, but it will not necessarily have an impact on the practices. I am speculating to an extent, but we have certainly seen evidence that those issues were discussed at board level.
On whether the assessments might have had an impact, we know that, once the process was set in place, there were going to be certain implications and consequences, one of which was that certain groups or individuals were going to be disadvantaged by the process. Our assessment of that is that it might not have been possible to change those outcomes in the short term, but that a better understanding of the implications—which could have been done through, for example, a statistical analysis of patterns and trends in June—might have led to a greater appreciation of those impacts and could at least have changed the messaging around the system, so that, for example, at the end of the process, it did not come as a complete shock to the young people involved.
I am aware that I am not going into minute detail. We do not have that much insight into the internal workings of the SQA, but I stress that the SQA certainly considered and discussed the issues internally and at board level. We feel that some of the issues were missed because—this relates to the Equality and Human Rights Commission’s assessment—the processes are not deeply embedded enough in the SQA. That needs to be looked at in much more detail for next year, as we recommend in the report.
I ask Marina Shapira whether she wants to add anything to that.
I agree with that assessment. Consideration was given to many of the issues, but it was not enough. Sometimes, the issues were considered in a far more technical way, without a proper realisation of their implications for the lives of young people. It would certainly be helpful not to treat the process as a purely technical exercise in the future.
I have one final question, convener, if there is time.
That is fine.
Thank you.
Professor Priestley mentioned the equalities work that was done on unconscious bias training and on supporting teachers in making estimates and so on. The report seems to indicate that the area in which the greatest effort was made to meet equalities duties related to the role that teachers played in the process. The conclusion that I come to from that—fairly or not—is that the SQA had less trust in teachers’ abilities to meet the equalities duties than it had in its own moderation process, and that the SQA’s view was that the equalities issues would be encountered when teachers were making estimates rather than in its own moderation process. Is it fair to say that far more emphasis was put on addressing equalities issues at that stage than at the moderation stage?
Yes, that is probably a fair assessment. In fact, there is some evidence that the focus on bias impeded the subsequent possibility of analysis. For example, we think that ensuring that the data were anonymised precluded the analysis that we are talking about at a system level.
You are absolutely right that there is an issue about trust in teachers. There is a lot of literature that suggests that teacher estimates are “inaccurate”—I put that word in inverted commas. However, we raise the possibility that teacher estimates measure something differently. If we measure teacher judgments about course work, we are measuring something that is quite different from measuring performance on the day of an exam, so we will get different results.
The issue is not so much about how teacher estimates can predict an exam performance; it is about whether they provide a more valid measure of student performance over time. I recommend a more mixed economy. I am not recommending that we get rid of exams—they have their place—but it is really important that we look at a wider range of evidence, including teacher judgments, for future qualifications.
The issue of trust is a major factor. It is easy to say that teachers will be biased towards their students, because they are under pressure from parents and under pressure to perform and so on. We saw no evidence at all in our review that grades were inflated due to pressures relating to accountability, for example. Teachers made what they saw as honest assessments based on often quite limited evidence. We have seen inaccuracies because of the difficulties of collating enough evidence, and that means that, this year, we will have to think carefully about how we create a robust evidence base if we are going to rely on teacher judgments.
The SQA is not alone in being sceptical about teachers. There is a systemic issue in Scottish education. To my mind, teachers are the heart of Scottish education, and we need to put far more trust in them generally, not just from the point of view of assessment but also when it comes to how they run their professional lives and their jobs and how they teach their students.
The SQA probably underestimated the extent to which an attempt to produce statistical distributions that were not very different from the historical statistical distributions could distort the individual-level grades. That was a crucial part of the process. It is possible to make the distribution similar to the historical pattern—there are lots of statistical tools for doing that—but that happens at the expense of changing individual-level grades. The extent of that was significantly underestimated.
There were many insights that could probably have been borrowed from the social sciences and educational research, especially from research that looks at how teachers’ estimates and predictions are produced in schools with different characteristics and for students from different socioeconomic backgrounds. Borrowing such information and expertise will be crucial, because we will still need to think about how the moderation process is organised. It is inevitable that part of the moderation process will include statistical approaches, and it is important that more interdisciplinary, holistic thinking is done about such processes.
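A minimal sketch of the trade-off described above, using invented numbers and a simple rank-based quota rule rather than the SQA's actual procedure: reproducing a historical grade distribution in aggregate necessarily rewrites some individual students' grades.

```python
# Illustrative sketch with invented numbers: force this year's estimates to
# match a historical grade distribution via rank-based quotas, then count how
# many individual grades that rewrites. This is not the SQA's actual model.
import numpy as np

rng = np.random.default_rng(0)
estimates = rng.choice(list("ABCD"), size=100, p=[0.30, 0.35, 0.25, 0.10])

# Historical distribution that the moderation is required to reproduce.
historical = {"A": 0.20, "B": 0.35, "C": 0.30, "D": 0.15}

# Rank students from strongest to weakest estimate, then fill the grades
# according to the historical quotas.
order = np.argsort(["ABCD".index(g) for g in estimates])
quotas = np.cumsum([round(p * 100) for p in historical.values()])  # 20, 55, 85, 100
moderated = np.empty(100, dtype=object)
for rank, idx in enumerate(order):
    moderated[idx] = "ABCD"[np.searchsorted(quotas, rank, side="right")]

print(f"{(estimates != moderated).sum()} of 100 individual grades changed")
```

With these invented inputs, the aggregate distribution matches the historical pattern exactly, yet a sizeable minority of individual grades move: that is the distortion being described.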
Mr Greer, do you want to come back in?
I am content at this point, convener.
We move to questions from Ms Wishart.
I would like to follow up on the lines of questioning about the transparency and accountability of the SQA. My impression of the evidence that the committee received from the SQA was that it believed that it was a job well done because it had completed the education secretary’s brief. We have had apologies from the Deputy First Minister and the First Minister, but not from the SQA.
I want to understand the psyche of the SQA in the circumstances. Did the fact that the SQA had been given its instructions play a part in it not taking up offers of partnership working? It must have become clear that there were serious inequity issues after the results were delivered, so it is reasonable to assume that those issues would have been apparent to people in the SQA beforehand. Do you think that following the ministerial brief was its primary aim?
Yes, and I will qualify that further by referring back to what we said earlier. The process is seen as a technical one. If we strip out the social impact of that technical process—which, of course, we should not do—we can see that the SQA put in place a pretty good technical solution, given the resources that it had and the time constraints that it faced. However, the technical solution could have been improved by doing the qualitative sense checking in June, based on analysis of patterns of variance in the data.
From a purely technical point of view, there was clear communication, the system was run effectively and efficiently, and it produced a set of results in line with the principles, although, as we have indicated, there is some tension between the principles of maintaining standards and of fairness to learners.
Had the process been allowed to run its course, many of the anomalies would have been picked up through an appeals-type process. Therefore, I think that the SQA’s view is at least partially justified. That feeds into the idea that the process was a technical solution. However, the process did not take into account the social impacts of that solution. That is where it becomes important to use things such as equality impact assessments and to work collaboratively with people on the ground who have contextual knowledge. As Marina Shapira said, having a multidisciplinary approach is really important.
One of the witnesses we spoke to pointed to literature around big disasters in the world and how they are invariably associated with monocultural thinking. The best way to avoid that is to bring in and take account of different perspectives. At some point, the SQA has to make the decision, since it is the legal body that is tasked with doing so. However, in this year’s exceptional circumstances, we believe that a more collaborative decision-making approach to co-construct a system would have been more effective and equitable. That is simply because it would have been owned by the profession more, it would have mitigated a lot of the subsequent criticism, it would have been understood better and it would have taken into account some of the nuances that a purely technical solution missed.
Is there enough autonomy? Were those at the SQA empowered to raise problems as they saw them? I realise that this is an exceptional year.
Do you mean the autonomy of the SQA as an organisation, or autonomy within the SQA for people to raise issues?
Both, actually.
We saw no evidence that the Government interfered unduly with the working of the SQA. If anything, we considered that there should have been more co-construction between the SQA and the Government, particularly around statistical analysis. That was really important. We do not think that the SQA lacked autonomy in that respect.
I do not know the SQA, as an organisation, well enough to comment on the autonomy within it. However, it gives the impression of being a hierarchical organisation that has a lot of technical expertise and that sees itself as somehow not needing to work with other people. That is the impression that we got from a great many of the respondents to whom we spoke during the review.
Marina, do you want to add anything?
Only that I completely agree with that assessment. The organisational culture is one of the things that needs to be changed in order to make progress.
Let us turn to the erosion of trust and confidence—and damaged relations, in some cases—among teachers and young people that you referred to in the review. The perception of the SQA as being remote from and lacking in trust in teachers was also mentioned. Can you offer any more evidence that led you to those conclusions about the teacher relationship with the SQA?
We spoke to around 30 teachers, plus headteachers, in different sectors; college lecturers; an independent, non-affiliated group of teachers; teacher unions; subject association groups; and a group of headteachers representing two of the key organisations.
There was a fairly consistent message from respondents that the SQA did not trust them. That was seen as, I suppose, a questioning of the professionalism of teachers. That came through fairly clearly in a lot of the evidence. That is what we were told by the teaching profession and college lecturers.
On the damaging of relations between schools and young people, the primary cause of that is, I suppose, the lack of an appeals process and the fact that many young people are unable to appeal against grades given by the school, which we have already discussed.
There is also a broader issue about the way in which young people see the SQA, which is clearly articulated in the evidence submitted to the committee by Dr Tracy Kirk and by the SQA: Where’s Our Say? campaign group. There has been an erosion of trust in the SQA as an organisation—we saw that across the piece—and work must be done to restore that trust. To its credit, the SQA has been working extensively with young people since about 2017, which was apparent to us in our conversations with the SQA board and various groups.
That work needs to be expanded, particularly around communication, because it is clear that a lot of the issues were to do with messaging and the way in which messages were perceived. The SQA has stated that it believes its messaging to be comprehensive, which it certainly is. However, that is not how it is experienced by young people. A stronger voice of young people in co-constructing a messaging system would go some way towards not only restoring that trust, but establishing channels of communication in which young people have confidence.
That is very helpful.
I will raise an issue that was briefly touched on in your answer to Beatrice Wishart. Pages 27 and 28 of your report describe email correspondence between the SQA and the Government. I will quote some text from one of those emails:
“The DFM has asked that we do lots of digging in the stats to show how young people from deprived backgrounds have not been disadvantaged by the results.”
When I initially read that, I was slightly concerned about what it said about the relationship between the Government and the SQA. You have just said that there was almost too little input from the Government in relation to the setting of the process. That email, which is about looking at the results afterwards, suggests a level of communication that certainly requires questioning. Can you provide some insight into, and context for, that email?
If I remember correctly, that was not an email from the SQA to the Government or vice versa; it was an internal Government email. As to its context, in the course of looking at the results when they came in at the end of July, it had been noticed that there were possible implications for low socioeconomic groups.
Basically, the email asks civil servants to have a look at the data. The phrasing is perhaps a little unfortunate, because it conveys the impression of looking for a positive story rather than looking for patterns and understanding. That might have contributed to the debate that subsequently took place in Government and in the media, which perhaps missed the point slightly. The real issue was not that schools in low socioeconomic areas had been downgraded more. That was inevitable, because, if we take historical patterns of attainment, we see that there has been more “overestimation”—again, I use the term in inverted commas—in those areas.
On what that indicates about SQA and Government communications, it seems to me entirely proper that the Government did not want to become involved until it was appropriate to do so, shortly before the results were released, and that analysis started to take place only at the point at which the SQA confirmed the grades. The Government was clear about that in the emails that we saw. That is my understanding of that. However, looking for a positive message in the data perhaps obscured some of the real patterns in the form of the equity issues around particular cohorts, individuals, protected characteristics and so on that occurred as a result of the application of the statistical approach. Does Marina Shapira want to add anything?
I agree that there was some distortion of the real issue, which was that there was no attempt to validate the moderated results until they were published.
At some point in July, the SQA became concerned that there might be serious issues related to the downgrading of students’ grades in more disadvantaged schools. However, it did not have the ability to check that because, as far as I understand, its data is anonymised. The SQA’s data does not include students’ identity; neither, I think, does it include the identity of schools. That data is with the Scottish Government. The communication resulted from the SQA’s attempts to better understand whether there were any real issues, because it had an inkling that there might be an issue—it was not unexpected that there would be one—if the moderation was applied in the way that it was.
However, we saw a reluctance from the Scottish Government to engage with such analysis before the data was published. The Government suggested that the SQA be provided with individual identifiers so that it could analyse the data itself. However, the SQA’s view was that it did not have the ethical clearance and capacity to store individual data of that kind.
Creating the capacity to analyse the data prior to the publication of the results was not thought about in advance, and that was part of the issue. Because of the technicalities on both sides, the first time that the data was properly looked at was on the day of its publication. Clearly, that should have been done before the results were published. Two issues should be thought about: how to create the capacity to analyse the entire dataset by applying the personal identifiers to the results that the SQA produced, and how to analyse the data comprehensively in order to identify problems prior to the results’ publication.
It sounds rather like you are saying that the SQA put whether it could do something before the question of whether it should do it.
Where appropriate decision making lies, and how it is accounted for, is key for transparency. Although the SQA, quite properly, is responsible for the administration of the examination system, its purposes and effects are squarely a decision for the Government. For example, reviews of our examination system have been done, and the process was undertaken as a collaborative effort with stakeholders rather than solely by the SQA. Ultimately, that is a political decision for the Government.
It strikes me that some of those decisions have had a profound impact on precisely what the grades that were awarded were accrediting. We have already talked about the decision not to do the qualitative analysis at the end.
Given that that is fundamentally a strategic decision, what evidence is there that the SQA sought to inform and get input from the Government on key issues that it thought might be of relevance? Was that happening in an appropriate and transparent way?
It is important to stress that we probably have not seen the full range of evidence on that. We have had access to a range of emails that were sent between the Government and the SQA over the summer. It seems to us that there was regular communication as the process unfolded, and we have not seen anything improper in terms of transparency.
If I am to be critical at all, it would be about the decision not to do that qualitative step and about the lack of appreciation of the implications of using a purely statistical approach. Those aspects were perhaps not fully appreciated at an early enough stage. When the implications started to become apparent in July, it was too late to do anything about it. Furthermore, the procedures and the relationship that existed between Government and the SQA potentially impeded the analysis that might have led to an earlier realisation of the issues. Therefore, it was much more of a sledgehammer blow when we came to results day.
We think that the subsequent furore about that was, unfortunately, focused on the wrong issue. The focus was entirely on two easily characterised stereotypical positions: first, that students in disadvantaged schools had been downgraded more; secondly, that the attainment gap had closed, because students did better under the estimation and moderation system than they had done previously under exams. Both positions are right. In a highly politicised situation, it becomes easy to dig in and defend one position or the other. The email that you mentioned is possibly indicative of that.
Had a more comprehensive analysis been done at an earlier stage, those polarised positions might not have been relevant. In fact, we would have been talking about quite different issues, such as how the statistical approach impacts differentially on groups in society in a much more nuanced way.
Do you want to add anything, Marina Shapira? The question is fundamentally about statistics.
No, I do not think that this is just about statistics. Overall, I consider that the expression “statistical modelling” is being used too loosely, not just in this conversation but in the wider debate about what happened.
I want to emphasise a point about statistical models. Our impression from the review process is, in part, that statistical modelling started to be treated as something evil and that all the problems resulted from applying statistical algorithms. Statistical algorithms do what they are told to do. The output of a statistical algorithm is a product of how the problem has been defined and the choices that have been made. The choices are not linked to the statistical procedure itself; they are linked to political decisions and the definition of the tasks. That is where the focus should be. It should be on how we define what should be done and what we want to obtain as the result of the statistical process. Once that is done, we can discuss how suitable this or that statistical procedure would be and how well it can do the task.
I think that we focus too much on that discussion, rather than on what we want to achieve by applying one statistical procedure or another.
Thank you. I want to ask you to briefly clarify—
I am really sorry, but I have two other members wanting to come in. If you are quick, and if the answers are brief, I will let you come in.
Will Professor Priestley confirm whether he is saying that the SQA did not seek to inform, or seek approval from, the Government when it decided not to do that last qualitative check? Do I understand you correctly?
Sorry—can you say that again?
Are you saying that the SQA did not seek to inform the Government when it decided not to go ahead with the final qualitative step in June? Is that correct?
It certainly informed the Government. I have no idea whether that was agreed to in advance. The decision was certainly made, it was on public record and the Government was aware of it.
Fine. Thank you.
I will bring in Mr Greer, if he is very quick.
10:45
The 2021 exam diet and the alternative arrangements for national 5 are huge topics, but I will burrow down to one specific issue. A lot of concerns have been raised that the national 5 exam has been replaced with, in essence, an exam in all but name, which is to be delivered by teachers in class. Have you had a chance to look at the guidance that the SQA has produced for the national 5 assessments next year? Do you have any concerns that some of the mistakes that were made in the 2020 arrangements will be repeated?
The simple answer is that I have not had the chance to look at the guidance. As a result of doing the review, we have rather large backlogs of work on which we are trying to catch up. I have not had time to look at anything other than very superficial details of what has been proposed, so I am not prepared to comment on that issue.
In relation to the coming year, it is important to stress that an examination is simply a method of assessment. Replacing an examination with an alternative form of assessment does not necessarily have an impact on the quality or robustness of the qualification. There are many ways to assess against qualifications.
Given that we will be relying on teacher judgments again in the coming year, I am concerned that we take the time to develop a systematic moderation system, which involves a number of stages such as, for example, the development and validation of assessments. My view is that schools should be involved in that, where possible; they should not just be given alternative assessments that have been developed externally.
A moderation system involves sense making—for example, teachers have markers’ meetings at which they agree a standard. It involves processes such as the internal and external verification of marking, including quality checks. It could involve some sort of statistical approach and algorithm, in order to fit the grades that are awarded to previous patterns. All those things could happen next year, but there needs to be a nationally developed and locally applied system to do that.
I ask Mr Greene to be very quick. If a written answer could be provided, that would be very helpful.
I absolutely do not mind if either of the witnesses wants to write to the committee with a more comprehensive answer. We have spent a lot of time looking at this year, but looking at what happens in 2021 and beyond is equally important. Unfortunately, we have run out of time.
The review’s recommendation 1 was to cancel national 5 exams, but there is the potential to proceed with exams for highers and advanced highers. The key question is what happens if exams for highers and advanced highers cannot proceed.
The review raised three primary concerns with the SQA’s draft proposals in August. The first was whether course work would or could be used as a back-up. The second was about the narrowing of courses, which is quite a profound concern. The third was about the negative impact on attainment, particularly for disadvantaged students. Do you still have those concerns? Are you confident that there are back-up plans for next year’s exam diet, if exams cannot proceed?
The simple answer is that I still have those concerns, but I am confident that, if a robust system of moderation and assessment is developed for national 5s, that will provide us with a basis for developing similar back-ups for highers and advanced highers. The work that is being done on national 5s will be of benefit elsewhere in the system if there needs to be a greater degree of exam cancellation.
That is a helpfully short answer.
I thank Professor Priestley and Dr Shapira for their helpful contributions.
I will suspend the meeting briefly to allow the witnesses to leave and the cabinet secretary to join us. I remind members that we will pause for two minutes’ silence at 11 o’clock.
10:49
Meeting suspended.
I welcome the Cabinet Secretary for Education and Skills, John Swinney, to the committee. As you know, Mr Swinney, we have to pause the meeting slightly before 11 o’clock for the two-minute silence. Before then, I invite you to make a brief opening statement.
I welcome the opportunity to appear before the committee to set out the Scottish Government’s response to the Priestley review and to provide an update on the approach to awarding national qualifications in 2021.
To enable us to learn the lessons of the approach that was taken to awarding qualifications in 2020, I acted quickly to commission Professor Mark Priestley to conduct a rapid and independent review of the events that followed the cancellation of the 2020 examination diet. I reiterate my thanks to Professor Priestley and his team, from whom you have just heard, for the work that they did.
The review made nine recommendations, of which the Scottish Government has accepted eight. The ninth, an independent review of the 2020 alternative certification model, will be considered as part of our future research plans.
One key recommendation is the
“Suspension of the 2021 National 5 exam diet, with qualifications awarded on the basis of centre estimation based upon validated assessments.”
As I set out to the Parliament on 7 October, that is what we have decided to do. The virus remains with us, and it cannot be business as usual. There is no easy solution, and I recognise that any approach that we might take will find favour with some and not with others. In coming to the decision to cancel national 5 exams, we spoke to a range of stakeholders, including young people, teachers, parents, colleges and universities.
Although there will be no N5 exams and courses will be assessed on the basis of teacher judgment, there will be a slightly delayed higher and advanced higher exam diet.
A national qualifications 2021 group consisting of representatives from local authorities, education unions, colleges, the Scottish Government and the SQA, which chairs the group, is working to deliver subject-specific guidance for N5 courses, to develop the approach to assessment of those courses, and to consider possible contingency measures should the higher and advanced higher diet not be able to go ahead.
There is a strong commitment from the stakeholders who are involved in the group to work together to develop an appropriate alternative approach to assessment that has fair recognition of the individual efforts of young people at its heart. Awards will not be given or taken away on the basis of a statistical model or a school’s past performance.
Last week, I informed Parliament that the start of the higher and advanced higher exam diet would be brought forward slightly to Monday 10 May to minimise the risk of excessive burden being placed on candidates who might otherwise have had to sit multiple exams on the same day.
Although our aim is to ensure that schools remain open, I am conscious that some individual pupils or groups of pupils might be adversely impacted by Covid-related absences. It is important that the awarding process is fair to all pupils, and that no pupil is disadvantaged by circumstances that are outwith their control.
I am committed to delivering on the other recommendations of the Priestley review. In particular, there is significant interest in how appeals will be conducted. The SQA will undertake a review of that and, in doing so, it must engage with stakeholders, including learners. That work must be undertaken in a way that is compatible with our commitment to incorporate the United Nations Convention on the Rights of the Child into domestic law.
There are many challenging issues in this area, but the Government is committed to addressing questions in partnership with our stakeholders and with learners. I look forward to discussing the issues with the committee.
We might have time to squeeze in a question from one member.
Professor Priestley was clear that the decision not to have a final qualitative check once moderation had taken place at centre level was a key change. In retrospect, do you agree that that was a significant change? Should there have been greater focus on that at the time?
The difficulties, which have been explained to the committee, were the questions of the availability of time to do that exercise and the ability to ensure that such an approach could be fair to all learners. The SQA’s judgment on the latter question was that it was difficult to conceive how fairness could be ensured in conducting subjective conversations with members of the teaching staff when those conversations would not happen in all circumstances across all centres.
The point that Daniel Johnson put to me, which I think was discussed with Professor Priestley this morning, was more about examples that could be judged to be outliers. That approach would have meant that not all centres engaged in the process. Obviously, we were dealing with acutely difficult circumstances, because we did not have the normal range of materials on which to focus those conversations. Indeed, had such discussions taken place, there would have been significant constraints on accessing the evidence needed to enable them to happen. I recognise the significance of Daniel Johnson’s point, but there was no easy answer to the challenge that was involved.
Convener, I am mindful of the time. There might be time for me to ask my question, although I might not get to the end of it. There is certainly not enough time to provide the cabinet secretary with the opportunity to answer it.
I will just put the point that I put to Professor Priestley at the end of the earlier session. Will the cabinet secretary reflect on the nature of the decision making on strategic aspects? We ended up with an exclusively statistical approach, which was a strategic decision. Were decisions made at the appropriate level, or should some of the decisions have been escalated?
The question—
I am sorry to interrupt, cabinet secretary, but I must ask you to delay your answer, as the committee will now pause its meeting. We will reconvene at around 11:03.
10:57
Meeting suspended.
Welcome back to the committee. I thank everyone for observing two minutes’ silence—it is a very different remembrance day this year compared with what we are used to.
The cabinet secretary will continue to answer Daniel Johnson’s question.
Mr Johnson’s question gets to the heart of some of the difficult issues here. If we look, first, at the governance arrangements around the running of exams, without particularly referring to the 2020 diet, the Government commissions the SQA to run an exam diet and assessment process around our national qualifications. The Government sets out some strategic parameters for that exercise, and, once those are set out, we leave it to the SQA to undertake that process. A few days before the results are announced, the Government is given access, under the pre-release access code, to the information that has been generated by the results.
There are elements of the decision making whereby the Government can set out guidance on what the parameters of the approach to exams should be. Generally, those parameters do not change from year to year, because we are interested in the application of consistent standards so that the performance of a young person in, for example, 2018 can be compared with the performance of a young person in 2019. In that way, the qualifications of both cohorts—in 2018 and in 2019—are viewed with equal esteem on the basis of the consistent application of standards. Generally, we would not revise those to any significant extent.
In 2020, we faced very different circumstances. When I instructed the cancellation of the exam diet, in circumstances with which we are all familiar, I indicated to the SQA that it should design an alternative certification model, which maintained standards. That set the parameters of the decision making that the SQA could and should undertake, and the SQA responded by putting in place a mechanism such that, at the end of the awarding process, we could answer the question about whether standards had been maintained and the qualifications of the candidates of 2020 could be judged equally with those of the candidates of 2019, 2018, 2017 and so on.
Where I unreservedly accept that there was a problem is that, in the application of a statistical approach to try to secure that outcome, the effect varied significantly from candidate to candidate and from school to school. In the fulfilment of the strategic direction that I had set for the SQA, the awarding process gave rise to the difficulties and issues with which we all became familiar in early August.
The answer to Mr Johnson’s fundamental question is that the governance arrangements are such that the Government sets out a strategic direction and the SQA then makes a significant number of decisions in that context. Those decisions, quite properly, do not require the approval of Government; their doing so would undermine the independent awarding authority of the SQA and a fundamental principle of our awarding process.
I am sorry that that was such a long answer, convener, but Mr Johnson’s question merited that level of detail if it was to be answered properly.
In short, the mechanisms and methodologies that are used to examine candidates—in normal times as well as on this occasion—are quite properly matters for the SQA. However, what is examined, and the purpose of the examination, are clearly political decisions. For example, the SQA could not credibly decide, in normal times, not to undertake examinations or fundamentally to alter what happens, for example by moving to a simple pass-fail system rather than an agreed methodology. That would be a political decision.
If that is the case, surely the point about the dropping of the qualitative final step is that the grades that were awarded to individuals broke the link between individuals’ effort and attainment, because the final award was made purely on the basis of the centre’s past performance and there was no final check to see whether the movement of the individual’s grade was warranted—or whether an individual was exceptional, for example. That is a pretty fundamental change. Surely, in retrospect, that decision should have been taken at governmental level rather than by the SQA.
I do not think that that would be a decision for the Government, because ultimately that would mean that the Government was taking or potentially influencing awarding decisions about individual candidates, which is expressly outwith the scope of the operating processes that we have. The Government has absolutely no role in awarding on individual performance; that process must be carried out independently of Government under our current arrangements. I do not accept that the Government should take that decision.
We commissioned the model, in essence, to answer the question of how, in the absence of an examination diet, we could award qualifications, and the SQA was tasked with developing that model. To go back to one of my earlier answers, one of the challenges with the final stage of dialogue that you raise with me concerned the ability to apply that process fairly across all candidates and centres across the country.
It is not a question of individual awards; it is a question, fundamentally, of what the methodology achieves. The question of whether we are awarding an individual or an examination centre is a pretty fundamental one, and that is a question of methodology, not of individual awards. I accept that the Government should not be involved in individual awards, but it should surely be involved in the purpose and effect of the examination system.
The core question concerns the extent to which the Government’s direction to the SQA to, in essence, develop an alternative certification model that maintained standards was answered. As I have previously said to Parliament and the committee, we clearly acknowledge that there were significant weaknesses in how that was undertaken.
You have asked me a number of questions about where the proper responsibility for those matters lies, and I am openly accepting that the Government sets the direction, but I am saying that how that direction is fulfilled is the responsibility of the SQA independently and at arm’s length from the Government. Our current arrangements require that to be the case.
Cabinet secretary, in your opening statement, you said that the Government has agreed to the Priestley recommendation to review the appeals system in the context of the UNCRC being incorporated into Scottish law. This morning, Professor Priestley agreed that more could have been done by the SQA to communicate the difference between the post-certification review system and the appeals process. Do you agree with that?
The committee has received a letter from a parent of a pupil with additional support needs who wants to go to university but cannot appeal, which means that the school becomes the judge. Do you believe that the SQA should allow direct appeals from pupils who are disadvantaged this year?
There are a number of points in there. First, the process that the SQA put in place this summer was a four-stage one, with the last stage being the post-certification review process. The SQA had built in the capacity to deal with a larger number of appeals than would normally be the case, because of the unique circumstances in which we were operating. Of course, we did not really get to the point where that was necessary. It was clear, in the aftermath of the results day, 4 August, that there was going to be a sizeable number of appeals and that schools were beginning to work on them. With my announcement on 11 August of the awarding of grades based on teacher estimates, the requirement to pursue that on the scale that was envisaged was removed. I am not sure that there is much more that the SQA needed to do in terms of the communication of that post-certification review process, because it was largely overtaken by events.
That leads to the basis on which appeals are made. That is a difficult question in relation to 2020, because we provided for appeals to be brought forward on the basis of administrative error in the schools and administrative error in the SQA, as well as in circumstances in which there had been evidence of discrimination against a particular candidate. Those were the three bases on which appeals could be brought forward. That was because of the challenge that we faced this year in that the awarding decisions were based, finally, on the estimates that were put forward by teachers.
11:15
Traditionally, our appeals system has relied on the judgments and input of schools to the appeals process. That was fundamentally changed this year, because we were dealing with evidence that was not based on exam performance but that, in essence, would have informed teacher estimates, which would have underpinned the award decisions that were made. The 2020 circumstances make it more difficult for there to have been any sort of broader examination of evidence because, clearly, the evidence would have informed the judgments that were made by teachers in putting forward the estimates.
The question that Rona Mackay has raised about direct appeals addresses an issue that we will have to explore in relation to future practice, because the requirements that we will take upon ourselves in relation to the incorporation into domestic law of the UNCRC will, I think, pose some challenges for us in how appeals are undertaken—but then, of course, we will not be dealing with the same set of circumstances as in the 2020 process.
Good morning. When you last gave evidence to the committee on the issue, which I think was on 16 September, I raised with you the circumstances of those young people to whom Rona Mackay has referred, who were unhappy with the teacher assessment that had been made of their achievement, because they felt that something had not been taken account of—for example, illness at the time of their prelim—and who found themselves unable to appeal, because an appeal could be made only by the school, and the school did not accept that the assessment was wrong. In the exchange that we had, you said that avenues for appeal were open to those young people. However, that was not and is not the case, as I think that you have acknowledged this morning.
Earlier, Professor Priestley was asked whether, in his view, it was too late for that to be changed for those young people—it might be a relatively small number of young people—who are facing a significant impact as a result of their inability to appeal the awards that were made to them. Professor Priestley was of the view that it was not too late to change that and to allow a direct appeal from those young people for this year’s diet. Will you consider allowing those young people to appeal what they feel are unfair circumstances, in order to achieve the awards that they think they are entitled to?
I am certainly happy to give further consideration to the point that Iain Gray has raised. However, as I highlighted in my answer to Rona Mackay, I think that there is a particular challenge with regard to the basis on which those judgments could be arrived at, which relates to the availability of evidence. Fundamentally, the evidence base that should be available to a school, upon which judgments were made for the estimates that were submitted in 2020, would be the same evidence base that would be available for any consideration of points of the kind that Mr Gray has put to me.
Subject to that point being acknowledged, I am happy to consider those issues further. I have looked carefully at some of those matters and I find it difficult to get beyond the issue that we have encountered this year, because the system was predicated on, and awards were made on the basis of, the estimates that were submitted by individual teachers.
Mr Gray mentioned the example of a young person who might have been ill at the time of a prelim. I would have thought that the arrangements for exceptional circumstances would be relevant there. A number of avenues have been available for those issues to be explored. However, I will certainly give further consideration to the point that Mr Gray has put to me.
I have two points to make. A young person or their parents may feel that the circumstances were exceptional, but if the school does not believe that the circumstances were exceptional, they have no avenue to appeal. That is a point about the system that is in place, but it does not address the problem.
The Priestley report’s summary of findings says:
“While the application of the appeals process offered an in-principle technical solution to address these anomalies”—
some of which we are discussing—
“it paid insufficient attention to the severe impact on those students obliged to undergo it (in terms of mental health and wellbeing, missed opportunities to transition into Higher Education, etc.).”
We are talking about a relatively small group of young people, but the consequences of the system that was put in place are very serious for them, and the Priestley review acknowledges that.
I am pleased that you have said that you will reconsider the issue, but time is marching on and we are now in November. If you are to reconsider it, and if there is to be any benefit for that group of young people, you must make a decision almost immediately. You say that you will consider the issue, but will you say how you will consider it? If those young people will, in fact, be able to seek some redress for their situation, how and when will you tell them? They need that now.
I am perfectly happy to consider the issue but, equally, Mr Gray must accept the caveat that there must be an evidence base to enable the issue to be judged. In previous discussions with the committee, I indicated the challenge that is raised by the fact that that evidence base is the same evidence base that led to the judgment of a teacher in submitting estimates about an individual candidate. It must be acknowledged as part of this process that there is no easy alternative to the approach that we have taken. We took the decision, which I know that Mr Gray supported, that we should anchor our judgments in 2020 exclusively on the judgment of teachers, and that decision has informed the awarding process for 2020. That is a significant obstacle that has to be overcome.
That said, I will discuss with the SQA the issues that have been wrestled with to do with the nature of appeals and will consider what steps we can take.
I simply say that the issue is not a new one, and consideration of it has already been undertaken. I would also say that there is little point in reaching a conclusion in February or March next year; this is an issue for the young people concerned now, and time is marching on.
I have to say to Mr Gray that the appeals system that we have had in place for some time has been predicated on dialogue between schools, pupils and families about the basis on which appeals could be lodged. That has been the long-standing approach to appeals in our system.
This year, we changed those arrangements, because we opted for a teacher estimate-based model, which Mr Gray supported. That changes the nature of the approach that we can take to evidence gathering. In any appeals process, there has to be a gathering of evidence to enable information to be provided. In the pre-2020 situation, decisions on awards were based exclusively on the exam contribution and the decision taken by the SQA. In 2020, the decisions were based on the work of young people and the teachers’ awarding estimates. That makes for very different circumstances in which to address the point that Mr Gray puts to me. That is a consequence of our decision to award on the basis of teacher estimates, which we all accept was the right way to proceed in 2020.
It is the gathering of evidence by the school and, in certain circumstances, the selection of that evidence that is at issue. The problem is that there is no route for young people to challenge that through the appeals process. I appreciate that you have said that you will reconsider the issue, so thank you very much.
We will move on to Mr Greene.
I want to really get to the nub of the issue. We have heard a lot of evidence about the role that the SQA played versus the role that the Government had in the process, and the report goes into great detail on that. There is a suggestion that the issue comes down to two fundamental problems with the SQA. One is a cultural issue. We know that the SQA has refused to apologise or express any regret about what happened this year. It has merely stated that it was acting under orders and was doing what the Government asked it to do, which it believed that it was delivering until the Government asked it to do something else.
The other issue is the clear lack of resource or ability to deal with such situations. There are two specific pieces of evidence for that. The first is that the SQA was advised that it should engage with schools, teachers and headteachers when the estimates were delivered, because it was clear that there were large variations and anomalies in the estimations. Some of the issues could have been addressed at that stage if the SQA had had the resource and ability to do so, but it is clear that it did not. The second relates to the scale of the appeals that could have come in after the moderation process took place and the exam results were announced. As the sheer scale of that could have overwhelmed the SQA, intervention was required to completely annul the process, which you did.
What do you make of the suggestion that there are deep-rooted problems in the SQA?
I was confident that the SQA had sufficient capacity arranged to handle the appeals that were going to arise from the awarding on 4 August. I was very confident that it had the resources in place and had prepared on that basis, so I do not think that it can be faulted for that. The fact that the process was not required was a consequence of the decision that I took and announced to Parliament on 11 August. I do not think that anyone can marshal criticism of the SQA for not being ready to handle the appeals—it knew that there was going to be a big volume. It had planned for that and secured the resources to deal with it.
There is a lot of focus on the question of dialogue with individual schools about performance. One issue that has troubled many people is that, in the application of the SQA model, there was too much emphasis on the past performance of schools. I understand and empathise with that concern. However, if the SQA had tried to identify outliers, it would inevitably have focused on the past performance of schools, because an outlier could be defined only as a school that had significantly changed its estimates compared with its past performance. That is the only way in which such an outlier could be identified.
The problem that concerns people, which is essentially about past performance informing the SQA’s decision making on awards, would still have applied if the SQA had engaged in dialogue with schools, because it would have been engaging with schools that were far adrift from their past performance. There would not have been an easy solution to that issue.
11:30
That throws into the spotlight—the committee may come on to this point—the question of what the most appropriate means of assessment would be. One thing that troubles me concerns the A-to-C pass rate at higher level in the most deprived communities in Scotland. In 2019, under the exam system, it was 65 per cent, while in 2020, with teacher estimates, it was 85 per cent. The difference between those two numbers is massive.
My point is that different approaches to assessment can throw up very different responses. An exam-based system produces a 65 per cent pass rate among the most deprived 20 per cent of pupils, while an assessment that is based on teacher estimates generates a pass rate of 85 per cent. Those are two different numbers. I am not saying that one is right and one is wrong; I am simply saying that they are different. That exposes a huge question about what constitutes the most appropriate method of assessing the achievements and attainment of pupils.
The SQA gets knocked and criticised by many people. There are things that it does not get right, and it is important that all public bodies face up to things that they do not get right. However, we all have to accept that the SQA, as an organisation, often—every year, invariably—has to give people news that they would rather not receive. It has to say to some pupils, “I’m afraid you didn’t get the grade you were looking for.”
That is tough and difficult, but it has to be done if we want to maintain standards through our examination system. Naturally, that leads to a lot of criticism of the SQA for seemingly not being close to people on decision making. Ultimately, however, somebody has to make decisions on awarding, and it is the SQA that has to do that. That can sometimes involve taking difficult decisions. Nobody in the SQA relishes doing that—in my experience, the organisation is very professionally focused on ensuring that young people are able to fulfil their potential.
I do not think that people question the professionalism of individuals who work in public agencies, but we cannot hide from the fact that there is clear evidence that teachers were extremely disappointed by the lack of engagement during the whole process. That is evident not just from Professor Priestley’s report, but from other sessions that the committee has had, including focus groups that we have held with teachers. The lack of engagement is tangible and regrettable.
You touched on a wider point, which gets to the nub of the matter. We are in a situation in which the moderation process, which everyone admits was necessary to an extent, delivered a set of results that was not just unpopular but unfair. That required ministerial intervention on your part to revert to teacher estimates, which you have admitted varied wildly from the historical performance of pupils in those schools.
That leaves us with a huge conundrum as to what will happen next year. We do not, and cannot, guarantee where we will be at that point. Could we end up in the same situation, where we have to revert to teacher estimates, which—again, by your admission—could vary from historical average standards? Would that guarantee any sort of consistency in awards year on year?
Given that—as Iain Gray pointed out—time is extremely short, what are you doing to address the fundamental problem of how we award pupils’ grades, whether that is based on exams, teacher estimates, coursework or a mixture of all of the above?
I have obviously taken decisions on that already, because we are not having a national 5 exam diet—we are going to have teacher estimates.
The SQA is currently providing schools with items that will form the basis of assessments that can be undertaken within the normal arrangements for teaching activities in our schools, which will spare teachers the requirement to generate the assessments themselves. As we speak, the SQA is making that material available across all subject areas, so that teachers can access the assessments that the SQA has produced. A lot of those assessments will be extracted from the 2020 examination papers that were not used, and various other materials will be available to schools. In the normal run of the arrangements, schools will be able to use those assessments to structure the gathering of the evidence that will inform their estimates.
The SQA has provided guidance to the school system, which has advised against the holding of prelims so that we can concentrate on judging the performance of young people through the year and ensuring that that information is gathered. It has highlighted the importance of the quality and not the quantity of assessments, in order to give a clear signal to the education system that we do not want a cottage industry to be created that adds to teachers’ workload. We want the process to be carried out efficiently and effectively, without putting an additional burden on teachers, as part of the routine, run-of-the-mill assessment of pupils’ performance that goes on every year in Scottish education.
All of that is under way for national 5. Obviously, that provides us with a foundation that will enable us to pivot to that approach, if necessary, in relation to highers and advanced highers. I have taken a decision not to apply that same methodology to highers and advanced highers, but I made it clear to Parliament in October that we retain the ability to pivot to that arrangement if necessary. The final moment at which we could take such a decision would be the mid-term break in February.
My clear priority is to run a higher and advanced higher exam diet in 2021, but I am mindful of the fact that we do not quite know what the course of the pandemic will be or what degree of disruption will be experienced by individual pupils, schools or the system in general.
The handling of those issues is well under way to respond to the important recommendations that Professor Priestley and his team have made.
The question remains of whether there will be national moderation of the estimates that you are asking teachers to provide. I appreciate that the assessment process is slightly different this year to help teachers come up with their estimates, but will the SQA still apply moderation to those estimates? If so, is it doing what it did last year and basing its moderation on a set of parameters and rules that the algorithm will use? Is it going to be any different from last year and, if so, how will it differ, to ensure that we do not see a repeat of the levels of moderation that we saw then?
It will be fundamentally different to the approach that was taken this summer.
First, the preparations for the approach have been taken forward in a collegiate fashion, led by the SQA and involving local authorities, directors of education, professional associations, young people and parents. There has been openness and transparency around the arrangements that have been put in place.
Secondly, the SQA will be supporting the assessment process by providing materials, which I referred to in my answer a moment ago. Thirdly, the SQA will assist in the provision of information about standards, to assist teachers in forming the judgments that they have to make. There will be moderation at a local level, with schools and local authorities supporting that process.
There is a big role for education departments in local authorities to support their schools, in concert with the SQA and the work of the regional improvement collaboratives, to make sure that teachers are making appropriate judgments based on the assessments that they have in front of them. We did not have the opportunity to do that in 2020, because we had to close schools almost immediately after I announced the cancellation of the exam diet.
All that interaction will take place and the SQA will be involved in and leading that process, but it will also be motivating partners, particularly in our local authorities, to recognise the role that they will have to perform in relation to the identification and maintenance of standards.
I return to points raised by Jamie Greene about the structural and cultural issues at the SQA, focusing on one in particular. The Equality and Human Rights Commission’s view was clear that the SQA lacks the capacity and experience to consistently fulfil its public sector equality duty responsibilities. Professor Priestley made the point that the SQA sees its role as technical and only views equalities through that lens, rather than through the real impact that they have on individuals. He also made it clear that greater emphasis was placed by the SQA on ensuring that teachers met equality duties than on ensuring that the SQA’s moderation process met them. Equality duties are not optional; they are a legal requirement. What steps have you taken to make sure that the SQA has both the internal culture and the capacity to meet those equality duties and handle those issues with greater seriousness than has apparently been the case until now?
I struggle with that point in the sense that the SQA has undertaken all the steps that are required of it, statutorily, in relation to its public sector equality duties. The approach that the SQA took during the summer was, in essence, to recognise that it could not complete that work until it had completed the whole process. That material is now all in the public domain. I agree with Mr Greer that the exercise of the equality duty is not optional, but a statutory requirement. I expect the SQA to fulfil it and, on the basis of the material that has been concluded, I think that it has.
The Equality and Human Rights Commission has critiqued the material that is in the public domain. It engaged with the SQA and it has concerns about the organisation’s culture and capacity. I think that those aspects are significant, but this is not my main line of questioning. I am happy to come back to it in relation to how it is addressed through the 2020-21 diet.
Good morning, cabinet secretary. Professor Priestley’s review was extremely helpful, given the time available. However, he highlighted the restrictions on time and resources and suggested that, although it could have been a year-long academic paper, it had to be done in the relatively short time of six weeks. Why was the timescale allowed for it so short? Was it restricted in order to get the information in time to learn lessons for next year? Given the limitations on time and resources, what further work is being done to pick up on some of the issues that were not covered in the review? To flag up some, Professor Priestley talked about there being not enough engagement with groups representing young people, which has already been highlighted, and not enough ability to focus on the algorithm. What further work is being done on that?
Will Mr Halcro Johnston clarify one element of his question on the point about engagement with young people? Was it about their engagement in Professor Priestley’s review or with the whole process of assessment?
I am happy to clarify that. It was with regard to Professor Priestley’s review. He felt that he and his team did not have time to engage with as wide a range of young people’s groups as he would have liked.
11:45
Mr Halcro Johnston has essentially answered his own question for me. I wanted to move at pace on the issues that were raised by the exam diet, to make sure, as we are still in the midst of the pandemic, that our approach to 2021 could be informed by independent thinking that would challenge the way in which the Government has exercised its responsibilities and directed the SQA to exercise its responsibilities. The limited timescale was designed to do that.
I am quite sure that it would have been possible to have a 12-month academic exercise. However, in looking at the review, I find it a crisp and direct response to the challenges that we face. In commissioning Professor Priestley, I was commissioning a respected academic who has a strong track record in that area of activity. I was obtaining his expertise, and I knew that he would be able to assimilate and assess all the issues, engage appropriately, and give us recommendations. There was no learning curve for Professor Priestley in undertaking the exercise, because he and Dr Shapira, as the committee will know from their previous encounters, are superbly well qualified to do such work. That was part of the reason why I felt that I could have a short timescale. I needed a short timescale, because I had to communicate to the education system what our approach will be in 2021, and I knew that, based on his respected expertise, which he brings from everything that he has done in the past, Professor Priestley would probably be able to work within that timescale.
Although it was a swift timescale, I do not think that anything was lost as a result. Of course, there could have been more time for dialogue, but the way in which Professor Priestley has structured his recommendations puts the onus on various organisations—the Government, the SQA and other bodies—to engage in some of the further dialogue that he was perhaps not able to have as part of his exercise.
On the issue of the algorithm, I am sure that academic papers will be written about the algorithms that were used across the four education systems of the United Kingdom, but it is not mission critical that we understand the algorithm for 2021, because we will not be using it. I am sure that it will be an interesting point of academic research, but it is not on the critical path for delivery of the 2021 set of qualifications.
I would highlight that Professor Priestley himself raised those issues with regard to the timescales.
You rightly pointed out that there are a number of areas in which different groups may take work forward. What work is the Scottish Government doing, and are there plans to call for further work to be done in those areas ahead of the exams next year, or in relation to the resilience of the system?
The Government has committed to implementing eight of the nine recommendations in the course of this year. Work is under way on all eight, some of which has already led to decision making, such as in relation to the cancellation of the national 5 diet. While recognising the statutory requirement on the SQA to be the awarding body, and to lead the awarding process, Professor Priestley recommended taking the opportunity to act collaboratively with a number of other players. That is where the qualifications group has come from, and the SQA is taking that work forward as we speak. The group is meeting regularly—I think that it is meeting twice this week—to advance some of the issues. All that work is under way, and I am happy to give the committee a distillation of the steps that we are taking to advance those eight recommendations. If it would be helpful, I am happy to provide a progress report in writing to the committee, so that it can see what steps have been taken by the Government and other bodies.
I am sure that that would be helpful. I am conscious of the time, so I will leave it there.
Good morning, cabinet secretary—I think that it is still morning. I will ask you about the meeting at which you were briefed on the outcome of the SQA’s moderation. No minutes were taken at that meeting—it took me three separate parliamentary questions to establish that fact. However, it was a critically important meeting: a chance to pull the plug on a system that others today have said was unfair, and on results that caused no end of grief.
At such a crucial juncture for Scottish education, I am astonished that no notes were taken and that no records were kept. I note that you later said that there was no real value in that minute. I therefore ask: why was there no real value in that minute, and why were no notes taken at the meeting?
The meeting that took place was, in essence, a briefing meeting that explained to me what the outcome of the awarding process had been. Documentation was made available to me for that meeting, which I am almost entirely certain has been issued under a freedom of information request to set out all the material that I had in front of me on that occasion, as provided by the SQA and by the analysts in the Government. I am pretty certain that the material that was available to me has been published.
On the question of a minute, I note that we make a judgment about when it is appropriate to produce minutes within Government generally, and that it tends to be around the taking of decisions. I take part in endless meetings on a daily basis, not all of which are minuted, because not all of them are about taking decisions. If I am taking decisions in a meeting, they will be recorded. However, where I am involved in dialogue about certain questions, we could find ourselves having to write an almost verbatim account of every word that was said. I am afraid that we simply do not have the civil service resources to enable that to be the case and I do not think that it would be a justifiable use of public expenditure for that purpose. It is critically important that decision making is properly recorded, but I was not making decisions that night. I was, in essence, being briefed on the outcome of the awarding process that had been undertaken by the SQA.
I thank you for that answer.
Fairness, wellbeing and the building up of pupils’ skills and knowledge are priorities that the Welsh Liberal Democrat Minister for Education, Kirsty Williams, has been clear about. Her conclusion in Wales yesterday was to cancel exams for 2021. I know that a decision has been made about national 5s, and nobody is pretending that these are easy decisions. Nonetheless, will they be better decisions if ministers work in partnership with teachers? What is the early feedback on the alternatives for national 5s, and how is that fed into the Scottish Government’s thinking on other exams?
The first thing that I will say is about Kirsty Williams, whom I have had the pleasure of working with as a fellow education minister in Wales. She has made an outstanding contribution and I was disappointed to see that she intends to stand down at the forthcoming election. She has been a distinguished and thoughtful education minister in Wales and a great person to discuss issues with, because we both wrestle with the same difficult choices. She wrestled with a difficult choice around her exam diet over the summer, as I did about the forthcoming position in Scotland.
In answer to the question about how we engage with teachers, I talk to a lot of members of the teaching profession about those questions, as does the SQA. We have received many contributions about the right thing to do in relation to the planning of the 2021 diet, given the uncertainties that we still face around Covid. It would be fair to say that there is no unanimity about the right thing to do. There are different opinions, and teachers will express those different opinions. I was particularly influenced in my thinking around the decisions that I have taken about 2021 by the contribution of Professor Priestley in the form of his review and by the submission that I received from a group of headteachers, which is called the BOCSH—building our curriculum self-help—group. That group brings together a voluntary network of headteachers who argued for much the same type of approach that Professor Priestley argued for. I have had correspondence and representations from teachers saying that we should proceed with the national 5 diet and, equally, I have had recommendations that we should proceed with the higher and advanced higher diet.
There are judgments to be made, but I agree unreservedly with Beatrice Wishart that the decisions that we take should have at their heart fairness, the wellbeing of learners and the interests and prospects of learners. For that reason, having discussed the issue with learners, I judged that we should run a higher and advanced higher diet because learners are entitled to have the qualifications that enable them to progress to the next stages of their journey—whether in learning or work or in any other choice that they make—given that highers and advanced highers, in contrast to national 5, are exit qualifications from our school education system for the overwhelming majority of candidates. That factor weighed heavily in my judgment in coming to my conclusions.
Ross Greer is next.
Thank you, convener. To make a quick observation, cabinet secretary, you are using the words “exam” and “qualification” somewhat interchangeably. A higher qualification would be available to all young people in Scotland, whether or not they took the exam. If you had decided to do for highers what you have done for national 5s, young people would still have got those qualifications. We are talking about a judgment about the exam, not the qualification.
You said something very welcome in your opening statement, which was that no young person should be disadvantaged by disruption throughout the year—I presume that we are talking about self-isolation—when it comes to the final exams. I have been contacted by a number of constituents in that situation. To give one example, a young woman who is studying for highers and advanced highers contacted me because she has had to self-isolate twice already, so she has missed about four weeks of school. If that were to be replicated in the remaining two thirds of the term, she could miss up to 12 weeks—it could be less; it could be more. She said:
“It’s a very stressful time as it is, and missing a lot of school makes it worse. I feel underprepared for prelims and exams.”
If we park the prelim point for a minute, based on what you have said about the guidance that has been issued, how will a young person who has missed a significant chunk of the school year due to self-isolation have that taken into account when it comes to their higher or advanced higher exams?
Before I answer the substance of Mr Greer’s question, I should say that he is absolutely right to point out that I am using different words interchangeably and perhaps somewhat casually. There is no necessity to have an exam to award a qualification—that is an important observation to make at the outset.
On the circumstances that Mr Greer has raised with me, that is a very important and legitimate issue that we must watch carefully. That is why I said what I did about the fact that I would like to proceed with a higher exam diet and an advanced higher diet. I think that that would give young people greater certainty about their qualifications, including as a foundation for their next steps in life. However, we have to ensure that the approach is being applied fairly across the board.
Mr Greer raised the circumstances of an individual candidate. We have to watch and monitor very carefully the extent to which the experience of the candidate whom Mr Greer has mentioned is more widespread. I am hearing reports from schools of young people experiencing the type of disruption that Mr Greer talks about, although I do not know exactly how many such people there are. We must monitor that very carefully because, if young people have experienced that amount of disruption, that is obviously material to the question whether every young person has a fair opportunity to be presented for a higher or advanced higher qualification—I am sorry; I mean a higher or advanced higher examination—in spring. That is a material consideration for us in making that judgment.
12:00
The other issue that I want to raise is that every young person who is unable to attend school because of self-isolation should be supported in their learning by their school. That should be undertaken through access to online learning. A comprehensive proposition is available through the work of e-Sgoil, particularly for senior phase candidates, who will be able to access the digital learning resources that they require to enable them to undertake the curriculum and to give them the best opportunity to complete an examination in those circumstances.
There are different opportunities for young people to access remote learning. We have undertaken the provision of some study opportunities for out-of-school hours reinforcement work through e-Sgoil. Those opportunities have been fabulously well subscribed to by senior pupils, and they are providing young people with the opportunity to make up for some of the disruption to which Mr Greer has referred.
I am keeping the question under close review. It will be material to any consideration of the undertaking of the higher diet.
I appreciate that answer and your point that self-isolation does not mean that education is on pause for two weeks. However, I am sure that you appreciate that self-isolation affects young people in different ways and that, in particular, it is far more challenging and disruptive for young people with a variety of additional support needs if they are not in school. I listened with interest to the point that the decision that will be taken in February will be made not just on the basis of the potential cancellation of exams, but on the basis of what disruption throughout the year has looked like.
I will move on to the national 5s.
May I add just one more point to my earlier answer? Let us assume, for example, that the higher and advanced higher diet proceeds, but many young people experience significant disruption in the academic year. In that case, I would consider that those issues merited consideration under the exceptional circumstances procedures that the SQA will have in place.
Thank you. That is good to know.
I return to the national 5s. Cabinet secretary, you told me in a committee meeting in September and again in Parliament in October that you wanted to ensure that the approach to national 5 exams
“does not in any way add to teachers’ workloads.”—[Official Report, 7 October 2020; c 57.]
That has not happened, has it? Teachers need to mark more and produce more than normal on top of an already unmanageable workload. What has happened with the national 5s, and why has the workload for teachers increased, although there was a pretty clear commitment that that would not need to be the case?
I do not agree that teachers have to mark more and produce more for that approach to be undertaken. The SQA is making available to schools the assessments that it would be advisable for pupils to undertake and upon which teachers can form estimates and judgments. That material is being produced, distributed and made available to schools as we speak.
I do not think that teachers have to mark more because the SQA has advised against the holding of prelims for national 5s—obviously, those would have been set and marked by teachers in individual schools. The approach that we are taking to the availability of assessment materials is to view them as part of the rudimentary work that teachers would do during the year to assess the progress that individual candidates are making. I see it essentially as providing a bit more structure to the normal process of assessment of the performance of young people during the year as a consequence of the materials that are being made available by the SQA.
That is a debatable point, which is very much being contested by teachers. For English and maths courses, for example, the suggestion is that teachers will be required to set two tests and to produce a portfolio. Producing a portfolio for English courses is normal; it is not an additional workload. However, teachers are saying that having to set two specified tests looks a lot like putting the exam into the year and their having to mark the exam as part of their normal work.
As you are aware, teachers can often gain additional income by becoming an SQA marker. There is a feeling that teachers are being asked to do a considerable additional amount of work, for which they would usually be paid an additional sum, as part of their existing responsibilities. Before the pandemic, the committee discussed teacher workload with you. It has only gone up since then, and there is a strong feeling among teachers that the changes have significantly increased their workload.
If that is a contested point, that goes back, at least in part, to the issue of communication between the SQA and teachers. Has that communication improved? Given that the trajectory that we were heading towards became quite clear in the summer, one of the questions that I am asked most regularly is: why has it taken the SQA so long to finally give teachers the information? The full guidance for some subjects has still not been published. Why did the discussion with teachers not start earlier, so that we could have reached a point of something close to consensus by now?
The pursuit of consensus has, not surprisingly, taken some time. I am all for consensus on that question, because that will lead to a much smoother operation of the system.
Mr Greer’s questions have been framed very much around the SQA. Professor Priestley recommended that the SQA should engage more substantively on the 2021 exam diet with a range of stakeholders—principally, local authorities, the teaching trade unions and parents—and I am satisfied that the SQA has done that.
It takes time to bring people together because, as I have highlighted in a number of my answers, not everyone agrees on everything in Scottish education. In fact, in my experience, people rarely agree on much in Scottish education. It takes time for the system to be put together, and the SQA is working closely with stakeholders and partners to ensure that some of the underlying issues relating to the question that Mr Greer asked can be adequately and properly addressed as part of the approach that we take for the 2021 assessments. Work is under way to try to do that.
I give the committee the assurance that I will look very closely at the question of workload, because I genuinely do not believe that the changes need to increase the workload. Indeed, that consideration has been one of the key elements of the direction that I have given to the process. The SQA’s guidance, which has been worked on with partners, focuses on not having prelims. A school might still take the decision to have prelims, but the SQA has said that it does not think that prelims are necessary, and it has provided some assessments that can be undertaken during the year.
In the guidance that has been issued, the SQA has said that the quality of the assessments is more important than the quantity of them. Teachers do not need to gather a huge volume of evidence to ensure that candidates are given the best chance, which I know is what teachers want to give. The SQA has signalled that we should concentrate on the quality of the assessments, not the quantity of them, to enable the process to be undertaken correctly. I hope that the communication around that work can help to address some of the fears and anxieties. I will keep the issue under close review, and I will discuss it with teachers.
When will the final subject guidance for national 5s be published? When will we be at the point at which the subject-specific guidance for teachers for all national 5 subjects is published?
I think that that will be on 19 November. If I need to change that date, I will write to the committee to correct it, but I am pretty certain that it is 19 November.
Thank you.
I will ask the cabinet secretary a simple question. Does this year’s cohort of pupils have access to the same breadth and depth of subject choice and courses to which previous years have had access?
They might not have the same access because of the restrictions of the pandemic. There are certain circumstances—in level 3 areas, for example—in which it is difficult for young people to move between school and college because of the pandemic. There may be examples of places where young people are not able to pursue all the opportunities that they might wish to take, but that is on the basis of the public health advice that we have to follow.
In the interests of time, could you write to the committee to go into that in a bit more detail? It seems to me that it is clear that there are students for whom there is a reduction or a narrowing of courses or of access to courses or subjects. I ask that also from the point of view of parents, who may have concerns about that. It is important to get a little bit more detail on the narrowing that has taken place, where that is the case, and on the reasons and justifications for that.
We have to be really careful with our language here. There is no “narrowing”; there is the following of public health advice. That is what is going on. There is no “narrowing”.
No. Sorry—
No—I am going to take issue with that language. Unless I am misunderstanding where Mr Greene is going here, if the insinuation is going to be that opportunities for young people are being narrowed in 2021, the only reason for that is that we have a global pandemic on our hands. That is the point. I do not want this discussion to be framed in the language of “narrowing”. We all know what that means, and I totally contest that argument. I am not in a position in which I am prepared to be casual with public health advice.
Let me reframe that—and you are welcome to go with me on this. I refer to page 43 of Professor Priestley’s report on the review of the 2020 situation. The professor makes this clear, and I am using language from his report; I have not made this up. I will quote directly from the last paragraph on that page. On the SQA’s own plans for 2021, the report states:
“The review has uncovered concerns that the proposals will lead to a narrowing of courses, with significant implications for education. Related to this, it has been communicated to us that the proposals may impact negatively on attainment, particularly for disadvantaged students”.
Those are Professor Priestley’s words that I am communicating back to you, cabinet secretary. I am not making them up.
I think that Mr Greene is drawing the wrong conclusion from the words of Professor Priestley. The SQA is trying to ensure that, in reducing the scale of assessment that is required for certain qualifications, young people are still able to undertake the breadth of the curriculum to satisfy the fact that they have undertaken their courses and are therefore able to be certificated in those courses, while the fact is recognised that there has been an erosion this year of the learning and teaching time that is available to individual young people in certain circumstances.
That is not a narrowing of choice, which is what Mr Greene put to me; that is a different concept altogether. My answer on the narrowing of choice is based on the fact that there are some opportunities that are not available for young people just now because of public health advice. I would have thought that all of us, as members of the Parliament, would accept that point.
What Professor Priestley has indicated is that, in certain courses, the SQA is recommending a narrowing or a reduction in the volume of options that are taken forward, which does not in any way erode the breadth of the qualification for which a young person is presented. Those are two entirely different concepts.
Whichever way you spin it, there is still a narrowing.
They are two completely different concepts.
I will leave it there.
12:15
I refer to the point about the decision that will be made in February about the exam diet. You mentioned looking carefully at the impact of the disruption throughout the year. What data is being collected? We have pupil absence data, which is, I think, updated on a weekly basis throughout the year. It seems to be incredibly important for that decision that that absence data be broken down not only by year group but by taking into account other factors, such as the Scottish index of multiple deprivation and whether young people have additional support needs. If we were to find that the overall number of young people who faced significant disruption was very small but that that disproportionately affected those in SIMD 20 areas or young people with additional needs, for example, that would be materially relevant. What data is being collected? How is that data being published? Will it be available to the committee, for example?
Data on pupil absence is collected through the SEEMiS system. That enables us to look at absence patterns and how they affect young people.
We must also have information that is more connected than that, and such information can come only from engaging directly with schools. We have only 350 secondary schools in Scotland, so it is not a gargantuan challenge for us to engage in dialogue with local authorities to identify the scale of the problem. I would want to look at that question with reference to that dialogue rather than by relying only on what the SEEMiS system tells us about pupil absence. We must be satisfied that we are getting all the detail that is required to inform that decision.
We will actively monitor that question, and we are already in dialogue with local authorities about it. I can think of one local authority in Mr Greer’s region that is already raising questions with us about the anecdotal evidence of some pupils’ experiences. We will continue that dialogue to enable us to inform the decision.
Mr Greer said that a decision will be made in February. I have said that that is the final point at which a decision to revert could be made. We could come to a different conclusion at an earlier stage, of course. To take such a decision earlier would be advantageous, as that would give us more time to pivot to the other approaches that would be available to us.
Has a code been put into the SEEMiS system to allow schools to record self-isolation? If not, is there consistency in how self-isolation is being recorded in that system?
We see information about Covid-related absences, so I assume that there must be a code. I see information on a daily basis about pupil absences and the proportion of those that are or are not for Covid-related reasons or that are authorised absences or that match various other criteria. I suspect that the origins of that must be a change to the SEEMiS codes.
Those are all the questions from the committee. Thank you very much for your attendance at the committee, cabinet secretary. We will now move into private session on Microsoft Teams.
12:19
Meeting continued in private until 12:47.