The AI landscape in Scotland, the UK and Europe – an update

In February a public consultation exercise started to gather views to help shape the AI Strategy for Scotland. Responses are currently being sought from the broad spectrum of citizens and businesses based here. Please read more below and give your opinion to help deliver a truly transformational approach for Scotland. The full strategy is expected to be ready for publication in summer 2020.

Two further papers were published in February from both the UK Government and the European Commission. A UK parliamentary committee published a report on the impact of AI on public standards, and the European Commission published their data strategy and a white paper on the development of Artificial Intelligence.

A number of common themes run through all three, but key among them are both the opportunities and the risks that AI brings.

Detailed below are the key messages from each of these reports. If you have any questions please contact Katy Guthrie, cluster manager for Scottish data companies and Head of ScotlandIS Data.


A scoping document was published to accompany the public consultation on the AI Strategy for Scotland. This has two main strategic goals for the adoption of AI:

  • That the people of Scotland will flourish
  • That Scotland’s organisations will thrive and prosper

ScotlandIS sits on the steering group and will be involved in the working group on the development and commercialisation of AI.

The public consultation on the AI Strategy for Scotland runs until 27th March and is an opportunity to make sure the strategy works for everyone. It needs to be representative, so forward the link to parents, grandparents, teenage kids, and friends who work in totally different sectors.

To help engage different groups of people, there are some useful resources to aid understanding:

Please also fill in the consultation with your own views and concerns, to help us make sure the strategy also works to develop a strong and safe business sector delivering high-quality AI products and services:


In 2018, the UK government published the AI Sector Deal, which led to the creation of three new institutions: a Government Office for AI, an industry-led AI Council, and the Centre for Data Ethics and Innovation (CDEI). The recent UK committee report concluded that a new regulator is not required, but that existing regulations do need to be adapted to address the challenges brought by AI, with the CDEI playing a central role in advising all the relevant regulators. In particular, transparency and bias are singled out as areas where current regulation and guidance are deficient. These areas were covered in the draft guidance that the ICO has recently been consulting on. The report also highlights some of the existing ethical frameworks which should guide organisations in their use of AI.

The full paper is available here:


The European AI white paper highlights the fact that although investment in research and innovation in Europe is rising – it rose by 70% in the past three years compared to the previous period – it remains low in comparison to public and private investment in other regions of the world. The landscape in Europe is fragmented, and more synergies and networks are needed to align efforts. One action relates to the launch of a scheme to provide €100 million in equity financing for innovative AI developments. Another relates to the creation of specialist AI digital innovation hubs supported by the Digital Europe programme, and the creation of public-private partnerships.

Together these points underline the importance for Scotland to maintain a close link with EU AI funding and centres of excellence. This is an area to watch closely during the negotiations about the future UK-EU relationship and we will explore with partners if and how Scotland’s data cluster can benefit from any future funding and co-operation opportunities.

Lack of trust is seen as a major factor holding back the wider development of AI. While a regulatory framework does exist (GDPR, the Equality Act), changes are proposed, and a clearer regulatory framework should help build trust and confidence in both citizens and businesses and therefore accelerate development. The white paper proposes a risk-based approach to regulating the development of AI: a sliding scale ranging from a very light touch for low- or no-risk applications to an outright ban for certain high-risk applications. The way in which risk is quantified needs to be tightly defined. It is not fully defined in the white paper, but the paper suggests it should be determined by a combination of the sector and the specific use, and that the regulatory framework should also focus on how to minimise risk. The paper also references the difficulty in enforcing current regulation, in particular in terms of accountability when multiple parties are involved.

As with the UK perspective, any changes to the regulatory framework are likely to focus on ensuring that the data on which AI algorithms are trained is free from bias, and to mandate that records proving this are kept. The regulatory framework is also expected to require that systems are resilient to attack, including more subtle attacks designed to manipulate results and the decisions taken. Legislation should include clarifications related to transparency and to the risk of ongoing changes made in response to new data (i.e. “safety” is not fixed at the point of sale). Obligations are also likely to include greater requirements for human intervention and oversight, particularly for high-risk applications, and specific provision is expected for biometric identification algorithms. Another suggestion made in the document is a voluntary labelling scheme for non-high-risk applications, where, by demonstrating compliance with a set of criteria, AI product providers could be awarded a “quality” label in a similar way to kitemarks.

The full paper is available here:
