AI in Education: Clara Hawking, Globeducate
In the first feature of our new series “Conversations about AI in Education”, EducationInvestor interviews Clara Hawking, who has recently been appointed as head of artificial intelligence at Globeducate.
What are the main challenges that educational institutions and organisations face when trying to adopt AI and integrate these technologies into their models?
The adoption of AI-driven technologies and products in education has created an urgent need for teacher reskilling and upskilling. Educators will need to learn how AI works, what generative AI is and how it can be useful to them. They will also need to make critical choices and be capable of deciding when AI might not be useful and when they should not adopt it. Teachers will also have to learn how to integrate these technologies into their lessons and think about how curriculum and assessment can be changed and developed to meet the demands of this new technology in a meaningful way.

That is why, in my new role at Globeducate, I am focusing on webinars, face-to-face workshops and in-person meetings, travelling out to all our schools to work with our educators and school leadership teams to facilitate the development of a safe learning space where they can effectively embrace these technologies and become familiar with them. We are also working on establishing a digital learning platform for educators, where they can take micro-lessons in the areas where they identify weaknesses and need further upskilling. And we are making sure that all this training material is available in multiple languages, so that every educator can learn comfortably in their mother tongue.
The other major concern with AI adoption is that innovation has moved much more quickly than regulation. This has left us and our educators in a complex situation where the technology is pushing its way into our schools without us having a framework for the boundaries we need to set. We still do not have meaningful regulation and guidelines around data and privacy use in generative AI, either on a global or a national scale. That is why we are working very hard on putting together our own policy and regulation guidelines – a proper framework not just for integration, but also for benchmarking the technology, to ensure it meets our high safeguarding standards. This will help our schools develop an appropriate response to major concerns and use these new technologies to meet demands, pioneer new initiatives and be innovative, while at the same time safeguarding our students and setting rules to prevent data and privacy issues.
Given the propelling force of AI technologies, is it still possible for traditional educational institutions to retain their structure and role? Or are they likely to be left behind by the AI revolution and replaced by something completely different?
We are at an interesting, exciting generational intersection. There is a powerful disruption happening and it is not going to unhappen; we are not going back to the way things were. And this is just the beginning – we will certainly see further disruption. What is coming is a total change in how we think about education, about everything that education involves, from what a school looks like to the role of the teacher in the classroom. It is not only innovation in technology that is fuelling this change, or the lag in regulation, but also the new generations. We can observe a deep mindset shift in our students. The students of today are digital natives, while most of their teachers are not. This creates a deep gap that needs to be bridged, and it is the older generations who need to do that: to learn what it means to be human in a digital world and become able to reach our digital-native children. The way that schools have operated for the last couple of centuries has irreversibly changed and there is no going back. Traditional educational institutions need to accept the disruption and fully embrace the change if they do not want to disappear.
What are, in your opinion, the most advanced and exciting innovations and developments in AI technology for education?
For me, the most exciting technology is interactive AI, which draws on all digitally available human knowledge and holds conversations with multiple people while we interact with other platforms, social media accounts, software and AI tools. This is a technology that is extremely adaptable and responsive, capable of going further than just analysing data to make predictions based on a predetermined model. It is a type of AI that really knows you and can become your personal assistant – an assistant that knows what you are looking for and is able even to anticipate some of the things you need in your work life and private life. Through this technology, we could automate many tasks, so that we would be free to do the activities we really enjoy, follow our passions and get some of our time back for ourselves. In education, these kinds of tools would be capable of boosting engagement with our students and making their learning experience much more enjoyable and successful. Educators could use AI as a teaching assistant – an assistant who knows very precisely the learning needs of every student in the class. Students would not be learning passively; they would be learning actively. And the most exciting aspect of this technology is that our students would have an opportunity to learn without needing to sit behind a screen. We could regain the active approach and the creativity that I think we have been missing for many years in standard education.
Despite positive learning outcomes, there is still resistance and fear around AI adoption. How do you view these negative feelings and perspectives on AI? Do they need to be fought, or taken into consideration?
I welcome the resistance. I think it is very important that we have people who are critical of innovation, and that we consider the ethics, the consequences and the potential outcomes of what we are creating. There are real risks in developing AI technologies, and we need to think about what impact our strategies might have. It is important that we remain determined in our efforts to continue developing AI while at the same time taking potential risks seriously. This means, for all of us, rethinking what our relationship with technology is and how we want to shape that relationship in the future. That is why it is very important that everybody is involved – social scientists, philosophers and historians, as well as mathematicians, tech experts and scientists, and our young people – so that we can all come together in an open forum to have these very important debates. We must be responsible enough to recognise that there is good in AI but also bad, and be willing to engage with these conversations, so that we do not just innovate, but innovate and progress while considering the long-term consequences of our innovation, making sure they move in the direction of enriching and improving human life.
We know that AI has become very attractive to investors. However, the regulatory issues mentioned above can also make an investment much riskier. How important is AI to the investability of an educational business?
It has become crucial for businesses that are trying to grow, and for investors as well. That is why it is imperative for organisations to establish the role of head of AI. Every educational organisation, big and small, should have a designated person who focuses exclusively on this. There are simply so many variables in play around AI right now that there needs to be a gatekeeper for how this technology comes in and is integrated within an organisation. There needs to be a qualified person who can do that research, stay up to date with technological developments and keep up with evolving regulation to ensure relevant strategy, compliance and safety. It is also imperative for organisations to be proactive in understanding political trends around AI, so that they do not inadvertently break a new regulation that has just been put into place, for example. It is an absolute investment imperative to ensure that there is a person placed strategically in that role within every organisation – the higher up the better – to hold these threads together and help facilitate a safe and prosperous transition into the future with generative AI.