Increasingly invasive technologies in schools: child protection is just an excuse

Schools are increasingly seen as testing grounds for new technologies, as if everything innovative were necessarily also right and useful for protecting students. This irresponsible use of invasive systems can in fact pose great risks to students' rights. Let's try to understand which ones.

A few days ago came the news that the Italian Data Protection Authority had opened an investigation into a platform focused on the social scoring of students, but this is only part of the problem.

Facial recognition to curb massacres in American schools

In the United States, for example, Clearview – a company already sanctioned in Italy – is preparing to install facial recognition cameras in schools.

In this case, the spark that lit the fuse was yet another massacre of schoolchildren at the hands of an armed classmate. It should be noted, however, that this system brings no significant benefit in terms of reducing such cases: at most, it allows faster identification of someone who has already entered the school and opened fire on classmates.

A truly tangible effect would come not from equipping schools with facial recognition, but from a real campaign against firearms, aimed at better regulating their sale and possession. Installing facial recognition cameras will, beyond an appearance of increased security, have only two effects: enriching Clearview, whose practices have by now been declared unlawful almost everywhere; and forcing children and teachers to undergo biometric processing of their data, resulting in the creation of databases that will then be shared with police forces around the world (this is Clearview's core business), contributing to the mass surveillance society so vividly described by Shoshana Zuboff.

Predictive intelligence to steer students' studies

Unfortunately, however, the problems for students do not end here, as a third invasive system is entering the school world: predictive intelligence aimed at guiding the child in his or her studies.

Let us be clear: as long as these predictive guidance systems are used independently by students, without any intervention from the schools, they are, in the author's opinion, ethically and legally acceptable. Everything changes radically, however, when the school intervenes in the process, even if only briefly, and the results are shared, for example, with potential employers. As we will also see below, artificial intelligence systems have too often shown malfunctions due to biases of various kinds; entrusting them with decisions such as whether to admit a young person to a university or hire him for a job is therefore completely wrong, and can have irreparable consequences for these children's lives.

The (unlikely) justification

But why do these technologies focus so much on schools?

Anyone who has been involved in digital law since before May 25, 2018, will remember how the relationship between minors and technology has always been characterized by a bond that is toxic, to say the least, in which one party (the minors) suffers while the other exploits a favorable situation to pursue secondary interests.

Let me explain: when conservatives in the United States wanted to censor vulgarity and explicit content, they did so by claiming it was only to protect children. When the EU decided to launch legislation that would allow the content of the messages (such as text messages) of all Europeans to be read, it did so by saying it was to protect children from pedophiles, and so on.

The point is that children, on many occasions, in Europe as in the United States, have been used as an excuse for adopting rules that restrict fundamental freedoms. This is essentially because, by shifting the discourse from adults to minors, anyone who tries to explain why the use of a freedom-killing technology is inadvisable risks being considered insensitive to children's rights. It would be social suicide, and so hardly anyone opposes those who propose adopting technologies in the name of children's presumed safety or, in any case, of their future inclusion in the adult world.

In this sense, the attempts by some companies to sell schools highly invasive systems that may adversely affect children's rights are nothing new.

Of course, if you talk to one of these manufacturers, they will tell you that these systems are error-free. They are not.

The precedent

The students of English schools know something about this: after being unable to complete the school year regularly due to the first wave of Covid, they were subjected to the decision of an algorithm that determined who would pass and who would fail.

A few days later, however, the students noticed something strange: only the children of wealthy families had passed, while the children of less affluent families had been failed. This bias was due to the fact that the algorithm had been trained on data from the past in order to predict the future; but the past always differs from the future, all the more so in education, where in recent decades only wealthy families could afford to have their children complete their studies. Hence the skewed data, according to which only the rich finish school, which led to failure for all the children of poor families.
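The mechanism described above can be illustrated with a minimal sketch. The data, the brackets, and the threshold below are entirely hypothetical and do not reflect the actual English system; the point is only to show how a model trained on skewed historical records reproduces that skew, judging each student by their group's past rather than by individual merit:

```python
# Minimal sketch (hypothetical data): an algorithm trained on biased
# historical records reproduces that bias in its predictions.

from collections import defaultdict

# Hypothetical past records: (family income bracket, completed school?).
# In this skewed history, wealthy students mostly finished; poor students mostly did not.
history = [
    ("wealthy", True), ("wealthy", True), ("wealthy", True), ("wealthy", False),
    ("poor", False), ("poor", False), ("poor", False), ("poor", True),
]

def train(records):
    """Estimate the historical pass rate for each income bracket."""
    totals, passes = defaultdict(int), defaultdict(int)
    for bracket, passed in records:
        totals[bracket] += 1
        if passed:
            passes[bracket] += 1
    return {b: passes[b] / totals[b] for b in totals}

def predict(model, bracket, threshold=0.5):
    """'Promote' a student if their bracket's historical pass rate clears the threshold.

    Note: the individual student's own work never enters the decision.
    """
    return model[bracket] >= threshold

model = train(history)
print(predict(model, "wealthy"))  # True: promoted regardless of individual merit
print(predict(model, "poor"))     # False: failed regardless of individual merit
```

However diligent a poor student is, the model can only echo the group statistic it was trained on; this is the "skewed data" problem in miniature.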

Problems like these are commonplace when algorithms are used, so it is very important to raise awareness and, above all, to train school staff, since a simple mistake could irreparably ruin a young student's life. The technology is not flawless.

The absence of a legal basis

It is therefore no coincidence that the Data Protection Authority intended to nip the problem in the bud by launching investigations into social scoring within school walls.

Setting aside the ethical question for the moment, it is, in the author's opinion, worth highlighting the total absence of an adequate legal basis to support such processing. In these schools, someone has probably sent parents an information notice together with a nice request for consent, forgetting, however, that under the GDPR and the EDPB Guidelines on consent, this legal basis cannot be relied upon by public administrations. This is because the citizen (in our case, a student) is not truly able to assert his contractual weight against a public administration, which acts from a position of power over the ordinary person. For this reason, any consent given must be considered invalid.

Neither consent, then, nor the public interest, nor indeed a legal obligation can justify such data processing, which must therefore be considered absolutely unlawful.

From the ethical point of view, too, this type of processing presents enormous problems. Social scoring ultimately means judging a child's actions in order to predict what kind of adult he will become, and excluding certain professional paths and life choices on the basis of actions performed even twenty years earlier. Which reader can honestly say they do not have a friend who as a child was one of the "hopeless cases" and who, over the years, redeemed himself and may now enjoy an enviable social position? With social scoring, this will never happen again, because a child's actions will determine his future. If a primary school child is boisterous, with a tendency toward violence, social scoring or predictive technologies can trap him in a cluster that will prevent him from aspiring to a better social position. Worse still, he will be prevented from redeeming himself, and this, in the author's opinion, is even more serious.

After all, even the penal system is based on the concept of re-education: helping the offender overcome his problems, redeem himself, and once again become a fully integrated member of the community. If this is the goal of criminal law, it is unclear how it can be compatible with a system that, even absent any crime, judges a person's behavior and brands him forever.


After all, schools were not created for these purposes, and this should be remembered more often. For this reason, they should not lend themselves to such social experiments, nor succumb to the temptation of a "technological solutionism" capable of leading the educational institution down impenetrable paths that do not belong to it.
