Safety in digital innovation requires innovators to take preventive measures proportionate to the risks and to provide remedies, support and care for victims. However, careful consideration is also needed to ensure that the protection of children does not come at the cost of children’s full enjoyment of the digital environment and of their other rights.
The principle of safety draws together three sets of children’s rights:
Protection against abuse and neglect: considering how digital technologies can be abused to facilitate violence and harm against children or to recruit children for extremist, terrorist or other violent activities.
Special protection against sexual exploitation and sexual abuse: including the use of digital technologies ‘to solicit children for sexual purposes and to participate in online child sexual abuse’.
Promotion of physical and psychological recovery and social reintegration of child victims: within an environment that encourages good ‘health, self-respect and dignity of the child’.
In the digital environment, risks of harm to children manifest in various forms, including ‘cyberaggression’, ‘cyberattacks’ and ‘information warfare’; digital technologies could also be used to facilitate ‘child trafficking’ and ‘gender-based violence’.
Making digital innovation safe by design for children does not mean boxing children into a safe digital corner. Instead, embedding safety into digital innovation by design means thoroughly assessing the risks of both the intended and unintended use of digital technologies, and building proportionate measures to mitigate these risks and provide appropriate support and redress when harms occur.
Relevant legal frameworks and guidance
To assess online risks to children and devise proportionate measures, digital innovators need to recognise different categories of risks: content, contact, conduct and contract. The UK Online Safety Bill prescribes ‘children’s risk assessment duties’ to assess and mitigate content, conduct and contact risks to children. For example, mitigating contact risks requires digital innovators to assess the risk of ‘functionalities enabling adults to contact other users (including children) by means of the service.’
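To make the content–contact–conduct–contract categorisation concrete, the sketch below shows one way an innovator might record which functionalities of a service raise which categories of risk as a starting point for a children’s risk assessment. It is a minimal illustration in Python; the feature names and mappings are hypothetical assumptions of our own, not drawn from the Online Safety Bill or any particular service.

```python
from enum import Enum

class RiskCategory(Enum):
    CONTENT = "content"    # exposure to harmful or illegal material
    CONTACT = "contact"    # unwanted or unsafe interaction with other users
    CONDUCT = "conduct"    # harmful behaviour by or between users
    CONTRACT = "contract"  # unfair commercial terms or data exploitation

# Hypothetical mapping from product functionalities to the categories of risk
# they may raise; a real assessment would be far richer and expert-reviewed.
FEATURE_RISK_MAP: dict[str, set[RiskCategory]] = {
    "direct_messaging": {RiskCategory.CONTACT, RiskCategory.CONDUCT},
    "user_generated_content": {RiskCategory.CONTENT, RiskCategory.CONDUCT},
    "live_streaming": {RiskCategory.CONTENT, RiskCategory.CONTACT},
    "in_app_purchases": {RiskCategory.CONTRACT},
}

def assess(features: list[str]) -> dict[RiskCategory, list[str]]:
    """Group a service's functionalities by the risk categories they may raise."""
    grouped: dict[RiskCategory, list[str]] = {}
    for feature in features:
        for category in FEATURE_RISK_MAP.get(feature, set()):
            grouped.setdefault(category, []).append(feature)
    return grouped

if __name__ == "__main__":
    # A service offering messaging and live streaming flags contact, conduct
    # and content risks that call for proportionate mitigation measures.
    print(assess(["direct_messaging", "live_streaming"]))
```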
Design cases
While hate, cyberbullying and harmful online content (e.g., ‘proana’) are symptoms of social ills, digital innovation can form part of the solution. By taking safety-by-design measures, digital innovators can design safe spaces for users, including children, while ensuring that abusive behaviour and harmful content do not reach them.
Notice-and-take-down and moderation systems are standard measures referred to in legislation and voluntary guidelines. Their purpose is to ‘detect, surface, and remove illegal and harmful conduct, contact and content with an aim for preventing harms before they occur’, supported by ‘feedback loops that inform users on the status of their reports [and outcomes]’.
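As an illustration only, the following Python sketch models the core of such a system: a user reports content, a moderator resolves the report, and the reporter is informed of the status and outcome (the ‘feedback loop’). All names are hypothetical; real moderation pipelines involve triage, escalation, appeals and human review at scale.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable
import itertools

class ReportStatus(Enum):
    RECEIVED = "received"
    REMOVED = "content removed"
    NO_ACTION = "no violation found"

@dataclass
class Report:
    id: int
    reporter_id: str
    content_id: str
    reason: str
    status: ReportStatus = ReportStatus.RECEIVED

class ModerationQueue:
    """Minimal notice-and-take-down loop: users report content, moderators
    decide, and the reporter is notified of the outcome (the feedback loop)."""

    def __init__(self, notify: Callable[[str, str], None]):
        self._ids = itertools.count(1)
        self._reports: dict[int, Report] = {}
        self._notify = notify  # e.g. an in-app or email notification sender

    def file_report(self, reporter_id: str, content_id: str, reason: str) -> int:
        report = Report(next(self._ids), reporter_id, content_id, reason)
        self._reports[report.id] = report
        self._notify(reporter_id, f"Report {report.id} received and queued for review.")
        return report.id

    def resolve(self, report_id: int, violates_policy: bool) -> None:
        report = self._reports[report_id]
        report.status = ReportStatus.REMOVED if violates_policy else ReportStatus.NO_ACTION
        # Feedback loop: tell the reporter what happened to their report.
        self._notify(report.reporter_id, f"Report {report_id}: {report.status.value}.")

if __name__ == "__main__":
    queue = ModerationQueue(notify=lambda user, msg: print(f"[to {user}] {msg}"))
    report_id = queue.file_report("child_user_42", "post_987", "bullying")
    queue.resolve(report_id, violates_policy=True)
```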
However, the administration of moderation systems could interfere with children’s other rights, including freedom of expression and information. To address this, the Santa Clara Principles on Transparency and Accountability in Content Moderation emphasise:
Human rights and due process
Understandable rules and policies
Cultural competence
State involvement in content moderation
Integrity and explainability.
Other measures to mitigate online risks are also available. Examples include ‘blocking technologies’, ‘heuristic filtering’, ‘automated CSAM detection’ and ‘web crawlers.’
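As a rough illustration of how hash-based detection and heuristic filtering operate, the Python sketch below checks an upload’s cryptographic hash against a placeholder list of known-material hashes and runs a crude keyword filter. This is a simplified assumption-laden sketch: production systems typically rely on perceptual hashing, vetted hash lists supplied by specialist organisations and human review, and the names and lists here are placeholders.

```python
import hashlib

# Placeholder set of digests of known illegal material; in practice such
# lists are supplied by trusted specialist bodies, not built by the service.
KNOWN_ILLEGAL_HASHES: set[str] = set()

# Placeholder heuristic keyword list; real filters combine classifiers,
# context and human review to avoid over-blocking lawful speech.
BLOCKED_TERMS: set[str] = {"example-harmful-term"}

def exact_hash_match(file_bytes: bytes) -> bool:
    """Exact-match check of an upload against a hash list. Production systems
    typically use perceptual hashing so that re-encoded or slightly altered
    copies of known material are still detected."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_ILLEGAL_HASHES

def heuristic_text_filter(text: str) -> bool:
    """Very crude keyword heuristic for flagging text for further review."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

if __name__ == "__main__":
    print(exact_hash_match(b"uploaded file bytes"))    # False with an empty list
    print(heuristic_text_filter("a harmless caption"))  # False
```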
To build safety into digital innovation by design, digital innovators first need to understand the nature of online risks to children, assess them against the features and functionalities of their products and services, and devise proportionate measures to mitigate them. Innovators also need to support children in ways compatible with their needs and appropriate to the risks.