Digital India Act - Child Online Safety Recommendations (Draft)

Working paper in consultation with: IGPP, Wranga & YOLO, Upsurge, ActionAid, Tulir, The Gender Project, State Planning Commission, Centre for Child Rights and Development, IMAARA, Animesh Sinha & Partners, Siddhant Chatterjee, Karan Baweja, Sumitro Chatterjee, Raman Chandna, Venkatesh Ramamrat, Laksh Khanna, Chandni Chhajer, Anantha Giri, Velu Mariappan, Xavier Vivek Jerry and Andrew Sesuraj.

Based on existing laws in other countries and best practices in the field of child online safety, a possible outline for a proposed Indian law for children's internet protection could include the following elements:

  1. Age Assurance: Social media intermediaries must implement a mechanism for users to verify their age, and minors under the age of 18 must not be allowed to use their services without parental consent.
  2. Privacy Protection: Social media intermediaries must provide robust privacy protections for minors, including limiting the collection, use, and sharing of their personal information.
  3. Cyberbullying Prevention: The law should prohibit cyberbullying and require social media intermediaries to implement measures to prevent and respond to instances of cyberbullying.
  4. Harmful Content: Social media intermediaries should be required to take proactive measures to prevent minors from accessing harmful content, including explicit or violent content, hate speech, and online grooming.
  5. Parental Involvement: The law should encourage parental involvement in their children's online activities and require social media intermediaries to provide tools and resources for parents to monitor and control their children's online behaviour.
  6. Reporting Mechanism: Social media intermediaries should be required to establish a reporting mechanism for users to report instances of harmful content or behaviour, and to promptly respond to and take down such content or behaviour.
  7. Penalties: The law should establish penalties for non-compliance with the above provisions, including fines, suspension or revocation of licence, and criminal liability for repeated or egregious violations.

Age Assurance Laws + Mechanisms

Age verification laws vary across the globe, and they are constantly evolving to keep up with the changing digital landscape. Here are some examples of age verification laws in different countries:

  1. United States: The Children's Online Privacy Protection Act (COPPA) requires online services to obtain verifiable parental consent before collecting personal information from children under the age of 13. In practice, this means services must implement age-screening mechanisms so that children under 13 cannot create accounts without parental consent.
  2. United Kingdom: The Digital Economy Act 2017 requires commercial websites that contain pornographic material to implement age verification mechanisms to prevent children under the age of 18 from accessing such material (these provisions were never brought into force and have since been superseded by the Online Safety Act 2023).
  3. Australia: The Enhancing Online Safety Act 2015 established the eSafety Commissioner and a complaints scheme under which social media services must remove cyberbullying material targeted at Australian children and provide mechanisms for users to report harmful material.
  4. European Union: The General Data Protection Regulation (GDPR) requires online services to obtain parental consent before collecting personal information from children under the age of 16. Member states of the European Union may choose to lower the age requirement to 13 years old.
  5. Canada: The Protecting Canadians from Online Crime Act addresses online abuse by criminalizing the non-consensual distribution of intimate images. Canada has no dedicated age verification statute; consent for young children's data is instead governed by PIPEDA (see below).
In India, Aadhaar, UPI, and mobile number verification can be deployed for this purpose.

There are several online age verification mechanisms available to help ensure that minors are not accessing inappropriate content or services online. Here are some common age verification mechanisms:
  1. Date of Birth Verification: This is the most common age verification method, where the user is asked to enter their date of birth during account creation. However, this method is not foolproof as it relies on the user's honesty and accuracy in providing their date of birth.
  2. Third-Party Verification Services: Third-party verification services, such as Experian, Equifax, or TransUnion, check the user's details against credit files and other identity records to verify the user's age. This method is more reliable than date of birth verification, but it can also be costly and time-consuming.
  3. Credit Card Verification: Credit card verification requires users to enter their credit card information to confirm their age. However, this method may not be suitable for minors who do not have access to a credit card.
  4. Social Security Number Verification: Social security number verification requires users to enter their social security number to confirm their age. However, this method may not be suitable for minors who do not have a social security number.
  5. Mobile Carrier Verification: Mobile carrier verification requires users to enter their phone number and confirm a one-time code received via SMS or phone call. The code proves possession of the number; the age signal itself comes from the carrier's subscriber records, since mobile connections are generally registered to an adult or against identity documents. This method is suitable for minors who have a mobile phone but not for those who do not (see the sketch below).
It's important to note that each age verification mechanism has its strengths and weaknesses, and no single method is perfect. Online services may need to use a combination of these methods to ensure that minors are not accessing inappropriate content or services online.
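To make the mobile carrier mechanism (item 5) concrete, here is a minimal sketch of the one-time-password (OTP) flow in Python. The `send_sms` gateway stub, the in-memory store, and the five-minute expiry are illustrative assumptions, not a prescribed design; a production system would call a real SMS provider and pair this possession check with carrier records or Aadhaar-linked KYC data, as suggested above, to support the actual age inference.

```python
import secrets
import time
from datetime import date

OTP_TTL_SECONDS = 300   # codes expire after five minutes
_pending = {}           # phone_number -> (otp, issued_at)

def send_sms(phone_number: str, message: str) -> None:
    """Stand-in for a real SMS gateway call (hypothetical)."""
    print(f"SMS to {phone_number}: {message}")

def request_otp(phone_number: str) -> None:
    """Issue a 6-digit one-time code to prove possession of the number."""
    otp = f"{secrets.randbelow(10**6):06d}"
    _pending[phone_number] = (otp, time.time())
    send_sms(phone_number, f"Your verification code is {otp}")

def verify_otp(phone_number: str, submitted: str) -> bool:
    """Check a submitted code; codes are single-use and time-limited."""
    entry = _pending.pop(phone_number, None)
    if entry is None:
        return False
    otp, issued_at = entry
    return submitted == otp and (time.time() - issued_at) <= OTP_TTL_SECONDS

def is_adult(dob: date) -> bool:
    """Compare a declared date of birth against the 18-year threshold."""
    today = date.today()
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= 18
```

Note that the OTP only proves possession of the number; the age signal comes from the records attached to the SIM, which in India are tied to identity documents at registration.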

If you would like to give your feedback, please fill this form.

Children's Online Data Protection Laws + Recommendations

Children's online privacy protection laws are designed to protect the personal information of children under the age of 13 or 16, depending on the country. Here are some examples of children's online privacy protection laws across the globe:

  1. United States: The Children's Online Privacy Protection Act (COPPA) regulates the collection of personal information from children under the age of 13 by websites and online services. It requires websites to obtain verifiable parental consent before collecting personal information, and it sets guidelines for the privacy policies that websites must provide to parents.
  2. European Union: The General Data Protection Regulation (GDPR) sets guidelines for the collection and processing of personal data, including that of children under the age of 16. Where a service relies on consent, it must be given or authorised by the holder of parental responsibility, and privacy information must be presented in clear, plain language that children can understand.
  3. Canada: The Personal Information Protection and Electronic Documents Act (PIPEDA) sets guidelines for the collection, use, and disclosure of personal information. Under guidance from the Office of the Privacy Commissioner, consent for children under the age of 13 must be obtained from a parent or guardian.
  4. Australia: The Privacy Act 1988 sets guidelines for the collection, use, and disclosure of personal information. It does not fix a statutory age of consent; regulator guidance requires consent from a parent or guardian where a child (generally one under 15) lacks the capacity to consent.
  5. United Kingdom: The Data Protection Act 2018 sets guidelines for the collection, use, and disclosure of personal information, including that of children. It requires websites to obtain verifiable parental consent before collecting personal information from children under the age of 13.
It's important to note that these examples are not exhaustive, and each country has its own specific laws and guidelines for protecting children's online privacy.

These frameworks set guidelines and best practices for websites and online services to follow in order to protect the personal information of children. Here are some examples:
  1. The Children's Online Privacy Protection Act (COPPA) in the United States sets guidelines for websites and online services to obtain verifiable parental consent before collecting personal information from children under the age of 13. It also requires websites to post privacy policies that are easily accessible and understandable for parents.
  2. The General Data Protection Regulation (GDPR) in the European Union sets guidelines for the collection and processing of personal data, including that of children. It requires websites to obtain verifiable parental consent before collecting personal information from children under the age of 16.
  3. The Children's Code in the United Kingdom sets out 15 standards that online services must follow to protect the privacy of children. These standards include age-appropriate design, parental controls, and privacy settings that default to high privacy (illustrated in the sketch after this list).
  4. The Privacy Commissioner of Canada has published guidelines for websites and online services to protect the personal information of children under the age of 13. These guidelines include obtaining verifiable parental consent, limiting the collection of personal information, and providing clear and understandable privacy policies.
  5. The Australian Privacy Principles set guidelines for the collection, use, and disclosure of personal information, including that of children. Where a child lacks the capacity to consent, consent must be obtained from a parent or guardian.
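The Children's Code standard of high-privacy defaults (item 3 above) can be illustrated in code. In the Python sketch below, the setting names and values are hypothetical; the point is only that a minor's account starts from the most restrictive configuration, and any loosening is an explicit opt-in.

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    # Hypothetical setting names; minors' defaults follow "high privacy".
    profile_visibility: str = "private"     # not discoverable by strangers
    location_sharing: bool = False          # geolocation off unless opted in
    personalised_ads: bool = False          # no behavioural advertising
    direct_messages: str = "contacts_only"  # no unsolicited contact
    data_retention_days: int = 30           # minimal retention by default

def default_settings_for(age: int) -> PrivacySettings:
    """Minors always receive the high-privacy defaults."""
    if age < 18:
        return PrivacySettings()
    # Adults may start from a looser baseline.
    return PrivacySettings(profile_visibility="public",
                           direct_messages="anyone",
                           data_retention_days=365)
```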
If you would like to give your feedback, please fill this form.

Cyberbullying Laws + Recommendations

Cyberbullying is a form of online harassment or bullying that can take many forms, including spreading rumours, making threats, or posting abusive messages or images online. Here are some examples of laws against cyberbullying:

  1. United States: Many states in the US have laws that specifically address cyberbullying. For example, some states have laws that prohibit cyberbullying in schools or on school property, while others have laws that make it a crime to use electronic communication to harass or intimidate another person.
  2. Canada: The Canadian Criminal Code includes provisions that make it a crime to use any form of electronic communication to harass, threaten, or intimidate another person. This includes cyberbullying.
  3. United Kingdom: The Protection from Harassment Act 1997 makes it a criminal offence to harass or stalk another person, including through online communication. The Malicious Communications Act 1988 and the Communications Act 2003 also make it illegal to send threatening or abusive messages online.
  4. Australia: Under section 474.17 of the Commonwealth Criminal Code Act 1995, it is a criminal offence to use a carriage service to menace, harass or cause offence, which can include cyberbullying. Additionally, some Australian states have specific laws that address cyberbullying.
  5. New Zealand: The Harmful Digital Communications Act 2015 makes it a criminal offence to use electronic communication to cause harm, including emotional distress, to another person. This includes cyberbullying.
There are also regulatory recommendations for tech companies on preventing cyberbullying. For example:
  1. The European Commission recommends that tech companies implement age verification systems to prevent minors from accessing harmful content, and provide effective reporting mechanisms to enable users to report harmful content or behaviour.
  2. The United Kingdom's Online Harms White Paper recommends that tech companies take a range of measures to prevent online harm, including implementing age verification mechanisms, moderating content, and providing effective reporting mechanisms.
  3. The US Federal Trade Commission's COPPA Rule requires that websites and online services that collect personal information from children under the age of 13 obtain verifiable parental consent before collecting personal information, and post clear and accessible privacy policies.
  4. The Australian eSafety Commissioner provides guidance for social media and technology companies on how to implement safety features and policies to prevent cyberbullying and other forms of online harm.
  5. The Canadian Centre for Child Protection provides guidelines for tech companies on how to prevent child sexual exploitation and cyberbullying, including implementing age verification systems and providing effective reporting mechanisms.
If you would like to give your feedback, please fill this form.

Harmful Content Laws + Recommendations

Many countries have laws and regulations that are designed to protect children from harmful content online. These laws typically cover a range of content, including violent or sexually explicit material, hate speech, and other types of content that may be inappropriate for children.

Some examples of laws and regulations that protect children from harmful online content include:

  1. The Children's Online Privacy Protection Act (COPPA) in the United States, which requires website operators to obtain verifiable parental consent before collecting personal information from children under the age of 13, and to post clear and accessible privacy policies.
  2. The Audiovisual Media Services Directive in the European Union, which regulates audiovisual media services, including online platforms, and requires that content providers take appropriate measures to protect minors from harmful content.
  3. The Online Safety Act in Australia, which requires social media platforms and other online services to remove harmful content, including cyberbullying and child sexual abuse material, and to take steps to prevent the spread of such content.
  4. The Harmful Digital Communications Act in New Zealand, which makes it a criminal offence to use electronic communication to cause harm, including emotional distress, to another person.
  5. The Digital Economy Act in the United Kingdom, which includes provisions to protect children from harmful online content, including requiring age verification for access to pornographic material.
Recommendations

There are many recommendations for tech companies to ensure that children are not exposed to harmful content online. Some of the most common include:
  1. Age verification: Implement age verification systems to ensure that children are not exposed to content that is not appropriate for their age.
  2. Content filtering: Use automated systems or human moderators to filter out content that may be inappropriate for children, such as violent or sexually explicit material (a minimal filtering sketch follows this list).
  3. Parental controls: Provide parents with the ability to set controls and filters on their children's devices or accounts to limit their access to certain types of content.
  4. Reporting mechanisms: Provide easy-to-use and effective reporting mechanisms that allow users to flag content that is inappropriate or harmful, including cyberbullying or hate speech.
  5. Privacy protections: Ensure that children's privacy is protected online by implementing strong data protection policies and providing clear and accessible privacy notices.
  6. Education and awareness: Work with educators and child welfare advocates to develop programs that promote online safety and teach children and parents how to identify and avoid harmful content.
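As a rough illustration of the content filtering recommendation (item 2), the sketch below combines a keyword blocklist with a stand-in classifier and applies a stricter threshold for minors. The blocklist terms, thresholds, and the `toxicity_score` stub are assumptions for illustration; a real deployment would call a trained moderation model and send borderline cases to human moderators.

```python
BLOCKLIST = {"example_slur", "example_threat"}   # placeholder terms

def toxicity_score(text: str) -> float:
    """Stub for a trained classifier returning the probability that
    the text is harmful; a real system would call a moderation model."""
    return 0.9 if any(term in text.lower() for term in BLOCKLIST) else 0.1

def moderate(text: str, viewer_is_minor: bool) -> str:
    """Return an action for a piece of content: block, review, or allow."""
    score = toxicity_score(text)
    block_at = 0.7 if viewer_is_minor else 0.9   # stricter cutoff for minors
    if score >= block_at:
        return "block"
    if score >= 0.4:
        return "review"   # route to a human moderator
    return "allow"
```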
If you would like to give your feedback, please fill this form.

Parental Involvement

Several laws and regulations encourage parental involvement in children's online activities. Some of the most common examples include:

  1. The Children's Online Privacy Protection Act (COPPA) in the United States, which requires websites and online services to obtain verifiable parental consent before collecting personal information from children under the age of 13.
  2. Australia's Online Safety Act, administered by the eSafety Commissioner, which requires online service providers to notify parents and guardians when their child has reported harmful content or conduct, and to provide them with information on how to manage their child's online safety.
  3. The United Kingdom's Digital Economy Act, which requires online pornography providers to verify the age of their users, and to take steps to prevent children from accessing adult content.
  4. The New Zealand Harmful Digital Communications Act, which requires online content hosts to provide a complaints process for harmful digital communications, and to notify parents or guardians if a complaint has been made about their child.
  5. The European Union's Audiovisual Media Services Directive, which requires content providers to take appropriate measures to protect minors from harmful content, and encourages parents to use parental control tools to monitor their children's online activities.
Some countries have been notably successful in encouraging parental involvement in their children's online activities. Here are a few examples:
  1. South Korea: South Korea has a law called the Youth Protection Revision Act, which requires all smartphones sold to minors to have a parental control app pre-installed. The app allows parents to set time limits for phone use, block specific apps or websites, and monitor their child's online activity (a minimal sketch of such controls follows this list).
  2. Denmark: Denmark has a program called "Digitalt Forældreskab" (Digital Parenting), which provides parents with information and resources to help them navigate their children's digital lives. The program includes workshops, online courses, and a website with tips and advice for parents.
  3. New Zealand: New Zealand's Netsafe organization provides resources and support for parents to help them keep their children safe online. Netsafe offers a free online safety helpline for parents, as well as workshops and resources on topics such as cyberbullying, social media, and online gaming.
  4. United States: The Family Online Safety Institute (FOSI) is a non-profit organization that works to promote online safety and privacy for children and families. FOSI provides resources and information for parents, including a "Good Digital Parenting" guide with tips for managing children's online activities.
  5. United Kingdom: The UK Safer Internet Centre is a partnership of organizations that work together to promote online safety and digital citizenship for children and young people. The center provides resources and support for parents, including a helpline and a range of online safety guides and resources.
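To make the South Korean model (item 1 above) concrete, here is a minimal Python sketch of a parent-defined usage policy. The field names, defaults, and lockout window are illustrative assumptions, not any real app's API.

```python
from dataclasses import dataclass, field

@dataclass
class ParentalControls:
    # Hypothetical policy fields mirroring the pre-installed Korean apps.
    daily_limit_minutes: int = 120
    blocked_apps: set[str] = field(default_factory=set)
    bedtime_lockout: tuple[int, int] = (22, 6)   # locked 22:00 to 06:00

def is_usage_allowed(policy: ParentalControls, app: str,
                     minutes_used_today: int, hour_now: int) -> bool:
    """Check an app launch against the parent-defined policy."""
    if app in policy.blocked_apps:
        return False
    if minutes_used_today >= policy.daily_limit_minutes:
        return False
    start, end = policy.bedtime_lockout
    in_lockout = hour_now >= start or hour_now < end   # window wraps midnight
    return not in_lockout
```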
If you would like to give your feedback, please fill this form.

Reporting Mechanisms

Here are some good examples of online reporting mechanisms from across the globe:

  1. The UK Safer Internet Centre: The UK Safer Internet Centre provides an online reporting tool called the "Report Harmful Content" service, which allows users to report harmful or illegal online content, including cyberbullying, hate speech, and inappropriate sexual content.
  2. The Australian eSafety Commissioner: The Australian eSafety Commissioner has an online reporting system called "Report Cyberbullying", which allows users to report cyberbullying and other online safety concerns.
  3. The Canadian Centre for Child Protection: The Canadian Centre for Child Protection provides an online reporting tool called "Cybertip.ca", which allows users to report suspected child sexual abuse material, online luring, and other online safety concerns.
  4. Netsafe New Zealand: Netsafe New Zealand provides an online reporting tool called "The Orb", which allows users to report online safety concerns such as cyberbullying, scams, and online harassment.
  5. INHOPE: INHOPE is a global network of hotlines that work to combat online child sexual abuse material. The network provides an online reporting tool called "Report Illegal Online Content", which allows users to report suspected child sexual abuse material and other illegal online content.
Tech platforms also operate their own reporting mechanisms that allow users to report inappropriate or harmful content, behaviour, or activities. Here are some examples of platforms with good reporting mechanisms:
  1. Facebook: Facebook has a reporting system that allows users to report a wide range of issues, including hate speech, cyberbullying, and other types of harassment. The platform also provides users with tools to control their privacy settings and limit who can see their content.
  2. Instagram: Instagram has a reporting system that allows users to report a wide range of issues, including bullying, harassment, and inappropriate content. The platform also provides users with tools to control their privacy settings and limit who can see their content.
  3. Twitter: Twitter has a reporting system that allows users to report a wide range of issues, including harassment, hate speech, and other forms of abuse. The platform also provides users with tools to control their privacy settings and limit who can see their content.
  4. YouTube: YouTube has a reporting system that allows users to report inappropriate or harmful content, including cyberbullying and hate speech. The platform also provides users with tools to control their privacy settings and limit who can see their content.
The problem is that these mechanisms are largely trained for the US context; they need India-specific training, and they need to work closely with the police administration. A rough sketch of how reports might be triaged and routed follows.
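As one sketch of such a triage pipeline, the Python below routes the most serious report categories straight to a law-enforcement liaison queue. The category taxonomy and queue names are hypothetical; the urgency of escalating child sexual abuse material, however, follows from Section 67B of the IT Act and POCSO (see Current Laws below).

```python
from dataclasses import dataclass
from enum import Enum, auto

class Category(Enum):
    CSAM = auto()            # child sexual abuse material
    CYBERBULLYING = auto()
    HATE_SPEECH = auto()
    OTHER = auto()

@dataclass
class Report:
    reporter_id: str
    content_url: str
    category: Category
    details: str = ""

def route(report: Report) -> str:
    """Route a user report to a handling queue (queue names hypothetical)."""
    if report.category is Category.CSAM:
        # Most serious category: escalate to the police liaison immediately.
        return "police_liaison_queue"
    if report.category in (Category.CYBERBULLYING, Category.HATE_SPEECH):
        return "trust_and_safety_queue"   # e.g. a 24-hour review target
    return "standard_moderation_queue"
```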

If you would like to give your feedback, please fill this form.

Fines For Violating The Law

The fines imposed on tech platforms for violating child online safety laws vary by jurisdiction and by the specific laws violated. Here are some examples of fines that have been imposed on tech platforms in various countries:

  1. United States: Under the Children's Online Privacy Protection Act (COPPA), the Federal Trade Commission (FTC) can impose civil penalties of up to $43,280 per violation (a ceiling adjusted annually for inflation, which has since risen above $50,000) on tech platforms that fail to comply with the law.
  2. United Kingdom: Under the UK's Online Safety Act 2023, tech platforms that fail to comply with the law can face fines of up to 10% of their global revenue or £18 million, whichever is higher (see the worked example after this list).
  3. Australia: Under the Enhancing Online Safety Act 2015, tech platforms that fail to remove cyberbullying material targeted at an Australian child within 48 hours can face fines of up to AUD 111,000 per day.
  4. France: Under the Law on the Protection of Children Against Violence, tech platforms that fail to remove illegal or harmful content related to child pornography, violence, or terrorism can face fines of up to €250,000.
  5. Germany: Under the Network Enforcement Act (NetzDG), tech platforms that fail to remove illegal hate speech or other harmful content can face fines of up to €50 million.
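The UK's "whichever is higher" formula is easy to misread, so here is a small worked example in Python (the revenue figures are purely illustrative):

```python
def uk_max_fine_gbp(global_revenue_gbp: float) -> float:
    """Cap under the UK Online Safety Act: 10% of global revenue
    or GBP 18 million, whichever is higher."""
    return max(0.10 * global_revenue_gbp, 18_000_000)

# A platform with GBP 500 million in global revenue (illustrative):
print(uk_max_fine_gbp(500_000_000))   # 50000000.0 -> the 10% arm dominates
# A platform with GBP 50 million in global revenue:
print(uk_max_fine_gbp(50_000_000))    # 18000000.0 -> the GBP 18m floor applies
```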
If you would like to give your feedback, please fill this form.

Current Laws in India

  1. Information Technology Act, 2000: This act was amended in 2008 to include Section 67B, which deals with the publication or transmission of sexually explicit material depicting children. It also includes provisions to punish cyberbullying and online harassment.
  2. Protection of Children from Sexual Offences Act, 2012 (POCSO): This act criminalizes the sexual abuse and exploitation of children, including online sexual exploitation.
  3. Juvenile Justice (Care and Protection of Children) Act, 2015: This act provides for the care, protection, and rehabilitation of children in need of care and protection, including victims of online sexual abuse and exploitation.
  4. The Central Board of Secondary Education (CBSE) Cyber Safety Handbook: This handbook provides guidelines for students, parents, and teachers on how to stay safe online and how to deal with cyberbullying.
  5. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021: Issued by the Ministry of Electronics and Information Technology (MeitY) in February 2021, these rules regulate digital media and online platforms, including requirements for the removal of harmful content and the appointment of grievance officers to operate grievance redressal mechanisms.

Copyright © 2024 Social Media Matters. All Rights Reserved.