1. Purpose & Scope
We (Wizapp, and our affiliated entities) are committed to providing a safe environment for all users, including minors. This Child Safety Standards Policy ("Policy") describes the rules, standards, and procedures we adopt to prevent the support, facilitation, or exposure of children to sexual abuse, exploitation, or endangerment in connection with the App and Service. This Policy is an integral part of our Terms & Conditions and is binding on all users, whether minors or adults.
This Policy applies to all content, interactions, messages, profiles, media, user-generated content, communications, and services made available via the App (mobile, web interface, APIs, etc.).
2. Definitions
For purposes of this Policy, the following definitions apply (in addition to definitions in the Terms):
Child / Minor: a person under the age of 18 (or the legal age of majority in the applicable jurisdiction).
CSAE: Child Sexual Abuse and Exploitation — includes any content, behavior, or communication that involves or promotes sexual abuse or exploitation of minors.
CSAM: Child Sexual Abuse Material — any content (images, videos, text, audio, etc.) that depicts or promotes sexual activity involving a minor, sexual exploitation of a child, sexualized content involving minors, or graphic sexual content involving minors.
Grooming / Predatory Behavior: attempts to exploit, manipulate, coerce, or build an inappropriate relationship with a minor for sexual or exploitative purposes.
Designated Child Safety Contact: the person or role within our organization responsible for receiving and acting on reports related to child safety.
3. Prohibited Content & Behavior
Users (and all content) must NOT engage in any of the following (this list is illustrative, not exhaustive):
Any form of CSAM.
Sexual content with minors, including messages, images, or other media that sexualize minors, depict minors in sexual situations, or encourage minors to produce sexual content.
Requests to minors for sexual content or exploitative acts, including solicitation of explicit images or sexual conversations with a minor.
Grooming, sextortion, or predatory behavior toward minors (e.g. sending sexual or suggestive messages, offering gifts or favors to gain trust, coercion).
Endangering or facilitating exploitation of minors, including encouraging dangerous or illegal acts involving minors.
Any content that encourages or normalizes sexual relations with minors, child prostitution, trafficking, or sexual slavery.
If any user engages in such behavior or uploads prohibited content, we will take immediate action (e.g. removal, suspension, reporting to authorities) as detailed below.
4. Moderation, Detection & Enforcement
4.1 Automated & Human Review
We employ a mix of automated tools (e.g. filters, pattern recognition, AI models) and human moderators trained to identify child safety risks and detect suspicious content or behavior. High-risk or flagged content is escalated for priority review.
4.2 Removal & Suspension
We will promptly remove or disable access to any content or account found in violation of this Policy. In serious or repeated cases, we may permanently ban the user from the App.
4.3 Reporting to Authorities / External Organizations
Where required by law or at our discretion, we will report CSAM, grooming, or other violations involving minors to relevant law enforcement agencies, regulatory bodies, or child protection hotlines / centers (e.g. NCMEC in the U.S. or a local equivalent).
4.4 Retention & Logging
We may retain logs, metadata, or backups of removed content for audit, investigation, or legal purposes, as permitted by law.
4.5 Staff Training
Our moderation, support, and security teams receive training in recognizing, handling, and escalating reports of CSAE, grooming, and related child safety issues.
5. In-App Reporting & Feedback
We provide an in-app mechanism (e.g., "Report" button, "Flag content" option) that allows any user to submit reports or feedback regarding suspected violations of child safety (including CSAE).
Users may also contact us via email or other support channels expressly provided for reporting child safety concerns (see "Contact" below).
We encourage users to report content or behavior even if they are uncertain it violates policy.
We will review, act on, and respond to such reports in a timely manner (e.g. within 24 hours, or as appropriate based on severity).
6. Child Safety Point of Contact
We designate a Child Safety Contact who is empowered to receive, evaluate, and act on reports of violations under this Policy. All such incidents must be escalated to this contact.
Name / Role: [Wizapp Support]
Email: [support@eternalenginecorp.com]
Response Time: We commit to acknowledge receipt of any report within 72-96 hours (or other reasonable timeframe) and to act in accordance with this Policy and applicable law.
7. Legal Compliance & Cooperation
We comply with all applicable laws, regulations, and reporting obligations related to child protection, child abuse, and sexual exploitation in every jurisdiction in which we operate.
We will cooperate with lawful requests or orders from law enforcement, courts, or regulatory bodies concerning child safety issues.
Where a local jurisdiction mandates specific reporting or handling (e.g. mandatory reporting to child protection agencies), we will follow those requirements.
8. Review, Updates & Transparency
We review this Policy and our child safety practices regularly (at least annually or more frequently if required).
Updates to this Policy will be posted publicly and users may be notified (e.g. via app update or notice).
We will maintain a publicly accessible version of our CSAE / child safety standards (e.g. via a web page link), which can be referenced in our Play Console submission as required by Google Play.
Where legally permissible, we will publicly (or internally) document summary statistics or counts of child safety reports, without revealing personally identifying information, to promote accountability.
9. Advertising, SDKs & Third-Party Content
If the App displays ads, all ad SDKs or mediation platforms used must be self-certified as appropriate for children or families, and comply with Google's Families Self-Certified Ads SDK requirements.
We do not permit ad content that is sexual, exploitative, or inappropriate for minors.
Any third-party content or embedded modules (e.g. social plugins, content feeds) must comply with this Policy; we will enforce removal or restriction if child safety violations are detected.
10. Applicability to Social / Community Features
If the App provides social features (chat, messaging, user profiles, media sharing, comments, etc.), we impose additional safeguards, including:
Age gating or parental consent mechanisms if minors may access interactive features.
Content filtering and moderation for user-uploaded images, video, text.
Restrictions or supervision of communication between minors and unknown adults.
Limits on direct contact requests or private messages from adults to minors (unless pre-approved or mediated).
Logging and review of communications or interactions that are flagged as suspicious or high risk.
11. Consequences of Non-Compliance
Violation of this Policy (by any user, or through any content) may lead to:
Immediate removal of content, suspension or termination of user account.
Reporting to law enforcement or child protection authorities.
Referral for civil or criminal liability, if applicable.
Denial of reinstatement unless substantial review and remediation occurs.
12. Jurisdictional Considerations & Age of Majority
Where local laws define "child" or "minor" differently, we will comply with the stricter standard.
If a user is legally a minor, their parent or legal guardian may have the right to access, delete, or review their content or account as required by law.
If local law restricts or mandates parental consent for certain features (e.g. collection of personal data, media content), we will enforce those restrictions in that jurisdiction.
13. Integration into Terms & Conditions
This Policy is part of the governing Terms & Conditions; in matters of child safety, any conflicting provisions of the Terms are superseded by this Policy.
Users warrant that they will comply with this Policy and not use the App to facilitate any prohibited activity involving minors.