Earlier this month, the FTC sent a letter to Wildec, LLC, the Ukraine-based maker of several mobile dating apps, alleging that the apps were collecting the personal information and location data of users under the age of 13 without first obtaining verifiable parental consent or otherwise complying with the Children’s Online Privacy Protection Act (COPPA). The letter pressed the operator to delete the personal information of children (and thereafter to comply with COPPA and obtain parental consent before allowing minors to use the apps) and to disable any search functions that allow users to locate minors. The letter also advised that the practice of allowing children to create public dating profiles could be deemed an unfair practice under the FTC Act. Following the FTC’s allegations, the three dating apps in question were removed from Apple’s App Store and Google’s Google Play Store, a demonstration of the real-world effects of mere FTC allegations that may ultimately compel Wildec to comply with the statute (and prompt other mobile apps to reexamine their own data collection practices). Wildec has responded to the FTC’s letter by “removing all data from under age accounts” and now prevents users under the age of 18 from registering on the dating apps.

In late March, the French Data Protection Authority, the Commission Nationale de l’Informatique et des Libertés (“CNIL”), released a model regulation (the “Model Regulation”) governing the use of biometric access controls in the workplace. Unlike most other items of personal information, biometric data (such as a person’s face or fingerprints) is unique and, if stolen or otherwise compromised, cannot be changed to avoid misuse. Under Article 9 of the GDPR, biometric data collected “for the purpose of uniquely identifying a natural person” is considered “sensitive” and warrants additional protections, which the GDPR authorizes Member States to implement. Accordingly, the French Data Protection Act 78-17 of 6 January 1978, as amended, now provides that employers – whether public or private – wishing to use biometric access controls must comply with binding model regulations adopted by the CNIL, the first of which is the Model Regulation.

Per our previous post, the European Parliament and the Member States agreed to adopt new rules that would set an EU-wide standard for protecting whistleblowers from dismissal, demotion, and other forms of retaliation when they report breaches of various areas of EU law.

Unwanted robocalls reportedly totaled 26.3 billion calls in 2018, sparking a growing volume of consumer complaints to the FCC and FTC and increased legislative and regulatory activity to combat the practice. Some automated calls are beneficial, such as school closing announcements, bank fraud warnings, and medical notifications, and some caller ID spoofing is justified, such as for certain law enforcement or investigatory purposes and for use by domestic violence shelters. However, consumers have been inundated with spam calls – often with spoofed local area codes – that display fictitious caller ID information or circumvent caller ID technology, either to increase the likelihood that consumers will answer or to otherwise defraud them. To combat the rash of unwanted calls, Congress and federal regulators advanced several measures in 2019, and states have tightened their own telecommunications privacy laws over the past year. For example, within the last week, the Arkansas governor signed into law S.B. 514, which boosts criminal penalties for illegal call spoofing and creates an oversight process for telecommunications providers.

On January 30, 2019, the Office of the New York Attorney General (“NY AG”) and the Office of the Florida Attorney General (“Florida AG”) announced settlements with Devumi LLC and its offshoot companies (“Devumi”), which sold fake social media engagement, such as followers, likes, and views, on various social media platforms. According to the NY AG, such social media engagement is fake in that “it purports to reflect the activity and authentic favor of actual people on the platform, when in fact the activity was not generated by actual people and/or does not reflect genuine interest.”

These settlements are the first in the United States to establish that selling fake social media engagement constitutes illegal deception and that using stolen social media identities to engage in online activity is unlawful. The NY AG emphasized that the New York settlement sends a “clear message that anyone profiting off of deception and impersonation is breaking the law and will be held accountable.”