Reflecting the movement to toughen data security laws on a state-by-state basis, on July 25, 2019, Governor Cuomo signed into law the Stop Hacks and Improve Electronic Data Security Act (the “SHIELD Act” or the “Act”). The Act amends New York State’s current data breach notification law, which covers breaches of certain personally identifiable computerized data (referred to in the New York breach law as “Private Information”). The Act also breaks new ground by imposing substantive data security requirements on businesses that own or license the Private Information of New York residents, regardless of whether the businesses otherwise conduct business in New York State. Both portions of the Act are backed by potential civil penalties for noncompliance.
GDPR fines, it seems, are like buses: you wait over a year for enforcement action by the UK’s data supervisory authority, the ICO, and then two come along at once – and with quite dramatic effect.
Earlier this month, the FTC sent a letter to Wildec, LLC, the Ukraine-based maker of several mobile dating apps, alleging that the apps were collecting the personal information and location data of users under the age of 13 without first obtaining verifiable parental consent or otherwise complying with the Children’s Online Privacy Protection Act (COPPA). The letter pressed the operator to delete personal information on children (and thereafter comply with COPPA and obtain parental consent before allowing minors to use the apps) and to disable any search functions that allow users to locate minors. The letter also advised that the practice of allowing children to create public dating profiles could be deemed an unfair practice under the FTC Act. Following the FTC’s allegations, the three dating apps in question were removed from Apple’s App Store and Google’s Google Play Store – a demonstration of the real-world effects of mere FTC allegations, and a response that might ultimately compel Wildec, LLC to comply with the statute (and cause other mobile apps to reexamine their own data collection practices). Wildec has responded to the FTC’s letter by “removing all data from under age accounts” and now prevents minors under the age of 18 from registering on the dating apps.
Senate Bill 561’s smooth sailing through the California legislature came to an end on Thursday, May 16. On the eve of the deadline for all fiscal committees to hear and report on the bills introduced in their house, the Senate Appropriations Committee decided to hold the bill. As a result, SB 561 will not pass out of the Senate this session.
Notably, the controversial bill was a proposed CCPA amendment that threatened to expand the Act’s private right of action by allowing consumers to bring actions when any of their CCPA rights were violated. Currently, the CCPA only permits consumers to bring actions in the event of a data breach. For a detailed review of the CCPA, please view our previous posts.
The bill would have also affected the way the Attorney General could enforce the CCPA. The CCPA gives businesses that have violated the Act 30 days to cure the violation, and allows businesses and third parties to request guidance from the Attorney General on how to comply with the Act. The bill would have eliminated the 30-day cure window and would have put the onus on the Attorney General to promulgate any guidance. A more detailed review of SB 561 is available in a previous post.
While this does not necessarily mean it’s the end of the road for SB 561 – after all, the CCPA was resurrected from an inactive file – it does mean goodbye for now.
In late March, the French Data Protection Authority, Commission Nationale de l’Informatique et des Libertés (“CNIL”) released a model regulation (the “Model Regulation”) governing the use of biometric access controls in the workplace. Unlike many items of personal information, biometric data (such as a person’s face or fingerprints) is unique and, if stolen or otherwise compromised, cannot be changed to avoid misuse. Under Article 9 of the GDPR, biometric data collected “for the purpose of uniquely identifying a natural person” is considered “sensitive” and warrants additional protections. The GDPR authorizes Member States to implement such additional protections. As such, the French Data Protection Act 78-17 of 6 January 1978, as amended, now provides that employers – whether public or private – wishing to use biometric access controls must comply with binding model regulations adopted by the CNIL, the first of which is the Model Regulation.
Per our previous post, the European Parliament and the Member States agreed to adopt new rules that would set the standard for protecting whistleblowers across the EU from dismissal, demotion, and other forms of retaliation when they report breaches of various areas of EU law. According to a press release issued by the European Parliament on April 16, 2019, the Parliament approved these changes by an overwhelming majority. The new rules require employers to create safe reporting channels within their organizations, protect whistleblowers who bypass internal reporting channels and directly alert outside authorities (including, under certain circumstances, the media), and require national authorities to provide independent information regarding whistleblowing. This legislation marks a significant departure from the jurisdiction-specific approach that has resulted in disparate protection across Europe, with some jurisdictions, like Germany and France, offering relatively limited protection when compared to other jurisdictions, such as the UK. These changes, if approved by the EU ministers, will set a uniform baseline and therefore considerably increase whistleblower protections in the EU. Member States will have two years to achieve compliance. For an additional discussion of the implications of this legislation, see this article by The New Times. We will continue to monitor this development.
Unwanted robocalls reportedly totaled 26.3 billion calls in 2018, sparking more and more consumer complaints to the FCC and FTC and increased legislative and regulatory activity to combat the practice. Some automated calls are beneficial, such as school closing announcements, bank fraud warnings, and medical notifications, and some caller ID spoofing is justified, such as for certain law enforcement or investigatory purposes and domestic violence shelter use. However, consumers have been inundated with spam calls – often with spoofed local area codes – that display fictitious caller ID information or circumvent caller ID technology in an effort to increase the likelihood that consumers will answer, or to otherwise defraud them. To combat the rash of unwanted calls, Congress and federal regulators advanced several measures in 2019, and states have tightened their own telecommunications privacy laws in the past year. For example, within the last week, the Arkansas governor signed into law S.B. 514, which boosts criminal penalties for illegal call spoofing and creates an oversight process for telecommunications providers.