Welcome
Welcome to our eighth 2024 issue of Decoded - our technology law insights e-newsletter. We have a few announcements to pass along.
2025 BEST LAWYERS
We are very pleased to announce that 66 of the firm's attorneys were selected by their peers for inclusion on the 2025 Best Lawyers list, five were selected as Best Lawyers "Lawyers of the Year," and 19 others were selected as Best Lawyers "Ones to Watch." Recognition by Best Lawyers is based entirely on peer review. Its methodology is designed to capture the consensus opinion of leading lawyers about the professional abilities of their colleagues within the same geographical area and legal practice area. Click here to learn more.
FORMER WV COMMERCE SECRETARY JAMES BAILEY JOINS SPILMAN
Spilman is excited to welcome James Bailey, former WV Commerce Secretary, to the firm. James brings extensive public sector experience and will focus on government relations, government contracts, public finance, and general corporate law. His addition bolsters the firm’s already strong capabilities in navigating the complex intersection of business and government. Click here to learn more about James and his practice.
ABA ANNUAL LABOR AND EMPLOYMENT CONFERENCE - New York, NY, November 13-16
We are also looking forward to hearing from our Spilman colleagues as they present at the ABA Annual Labor and Employment Conference in New York City, November 13-16. The conference is the Section’s signature event of the year, offering cutting-edge, in-depth programs covering developments across the full range of labor and employment law topics, along with numerous networking opportunities. Spilman is pleased to be a sponsor of the Section and this exciting program! Click here to learn more.
As always, if you have any suggested topics you would like us to address here or in a webinar format, please let us know.
Thank you for reading.
Nicholas P. Mooney II, Co-Editor of Decoded, Chair of Spilman's Technology Practice Group, and Co-Chair of the Cybersecurity & Data Protection Practice Group
and
Alexander L. Turner, Co-Editor of Decoded and Co-Chair of the Cybersecurity & Data Protection Practice Group
“With broad extraterritorial reach, significant penalties of up to seven percent of worldwide annual turnover, and an emphasis on risk-based governance, the EU AI Act will have a profound impact on U.S. businesses that develop, use, and distribute AI systems.”
Why this is important: The EU AI Act, effective August 1, 2024, marks a major shift in AI regulation, impacting businesses globally, particularly in the U.S. The Act introduces a risk-based governance system with penalties of up to 7 percent of global revenue, and it focuses on high-risk AI systems, such as those used for biometric identification and in critical sectors like education, law enforcement, and healthcare. The Act applies to U.S. companies operating AI systems in the EU, requiring compliance with strict regulations on risk management, transparency, and data governance.
The Act defines an AI System as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that…infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The Act then classifies AI systems into four risk categories: prohibited, high-risk, limited-risk, and minimal-risk, with most attention on high-risk systems. Companies must implement measures to ensure the accuracy, security, and transparency of AI systems, especially those used in regulated products like medical devices or vehicles. High-risk AI providers must register their systems in an EU database and complete conformity assessments.
The legislation applies to U.S. companies across the AI value chain, even if they only develop or use AI systems with outputs used in the EU. Most rules for high-risk AI will take effect in August 2026, while additional regulations for certain products will come into force in 2027.
U.S. companies are advised to assess their AI use, review data quality, and prepare for evolving AI compliance laws to avoid significant fines and ensure smooth market operations.
If your business needs assistance understanding and complying with the new EU AI Act, the attorneys at Spilman Thomas & Battle are ready to help. Please reach out today to schedule a consultation so we can answer your questions. --- Shane P. Riley
“Hong Kong emerged as the fastest-growing Eastern Asian country in terms of global crypto adoption, with 40% of the region’s value received in stablecoins.”
Why this is important: The sun rises in the East for cryptocurrency enthusiasts. The outlook for digital currency is bright in Eastern Asia, according to new reports covering transactional and economic trends in the ongoing contest between fiat currency and cryptocurrency. One practical implication of digital currency and the corresponding platforms in the region is that stablecoins are emerging as a cheaper and faster alternative to traditional bank transfers, especially for cross-border transactions, which can be expensive for emerging economies. Where there is reliability and cost efficiency, capital usually follows.

Next, the adoption of crypto and stablecoins is most prominent in countries with persistent fiat currency devaluation and high inflation. As in every environment, unreliable tools yield to better ones. Though stablecoins promise precisely what emerging economies yearn for, they are still new, and their durability has yet to be tested. In the meantime, interest in and use of digital currencies is flourishing for understandable reasons, bolstered by consistent annual investment. In terms of cryptocurrency adoption, Hong Kong experienced over 85.6 percent growth, the largest year-over-year increase among Eastern Asian jurisdictions, followed by South Korea.

Finally, a split has emerged between major investor pools. Between July 2023 and June 2024, institutional investors, with a market influence of $1M-$10M, mainly used decentralized exchanges (DEXs) and other decentralized finance (DeFi) services, while professional investors, with a market influence of $10K-$1M, continued opting for centralized exchanges (CEXs). Will all of these forces combine similarly in other regions and create a boom in those markets? Who knows? What is clear is that opportunities to diverge from traditional reliance on central banks are becoming more plentiful, though far from universal. --- Sophia L. Hines
“This move comes after the company suffered a massive data breach in July when a threat actor named 'NullBulge' breached Disney's Slack platform and stole 1.1TB of data.”
Why this is important: When a corporation like Disney experiences a breach of important data, it loses trust in the platform involved, such as Slack. It is paramount that platforms housing important data take the necessary steps to ensure these types of breaches do not occur. A failure to be proactive in preventing breaches can cause corporations to leave a platform and migrate to different messaging options. A breach may not only affect the corporation directly involved but could also prompt other companies in the industry to consider switching platforms after learning about the incident. This in turn decreases traffic on the platform and hurts the overall business. And if a platform provider is negligent in managing its platform, it could potentially open itself to legal liability.
On the other side of the data protection coin, organizations that use a third-party platform for their messaging and collaboration need to know who they are doing business with. The platforms are not the only ones that can be proactive in preventing breaches. Organizations using a platform can vet it and properly examine the possible flaws in relying on it. By conducting thorough due diligence and identifying that a platform lacks adequate security measures, decision-makers can protect sensitive data and avoid exposing their organizations to risk by declining to partner with a vulnerable platform.
Disney’s move away from Slack following a significant data breach highlights the crucial role cybersecurity plays in corporate communications. This incident serves as a warning for both platform providers and users to prioritize data protection. While platforms must bolster their security measures to prevent breaches, organizations using these tools should also conduct thorough assessments before partnering with a third party. The pitfalls of a breach extend to both the platform and its users. While users have to deal with the breach of their information and the headaches that come with it, the platform's pitfalls are less obvious but significant. Failing to secure data can lead to a loss of trust, not only from the affected corporation but also from others, potentially impacting the platform’s business and user base across the industry. The loss of goodwill and trust can be compounded if a legal claim is brought and damages are awarded. --- Nicholas A. Muto
“The hacker has created Telegram bots to access data of 31,216,953 customers updated till July 2024 and 5,758,425 claims of the company available till early August.”
Why this is important: Corporate espionage by an insider is every company’s nightmare. It is even worse if the insider is purposefully sharing customers’ personal identifying information (PII) and protected health information (PHI) with bad actors. That was the case with Star Health, an insurance company in India. In this instance, it is alleged that Star Health’s own chief information security officer (CISO) colluded with bad actors to compromise the PII and PHI of 31 million insureds. The breach exposed insureds’ names, PAN numbers, mobile numbers, email addresses, policy details, birthdates, and confidential medical records. The breach was discovered only after third parties alerted Star Health to the possible compromise.
If this had happened in the U.S., this breach would have constituted a major HIPAA violation and would have triggered the reporting requirements outlined in the Breach Notification Rule (45 CFR §§ 164.400-414). Specifically, impacted individuals would have to be contacted without unreasonable delay, and in no case later than 60 days following the discovery of the breach. The breach notification would be required to include, to the extent possible, a brief description of the breach, a description of the types of information involved, the steps affected individuals should take to protect themselves from potential harm, a brief description of what the covered entity is doing to investigate the breach, mitigate the harm, and prevent further breaches, as well as contact information for the covered entity or business associate, as applicable. Because this breach would have impacted more than 500 residents of a state or jurisdiction, the covered entity or business associate would also be required to notify prominent media outlets serving the state or jurisdiction. Additionally, the covered entity or business associate would have to notify the Office for Civil Rights within the U.S. Department of Health and Human Services without unreasonable delay, and in no case later than 60 days following discovery of the breach.
It is well known that an organization’s employees are the greatest risk to the organization’s data because they have the most access, and most of the time, data breaches caused by employees are not malicious. That is why we recommend extensive training so that your employees can easily recognize threats to the organization’s data, developing a culture that values data security so that your employees feel safe reporting potential threats, and limiting access to data to only those who need the information to do their job duties. While these steps likely would not have prevented the breach Star Health experienced, because its own CISO purposefully provided access to the bad actor in exchange for a bribe, a culture that values data privacy and the reporting of threats may have minimized the harm by empowering a lower-level employee to recognize a possible problem and report it to other high-level executives. --- Alexander L. Turner
“As the technology makes rapid progress, observers say it's 'imperative' to establish the foundations for inclusive global institutions to track its evolution and establish safeguards.”
Why this is important: A U.N. advisory body has urged the creation of global governance frameworks for artificial intelligence (AI) to prevent its potential risks and ensure its benefits are widely shared. In a 100-page report, the group highlighted AI’s transformative potential, from advancing science and economic growth to improving public health and energy efficiency. However, it warned that unregulated AI could deepen inequalities, disrupt labor markets, and threaten global security, including the development of autonomous weapons.
The group recommended new global institutions to govern AI, rooted in international law and human rights, and called for a global AI fund to ensure the technology benefits both rich and poor nations. It also proposed establishing an international scientific panel to assess AI risks and capabilities and a "Standards Exchange" to ensure technical compatibility.
Currently, only a few countries are engaged in AI governance initiatives, such as the EU’s legal framework and recent G20 guidelines promoting ethical and transparent AI. The advisory body, which consists of 39 AI experts, emphasized that AI is too critical to be left to market forces or fragmented national regulations. While it didn’t recommend creating a U.N. agency for AI at this stage, the possibility remains under consideration. U.N. Secretary-General António Guterres praised the group’s work, urging global collaboration on AI governance.
The report was released ahead of the upcoming "Summit of the Future," which aims to address global challenges, including AI regulation. --- Shane P. Riley
“Cybersecurity experts note a significant rise in fake job offers, with over 245,500 people scammed last year, and a 25% increase in these scams from 2022 to 2023.”
Why this is important: Unemployed job seekers have become vulnerable targets of fake job listings. These fake listings look realistic thanks to generative AI, and there has been a significant increase in these types of scams: from 2022 to 2023, fake job listing scams rose 25 percent. There are a couple of reasons for the uptick. The first is how advanced AI has become at making these listings look realistic and attractive. The second is the number of layoffs that have occurred, especially in the tech sector. Those trying hard to find a job as quickly as possible are particularly vulnerable to these fake listings.
The scams often ask for upfront payments for job-related equipment, but the scammers are not solely interested in obtaining money; the scams are also designed to harvest personal information. A prospective employee may not think twice about handing over bank information, a driver's license number, or a Social Security number, because this is all information a real employer would legitimately request for purposes like payroll, contact records, and internal databases.
Employers and recruiters who are actively hiring need to act to prevent confusion between their real listings and AI-generated fakes. One step is to post listings directly on the organization's website, which lends them credibility. If a company limits the places where its job openings are posted, applicants know where the real listings are. When posting listings, a company can also emphasize that they will not appear anywhere else. These practices help ensure that applicants can trust the real listings.
Those seeking employment also need to be able to tell the difference between a real listing and a fake one. There are a few red flags to look for when examining a listing. If a job pays much more than you would expect, that is a red flag for a scam. If all correspondence for a job takes place entirely online, that is also a red flag. Asking as many questions as possible is one way to figure out whether a listing is real, and conducting thorough research is another way to avoid being tricked. In today's world, every business has an online footprint, and if the one being researched does not, it is probably not real. --- Nicholas A. Muto
“CISA attributed 2 in 5 successful intrusions to valid account abuse last year, but that is down from 2022.”
Why this is important: Access to valid accounts remained the most common and successful path for threat actors to gain access to critical infrastructure environments last year, according to several reports. One report found that almost one-third of all global cyberattacks arose from compromises of valid accounts; another explained that compromised legitimate account credentials were behind almost 40 percent of all ransomware attacks. Though that share declined from 2022, the reports still provide a significant warning. Whether or not a company is in the critical infrastructure space, the importance of guarding against account compromises cannot be overstated. Good password hygiene, multi-factor authentication, regular training, and simulated phishing attacks are all important parts of a company’s cyber defenses. If you have any questions about the measures your company has in place, contact Spilman’s cybersecurity and data privacy practitioners. --- Nicholas P. Mooney II
“Since 2016, the average number of medical AI device authorizations by the FDA per year has increased from two to 69, indicating tremendous growth in commercialization of AI medical technologies.”
Why this is important: AI and healthcare appear to be a great match. However, as with every new technology, the utilization of AI in the healthcare industry comes with its own concerns. Primarily, industry observers are apprehensive about the use of AI in healthcare because of patient privacy concerns, the possibility of bias, and questions about device accuracy. The accuracy concerns are exacerbated by the fact that the development of AI is outpacing the government’s ability to effectively regulate it. Since 2016, the number of AI devices authorized by the FDA has grown from two per year to 69 per year. The result is that, even with FDA approval, AI medical devices have not been sufficiently reviewed for clinical effectiveness using actual patient data. AI medical devices are evaluated using three different methods. Retrospective validation involves feeding the AI model image data from the past. Prospective validation is based on real-time patient data and is seen as more effective than retrospective validation. Finally, the evaluation method considered best is the randomized controlled trial, which uses random assignment to control for confounding variables that could differentiate the experimental and control groups. To ensure an accurate evaluation of the effectiveness of AI devices, researchers recommend that the “FDA and device manufacturers should clearly distinguish between different types of clinical validation studies in its recommendations to manufacturers.”
Researchers hope that the FDA will adopt their evaluation recommendations in order to improve AI medical device safety and effectiveness. --- Alexander L. Turner
“The global average cost of a data breach reached an all-time high of $4.45 million in 2023, which is a 15% increase over the past three years.”
Why this is important: As cyberattacks have grown in frequency and complexity, so too has the cost of these attacks to their victims. The global average cost of a data breach reached an all-time high of $4.45 million in 2023, a 15 percent increase over the past three years. The increase is largely attributed to the expense of dealing with a breach after it happens and to the profits lost because of it. While many different industries experience cyberattacks, the healthcare industry suffered the highest average breach costs. And as these breaches grow more complex, there continue to be more and more large breaches involving millions of records.
There are ways for those in the healthcare industry to combat these breaches and protect themselves. It is better to be proactive than reactive. Organizations that fail to stay on top of their cybersecurity measures will be prone to more breaches and will pay higher costs on the back end to clean them up; doing more and spending more on resources before a breach can mitigate the costs after one occurs. By staying vigilant, healthcare organizations can detect breaches much faster than with a hands-off, reactive approach. Breaches often go on for weeks before the appropriate parties notice and act; the average length of a breach in the healthcare industry is 213 days.

A perhaps surprising way to reduce costs is to decline to pay the ransom demanded by attackers: organizations that caved in and paid did not save any significant amount of money. Another way to mitigate costs is to use AI technology. AI threat detection and response systems can be integrated into an organization's software to fight cyberattacks, and using AI to detect breaches faster reduces the costs to those attacked. Finally, consolidating data into one environment helps organizations defend it. When data is stored across different environments, such as private clouds, public clouds, and on-site servers, attacks tend to happen more frequently and with more ease.
When thinking about cybersecurity and the potential data breaches organizations experience, it is important to think about how to save money, and it is equally important to think about how to protect your data from potential attacks. Both outcomes can be achieved: taking the necessary precautions and being proactive can save organizations money while at the same time making them less vulnerable to cyberattacks. --- Nicholas A. Muto
This is an attorney advertisement. Your receipt and/or use of this material does not constitute or create an attorney-client relationship between you and Spilman Thomas & Battle, PLLC or any attorney associated with the firm. This e-mail publication is distributed with the understanding that the author, publisher and distributor are not rendering legal or other professional advice on specific facts or matters and, accordingly, assume no liability whatsoever in connection with its use.
Responsible Attorney: Michael J. Basile, 800-967-8251