Welcome to our eighth issue of Decoded for 2023.
Please join us in welcoming our new Associate (North Carolina State Bar application pending) Malcolm Lewis to the firm. Malcolm's primary area of practice will be litigation with a specific interest in data privacy. He received his B.S. in Criminal Justice from Western Carolina University and his J.D., magna cum laude, from North Carolina Central University School of Law, and he holds the Certified Information Privacy Professional (CIPP/US) credential from the International Association of Privacy Professionals. Malcolm is a great addition to our Technology Practice Group!
We are very pleased to announce that 71 of the firm's attorneys were selected by their peers for inclusion on the 2024 Best Lawyers list, nine were selected as Best Lawyers "Lawyers of the Year," and 18 others were selected as Best Lawyers "Ones to Watch."
Recognition by Best Lawyers is based entirely on peer review. Its methodology is designed to capture the consensus opinion of leading lawyers about the professional abilities of their colleagues within the same geographical area and legal practice area.
We hope you enjoy this issue and, as always, thank you for reading.
Nicholas P. Mooney II, Co-Editor of Decoded, Chair of Spilman's Technology Practice Group, and Co-Chair of the Cybersecurity & Data Protection Practice Group
and
Alexander L. Turner, Co-Editor of Decoded and Co-Chair of the Cybersecurity & Data Protection Practice Group
| |
“A federal judge upheld a finding from the U.S. Copyright Office that a piece of art created by AI is not open to protection.”
Why this is important: U.S. District Judge Beryl Howell found that the bedrock of U.S. copyright law is human authorship and that the law has never been expanded to protect works of authorship generated by non-humans or by new forms of technology operating without the guidance and artistry of a human. The ruling came in response to a challenge by Stephen Thaler, CEO of the neural network company Imagination Engines, who had submitted for copyright registration an AI-generated piece of visual artwork titled “A Recent Entrance to Paradise.” Thaler had argued that AI should be recognized as an author, but the judge's ruling affirmed that copyright law is centered on human creativity.
Judge Howell did acknowledge that Thaler’s argument was correct to assert that copyright law is malleable enough to cover works that are created with or through the use of new technology and that Congress’ scope is not limited to print material. In fact, the Copyright Act provides that copyright attaches to “original works of authorship fixed in any tangible medium of expression, now known or later developed.” 17 U.S.C. § 102(a) (emphasis added). However, the underlying principle is that human creativity is being channeled through these new media and is what copyright is intended to protect.
The decision comes at a time when AI companies are facing legal challenges related to training their systems on copyrighted material. The U.S. Copyright Office had previously stated that while most AI-generated works are not copyrightable, AI-assisted creations could qualify for protection if a human creatively selects or arranges them. The decision also comes as AI-generated works are front and center in the Hollywood guild negotiations. The inability to copyright such works will be a major negotiating point for the human creators trying to limit the use of AI writers moving forward. --- Shane P. Riley
| |
“However, in our rush to embrace data-driven decisions, we risk falling victim to biased, incomplete, or just plain wrong conclusions.”
Why this is important: As the saying goes, the more the better. When it comes to big data, that is not always the case. As defined by Oracle, “big data is data that contains greater variety, arriving in increasing volumes and with more velocity. This is also known as the three Vs.” This information is so large, fast, or complex that it is difficult to process without specialized software. Organizations are using machine learning, a form of artificial intelligence, to organize and interpret big data in the decision-making process. However, the utilization of big data as the basis for organizational decision-making is not the panacea many believe it is. The results of big data analysis are only as good as the inputs, which include the quality of the data the AI uses to form its conclusions and even the algorithms it uses to process that data. Outdated or bad data produces incorrect conclusions, as do biases programmed into the processing algorithms, and analysts can easily mistake correlation for causation. This can lead to blind spots in decision-making and unintended outcomes. Reliance on AI alone to make decisions is not advisable. Human involvement is required to put the analyzed data into context and to avoid confirmation bias.
There are steps you can take before the final human analysis that will help with these inherent big data issues. An organization first has to know what data it has, why it has it, and what it is doing with it, because not all data is equal and bad data will skew the results. Organizations should conduct an annual audit of the data they hold to determine what they actually have and what they need based on how they want to use that data. Using the results of that audit, bad data should be discarded, including outdated data, incomplete data, and data that is irrelevant to the organization's intended use. If the data is going to be used for multiple purposes, then it needs to be organized by use and by who will have access to it. If the use of specific data changes, then this process must be repeated to account for the change. This process provides better inputs for the AI's big data analysis, while also giving the ultimate human decision maker greater confidence in the quality of the AI's conclusions.
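For organizations that track their data inventories programmatically, the audit criteria above reduce to a simple filtering pass. Below is a minimal Python sketch of that idea; the record fields, staleness threshold, and use categories are hypothetical placeholders rather than a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical record structure and threshold, for illustration only.
@dataclass
class DataRecord:
    name: str                 # e.g., "email", "purchase_history"
    last_updated: date
    complete: bool
    approved_uses: set = field(default_factory=set)

MAX_AGE = timedelta(days=730)  # assumed two-year staleness threshold

def audit(records, intended_use, today=None):
    """Partition records per the criteria above: discard anything
    outdated, incomplete, or irrelevant to the intended use."""
    today = today or date.today()
    keep, discard = [], []
    for r in records:
        outdated = (today - r.last_updated) > MAX_AGE
        irrelevant = intended_use not in r.approved_uses
        (discard if outdated or not r.complete or irrelevant else keep).append(r)
    return keep, discard
```

If a data category is later repurposed, rerunning the audit with the new intended use mirrors the repeat-the-process step described above. --- Alexander L. Turner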
| |
“’The investigation revealed that two former Tesla employees misappropriated the information in violation of Tesla’s IT security and data protection policies and shared it with the media outlet.’”
Why this is important: From application to termination, employee privacy considerations live throughout all stages of the employment lifecycle. Thus, employers should take heed of best practices and mechanisms when handling employee personal information far beyond the applicant screening stage of employment. However, is having policies relating to the security and protection of this employee data enough?
Privacy considerations prior to employment commonly include rules and best practices concerning background screenings, which the Fair Credit Reporting Act (“FCRA”) regulates. During the employment stage, privacy considerations commonly include substance testing, workplace surveillance, and employee misconduct. Of course, since the COVID-19 pandemic began at the onset of 2020 and encouraged many employers to implement work-from-home opportunities, issues relating to monitoring employee communications and tracking both employer-owned devices and personal devices used for work purposes have increasingly emerged. At the post-employment stage, a primary privacy consideration when ending the employment relationship with a former employee is properly restricting or terminating access to workplace facilities and devices containing physical and electronic information. What does that look like? That may include deactivating or returning identification badges, key fobs, and PINs that allow entry into the facility, keeping a record to ensure all employer-owned devices are returned, or implementing post-termination contracts.
Here, Tesla may have dropped the ball during the employment stage of the employment lifecycle. However, IT security and data protection policies were reportedly in place. So, what may have been the issue? This Tesla data breach comes after Reuters reported in April 2023 that a group of Tesla employees privately shared customer information, including videos and images recorded by car cameras, via internal messaging systems. According to The Guardian, a report by Handelsblatt, a German media outlet, stated that Tesla was failing to adequately protect access to customer and employee data. The Guardian further reported that Handelsblatt quoted a Tesla attorney blaming the 100-gigabyte data breach on a "disgruntled former employee" who worked as a service technician.
Insider data breaches like this are a potential consequence of failing to create and enforce an effective information management program. The four essential pillars companies should aim for when developing information management programs are (1) discovery, (2) build, (3) communication, and (4) evolution. Within the discovery pillar, analyzing and characterizing potential targets of privacy threats in order to discover ways to combat privacy risks is paramount for the security and protection of collected data. This risk analysis generally consists of inventorying collected personal information and classifying it according to its level of sensitivity; identifying flows of data in a manner that illustrates what data is at rest, in transit, or in use; limiting employee access to data based on the need to perform their jobs; and regularly conducting self-assessments for accountability purposes. The build pillar of an effective information management program should gather the results of the risk analysis to develop internal privacy policies for employees and contractors and external privacy notices for consumers based on fair information practices. These privacy policies and privacy notices should be documented and communicated to their respective audiences. Communication of privacy policies to internal employees may require additional training and certifications. Lastly, as privacy-related laws, market conditions, and company needs change and evolve, it is essential to have a review and update framework that allows for continued compliance, security, and protection.
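To make the discovery pillar concrete: the inventory-and-classify step pairs naturally with a deny-by-default access check that limits employees to the data they need for their jobs. The following Python sketch uses invented data categories, roles, and sensitivity tiers purely for illustration; it is not a description of any particular company's program.

```python
from enum import IntEnum

# Illustrative sensitivity tiers; a real program would define its own taxonomy.
class Sensitivity(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2
    RESTRICTED = 3   # e.g., biometric or in-vehicle camera data

# Hypothetical inventory: data category -> classification
INVENTORY = {
    "marketing_emails": Sensitivity.INTERNAL,
    "customer_payment_info": Sensitivity.CONFIDENTIAL,
    "vehicle_camera_footage": Sensitivity.RESTRICTED,
}

# Hypothetical role map implementing need-to-know access limits
ROLE_CLEARANCE = {
    "marketing_analyst": Sensitivity.INTERNAL,
    "billing_clerk": Sensitivity.CONFIDENTIAL,
    "incident_responder": Sensitivity.RESTRICTED,
}

def may_access(role: str, category: str) -> bool:
    """Deny by default; allow only when the role's clearance covers the data."""
    clearance = ROLE_CLEARANCE.get(role, Sensitivity.PUBLIC)
    return clearance >= INVENTORY.get(category, Sensitivity.RESTRICTED)

# A marketing analyst should never reach camera footage; an incident responder may.
assert not may_access("marketing_analyst", "vehicle_camera_footage")
assert may_access("incident_responder", "vehicle_camera_footage")
```

Logging every denied request in a check like this would also feed the self-assessment step, since unexpected access attempts are exactly what an accountability review should surface.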
Merely having policies in place is not enough. Taking these steps, in addition to other best practices, will help to prevent insider data breach incidents like this one. If you would like assistance formulating an effective information management program or have questions regarding best practices to ensure data protection and compliance with applicable laws, please do not hesitate to contact Spilman's Cybersecurity & Data Protection Practice Group. --- Malcolm E. Lewis
| |
“Scientists have found a way to use nanotechnology to create a 3D ‘scaffold’ to grow cells from the retina – paving the way for potential new ways of treating a common cause of blindness.”
Why this is important: Scientists from Anglia Ruskin University (“ARU”) have utilized nanotechnology to develop a 3D scaffold for growing retinal pigment epithelial (“RPE”) cells, which could lead to new treatments for a common cause of blindness. The team, led by Barbara Pierscionek, aimed to create RPE cells that remain viable for extended periods. RPE cells are vital for maintaining healthy vision, and damage to them can result in vision deterioration.
The researchers employed a technique called "electrospinning" to create a scaffold on which the RPE cells could thrive. This scaffold was treated with fluocinolone acetonide, a steroid that protects against inflammation and promotes cell resilience and growth. This breakthrough approach has the potential to revolutionize treatment for age-related macular degeneration (“AMD”), a significant cause of blindness in the developed world, particularly as the aging population grows.
AMD can result from changes in the Bruch's membrane, which supports RPE cells, and the breakdown of the adjacent choriocapillaris. In Western populations, the accumulation of lipid deposits called drusen and subsequent degeneration of RPE cells and the outer retina is a common cause of sight deterioration. In the developing world, AMD is often caused by abnormal blood vessel growth in the choroid, leading to hemorrhaging and detachment of RPE cells.
The researchers' approach offers promise for effective treatment of AMD and other sight-related conditions. By using nanofiber scaffolds treated with anti-inflammatory substances, RPE cell growth, differentiation, and functionality are enhanced. Unlike traditional flat surfaces, the 3D environment of the scaffold promotes cell growth. This technology could serve as a substitute for the Bruch's membrane, providing a stable support for RPE cell transplantation, potentially benefiting millions worldwide who suffer from eye diseases like AMD. --- Shane P. Riley
| |
"Moreover, AI is already deployed by over a third of companies according to the 2022 IBM Global AI Adoption Index and at least 40% of other companies are considering potential uses.”
Why this is important: This article discusses what it sees as an inevitable step in society's development of artificial intelligence: cybersecurity threat response being managed by AI, with human oversight only at the macro level. This comes about as AI is arming those who wish to do harm with greater and faster tools. Simply put, it's an AI arms race, and humans will not be able to respond quickly enough to thwart every attack. The article turns to a different field to provide an example. In 2010, automated sale orders caused the stock market to suffer a “flash crash” that resulted in the Dow Jones losing almost 1,000 points in 36 minutes. The automated sale orders executed too quickly for any human to intervene. As the article puts it, $1 trillion in market value was lost due to “interacting algorithms.” Imagine if the sale orders were cyberattacks. The article anticipates a time when attacks occur too frequently and too quickly for any human to intervene. The benefit of cybersecurity threat response being managed by AI is efficiency, and the article discusses several other benefits to the implementation of an AI-managed response. However, this approach is not without concerns, and one of the biggest is the fact that studies show people are all too willing to assume AI has human qualities. The article cautions that AI systems are “nothing more than narrow-but-powerful exercises in task performance.” Until artificial general intelligence is created, any system employing AI, including cybersecurity threat response, will need to include humans to execute those oversight functions that involve moral, parochial, and socio-economic values.
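The human-at-the-macro-level model the article describes can be pictured as an escalation tier in an automated response loop: the machine handles routine decisions at machine speed, and only high-impact calls wait for a person. Below is a minimal Python sketch of that pattern, with invented alert fields, actions, and thresholds.

```python
import queue

# Hypothetical severity score above which a human must decide.
HUMAN_REVIEW_THRESHOLD = 0.8

human_queue: "queue.Queue[dict]" = queue.Queue()  # macro-level oversight inbox

def respond(alert: dict) -> str:
    """Machine-speed triage with human judgment reserved for high-impact calls."""
    score = alert.get("severity", 0.0)
    if score < 0.3:
        return "log"           # benign: record and move on
    if score < HUMAN_REVIEW_THRESHOLD:
        return "auto_contain"  # e.g., isolate a host or block an IP immediately
    human_queue.put(alert)     # high-impact: defer to human judgment
    return "escalate"

# Routine alerts are dispatched instantly; only the last one waits for a person.
for a in ({"severity": 0.1}, {"severity": 0.5}, {"severity": 0.95}):
    print(respond(a))
```

The design question the article raises is exactly where that threshold sits, and who remains accountable for the decisions made below it. --- Nicholas P. Mooney II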
| |
“Complexities in syncing patient records across healthcare organizations have raised safety concerns and have resulted in significant financial losses for ambulance services, hospitals, and health systems.”
Why this is important: Acadian Ambulance, one of the country’s largest private ambulance companies, has partnered with Duality Health to implement a face biometric patient identification system. The goal of this system is to sync all levels of health care providers to avoid denial of insurance claims. “Duality’s software uses face biometrics to create unique identifiers linked to patients’ medical record numbers. This approach ensures over 99.96 percent accurate patient identification, promoting better health data exchange between healthcare organizations, according to the announcement.” While this technology can protect revenue by reducing denied claims, it also comes with significant legal risks.
Before implementing any biometric collection technology, a company must first ensure that it complies with applicable biometric data privacy laws. The U.S. does not have a comprehensive federal data privacy or biometric data privacy law, which means that a company collecting biometric data has to comply with the individual biometric data laws in each state in which it operates. Acadian Ambulance operates out of Louisiana, Mississippi, Tennessee, and Texas. Louisiana and Texas have biometric data privacy laws, and the Mississippi Legislature has introduced a biometric privacy act for discussion and vote this year. Compliance with multiple states’ data privacy laws is difficult, and the consequences in the event of improper collection or a data breach can be considerable, including the fact that, in the event of a data breach of biometric data in Louisiana, Acadian Ambulance may be subject to private causes of action brought by affected patients. One way to manage having to comply with myriad data privacy laws in all of the states in which an organization operates is to identify the most stringent state laws that would apply to your organization’s data collection and apply those requirements across all of your operations.
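That strategy becomes mechanical once the relevant requirements are profiled state by state and the most protective value of each is adopted company-wide. A minimal Python sketch of the idea follows; every value in it is an invented placeholder for illustration, not a statement of any state's actual law.

```python
# Hypothetical per-state requirement profiles. All values are illustrative
# placeholders, not statements of Louisiana, Texas, or Tennessee law.
STATE_RULES = {
    "LA": {"written_consent": True,  "max_retention_days": 365, "breach_notice_days": 60},
    "TX": {"written_consent": True,  "max_retention_days": 730, "breach_notice_days": 60},
    "TN": {"written_consent": False, "max_retention_days": 730, "breach_notice_days": 45},
}

def strictest_baseline(rules: dict) -> dict:
    """Fold per-state rules into one company-wide policy by taking the
    most protective value of each requirement."""
    return {
        # consent required anywhere -> require it everywhere
        "written_consent": any(r["written_consent"] for r in rules.values()),
        # shortest permitted retention wins
        "max_retention_days": min(r["max_retention_days"] for r in rules.values()),
        # fastest notification deadline wins
        "breach_notice_days": min(r["breach_notice_days"] for r in rules.values()),
    }

print(strictest_baseline(STATE_RULES))
# -> {'written_consent': True, 'max_retention_days': 365, 'breach_notice_days': 45}
```

If you plan to implement biometric data collection and need assistance complying with multiple states’ biometric data privacy laws, or even one state’s biometric privacy laws, please contact a member of Spilman’s Cybersecurity & Data Protection Practice Group. --- Alexander L. Turner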
| |
“It gives you the right to deny and delete personal data from social media, requires companies to tell you if your online information is being used when making a major life decision, like buying a home.”
Why this is important: As with most puzzles, it can be difficult to see the whole picture until all the pieces have been placed together. From the perspective of large companies, the sectoral landscape of privacy laws in the United States presents a similar puzzle. In 2003, California became the first state to implement a data breach notification law, requiring businesses and state agencies to disclose when California residents' personal information was exposed in a security breach. Since then, multiple states have passed privacy laws, and to date, nine states have comprehensive data privacy laws. On June 6, Florida Governor Ron DeSantis added to this list by signing Senate Bill 262 to create the Florida Digital Bill of Rights (“FDBR”), which is scheduled to go into effect on July 1, 2024.
The FDBR resembles other newly enacted state privacy laws, but it includes several facets that add analytical steps when determining multi-state privacy compliance. The requirements appear to target large-scale technology and advertising companies and companies with a predominantly child-based audience. Setting aside the particular challenges stemming from the FDBR's added facets, large companies operating in multiple states are typically faced with challenges whenever individual states pass privacy laws that do not identically resemble other states' privacy laws. Because of these inconsistencies, companies are forced to decipher a patchwork of state privacy laws and handle personal information differently throughout their places of business based on the state in which they sit. Such an effort can be especially challenging in today's modern era, when companies have large online platforms that collect personal information from hundreds of thousands of consumers across the country. As more states enact privacy laws, it will be interesting to see if the United States maintains its sectoral landscape or transitions to an omnibus approach similar to the European Union and other countries. --- Malcolm E. Lewis
| |
“The Associated Press has issued guidelines for its journalists on use of artificial intelligence, saying the tool cannot be used to create publishable content and images for the news service.”
Why this is important: As organizations begin to regulate the use of artificial intelligence (“AI”), the Associated Press (“AP”) has issued guidelines for its journalists stating that AI cannot be used to create publishable content or images for the news service. At the same time, the AP encouraged journalists and staff members to use AI and become familiar with the revolutionary technology. The AP also will add a chapter to its Stylebook—a writing and editing reference for newsrooms, classrooms, and corporate offices worldwide—advising journalists how to experiment with the technology safely. While AI can create text, images, audio, and video on demand, it does not always distinguish fact from fiction, meaning such content must be carefully vetted. We can expect to see continuing updates from the AP and other organizations regarding the use of AI in their respective industries. --- Alison M. Sacriponte
| |
“The city’s universities — particularly Carnegie Mellon, which boasts a top robotics program — can play a role in these technological advancements, he said, pointing out that Pittsburgh was similarly at the forefront of autonomous vehicle technology.”
Why this is important: Astrobotic and ProtoInnovations, two local companies in Pittsburgh, are leading the transformation of the city into a hub for space technology. Astrobotic, based on the city's North Shore, is set to launch its lunar lander, Peregrine, which will be the first commercial mission to the moon and the first soft landing on the lunar surface by the U.S. since the Apollo program. Peregrine will carry various scientific instruments, including a rover from Carnegie Mellon University and micro-rovers from Mexico.
Astrobotic is also developing another lunar lander, Griffin, to deliver a NASA rover called VIPER, designed to search for water at the moon's south pole. ProtoInnovations is providing software that enables VIPER to autonomously navigate and perform tasks on the lunar terrain. This technology allows the rover to operate independently, compensating for communication delays between Earth and the moon.
Both companies have received grants from NASA's Tipping Point Program, aimed at advancing technology for space exploration. ProtoInnovations plans to create standardized software for autonomous rovers, reducing costs and enhancing efficiency. Astrobotic is also working on CubeRovers, exploring wireless recharging technologies for lunar equipment.
Pittsburgh's universities, particularly Carnegie Mellon, contribute to the city's prominence in space technology. The growth in this sector not only leads to job creation but also opens opportunities for local companies to provide components for space robotics. With increased NASA funding and renewed interest in space exploration, the space industry in Pittsburgh is set to flourish and contribute to technological advancement. --- Shane P. Riley
| |
This is an attorney advertisement. Your receipt and/or use of this material does not constitute or create an attorney-client relationship between you and Spilman Thomas & Battle, PLLC or any attorney associated with the firm. This e-mail publication is distributed with the understanding that the author, publisher and distributor are not rendering legal or other professional advice on specific facts or matters and, accordingly, assume no liability whatsoever in connection with its use.
Responsible Attorney: Michael J. Basile, 800-967-8251