Welcome to the sixth issue of Decoded for 2023.
Please join us in welcoming Michael W.S. Lockaby to the firm. As a Member in the Roanoke office, Mike's primary areas of practice include representing local governments and public entities in technology and broadband matters, including financing and construction, permitting and taxation, franchising, and receipt and management of state and federal broadband grants. He is Secretary of the Board of Directors of the Virginia Association of Telecommunications Officers and Advisors.
We are also very pleased to announce that several of the firm’s practice groups and attorneys were recognized in the 2023 edition of Chambers USA, a directory of leading law firms and attorneys. Chambers and Partners annually researches the strength and reputation of law firms and individual lawyers across the globe. The research process for the United States includes interviewing lawyers and their clients, including influential general counsel at Fortune 100 companies, high-profile entrepreneurs, and significant purchasers of legal services. Considerable credence is given to the opinions of clients. You can learn more by clicking here.
We hope you enjoy this issue and, as always, thank you for reading.
Nicholas P. Mooney II, Co-Editor of Decoded, Chair of Spilman's Technology Practice Group, and Co-Chair of the Cybersecurity & Data Protection Practice Group
and
Alexander L. Turner, Co-Editor of Decoded and Co-Chair of the Cybersecurity & Data Protection Practice Group
“The threat organizations face with GenAI is not new, but it could speed how quickly private data reaches a wider audience.”
Why this is important: Generative artificial intelligence, or GenAI, is on its way to becoming mainstream, if it isn’t already there. The ubiquity of consumer-facing GenAI resources has sparked a debate over how much it will change the way people interact with computer technology. Notable figures, from Bill Gates and Elon Musk to Megan Fox, have voiced concerns about various aspects of GenAI. However, this article argues that the end isn’t necessarily near. The article, and the reports it references, discuss the risks GenAI poses to companies and put those risks into perspective. Those risks include bias and discrimination inherent in some information fed into GenAI platforms, insecure code generation, and legal issues like inadvertent copyright violation. The article also discusses another potential risk: compromise of data privacy. GenAI platforms invite users to input data, including personal private data or company proprietary data. While the fear that inputting proprietary data makes it immediately available to competitors isn’t exactly correct, that data still can become available to competitors in a short period of time. Companies need to be deliberate about how, when, and what company information employees are permitted to input into GenAI platforms. The same holds true for inputting data about vendors, customers, or others. One commentator noted that GenAI didn’t create a new worry about the compromise of data. Most companies already spend considerable time, money, and resources to keep data safe. What GenAI does is give employees an easy way to accidentally disclose too much in the pursuit of better job performance. That commentator summed it up well: “AI isn’t the key threat here. Human behavior is.” As noted, companies need to be deliberate about how their employees are permitted and prohibited from interacting with GenAI about company business.
We suggest that companies implement written policies and procedures to address these issues. Contact Spilman’s Technology Law Practice Group if you’d like to learn more. --- Nicholas P. Mooney II
“‘There is a quick profit to be made by using these programs but we need to look at what is going to be the long-term cost,’ Rep Chris Pielli said.”
Why this is important: Pennsylvania lawmakers have introduced a series of bills to regulate and monitor the use of AI in the state. The proposed legislation aims to understand AI better and address potential negative impacts. One bill calls for the establishment of a task force to study the need for a state agency to monitor and license AI products. Another bill focuses on requiring AI-generated content to carry disclosures to prevent the dissemination of false information. The lawmakers also aim to criminalize the unauthorized dissemination of AI-generated deepfakes. Several other states are considering similar regulations to ensure the responsible use of AI and address issues such as bias, discrimination, consumer protection, and data privacy. However, regulating AI is challenging due to its diverse applications, rapid evolution, and potential impact on job markets. Instead, experts suggest focusing on regulating the industries where AI is employed, implementing risk management practices, and minimizing catastrophic outcomes. Businesses in Pennsylvania and across the nation should stay up to date on, and involved in, the formation of this new area of law, as it will have wide-reaching policy implications in many sectors. --- Shane P. Riley
“As of now, shopping with the help of generative A.I. isn't quite frictionless.”
Why this is important: A post-purchase survey company discovered a trend over the last few months: customers are first hearing about products or brands through ChatGPT. ChatGPT is OpenAI’s language model that interacts in a conversational way. Through its dialogue format, customers can ask ChatGPT questions such as “what should I get my wife as a gift?” and follow up on its answers. ChatGPT may suggest a diamond necklace or make-up, and direct customers to relevant retail websites. Although ChatGPT is currently trained on data that cuts off in September 2021, interested retailers should act sooner rather than later because the general expectation is that ChatGPT and other generative AI models will soon be able to crawl the internet in real time to showcase products. Retailers should be aware, however, of the risks associated with ChatGPT, such as copyright infringement and data security. It is important for retailers to weigh these risks against the beneficial value of capturing new customers and increasing brand awareness through ChatGPT. --- Victoria L. Creta
“Major biometric damages claims are here to stay, and companies using biometric-based tech must watch out for new exclusions in their liability coverage.”
Why this is important: With the increase in biometric privacy laws throughout the country comes an increase in biometric damages claims against businesses that utilize biometric-based technology. This includes violations based on the use of employees’ fingerprints or retina scans. Awards under the Illinois Biometric Information Privacy Act (“BIPA”) have increased following recent rulings by the Illinois Supreme Court, which held that BIPA claims are subject to a five-year statute of limitations and that each collection of an individual’s biometric information is a separate violation. The result is an increase in the number of viable claims and an explosion in economic exposure. Thus far, insurers’ attempts to deny coverage in biometric liability cases based on existing policy exclusions have largely failed. However, to combat the increase in awards and settlements related to violations of biometric privacy laws, insurance companies are introducing more specific exclusions in their general liability and employment practices liability policies. Therefore, it is imperative that businesses and others subject to biometric privacy laws closely examine these types of policies when obtaining or renewing them and know what they will and will not cover. Entities operating in states with strict biometric privacy laws should consider obtaining additional coverage that expressly covers biometric privacy claims. --- Alexander L. Turner
“Prolonged spaceflight can result in many of the same physiological changes associated with aging, only at a much quicker rate.”
Why this is important: This project is one of the most recent additions to the research portfolio being conducted at the International Space Station’s National Laboratory. The ISS National Lab first launched its research initiative in 2013 with a focus on stem cell research, but over the years those initial projects have developed into highly specialized investigations of targeted therapeutics. The tissue chips, which launched with SpaceX CRS-27, will evaluate cardiac tissues. The project hopes to leverage the effects of prolonged spaceflight to accelerate data collection on tissue structures that would normally require many years using more traditional research techniques. Because exposure to a microgravity environment has been shown to mimic the effects of aging, researchers are hopeful that analysis of the tissue chips aboard the ISS National Laboratory will enable them to explore both cardiac function preservation and cardiac disease progression over time. This and other research included in the payload of CRS-27 reflect a positive development in the type of public-private partnerships that NIH and other agencies can support. Such partnerships and collaborations remain a critical element of technological advancement and should be supported. --- Brian H. Richardson
“Barron's deputy editor Ben Levisohn, reporter Carleton English and associate editor Jack Hough discuss the impact of artificial intelligence on the music industry on 'Barron's Roundtable.'”
Why this is important: This roundtable discusses the widespread issues facing the music industry with the rise of AI-generated works, ranging from Paul McCartney’s use of AI to incorporate John Lennon’s real voice into newly released Beatles songs to the full mimicry of popular artists such as Drake and The Weeknd. The use of AI to generate music raises monumental questions regarding the copyright implications, the most basic of which is: who owns the copyright? In recent formal guidance, the Copyright Office confirmed its position that AI-generated works made without human intervention are not copyrightable, but also clarified that they may become copyrightable if there is sufficient human involvement and authorship in the end product.
The broader and more nuanced question is: what rights can artists enforce against AI-generated music mimicking their voices? In a recent conversation with Harvard Law Today, IP expert Louis Tompros explained that this issue is both complex and unresolved. The process of AI training necessarily includes copying an artist’s music to create a derivative work and, thus, could amount to copyright infringement. However, creating music in the style of another artist is not considered infringing activity and could be protected by the fair use doctrine, which stems from First Amendment rights. Additionally, and most basically, AI-generated songs are new songs, so there is no direct copying in these cases.
Tompros further explains that rights of publicity provide a clearer route to remedy in these circumstances. Under Ninth Circuit precedent, the intentional imitation of a distinctive professional singer’s voice for commercial purposes violates the right of publicity under California law. The downside to these claims, however, is that state court litigation moves much more slowly than federal copyright takedown procedures, allowing the infringing work to remain available for months or years.
All in all, both state and federal courts should see a rise in copyright and right of publicity claims in the coming years, which should provide more case law and clarity in conjunction with the Copyright Office’s formal guidance on the issue of AI. --- Shane P. Riley
“While AI has streamlined operations significantly, its widespread use isn’t free from flaws.”
Why this is important: As we have discussed in previous issues of Decoded, AI is an emerging trend in the economy at large. Everything from students’ book reports to title-scene artwork on the latest Disney+ Marvel show, and now banking, utilizes AI. AI is built by having programs vacuum up information from public spaces on the internet, including individuals’ social media posts, which allows the AI to learn and evolve. Even the questions asked of an AI contribute to its algorithm. The concerns are what information is being used to construct AI algorithms, which implicates individual data privacy, and what impact AI will have on various industries.
The rise of the use of AI in the banking and financial industries has raised consumer protection and privacy concerns with the Consumer Financial Protection Bureau (“CFPB”) and U.S. Senators. Government officials are as concerned about AI being used as a tool to scam the public as they are about breaches of customers’ personal data and about unconscious biases programmed into an AI’s algorithm leading to discrimination. As the use of AI increases in the banking and finance industries, increased consumer protections will need to be implemented because traditional measures are insufficient in an era of AI. These include the development of data literacy tools that educate the public on how to spot an AI-generated scam, and increased government expertise on technological developments involving AI. --- Alexander L. Turner
“Applications are being sought by the U.S. Department of Commerce’s Economic Development Administration for the selection of 20 federally designated regional Tech Hubs across the country, with $500 million in appropriated funding support available in 2023 to help these hubs drive U.S.-based technology, innovation, and job growth.”
Why this is important: In August 2022, the CHIPS and Science Act (the “Act”) was signed into law, triggering a wave of investment and initiatives across several sectors of the U.S. economy. Regional governments and local authorities in various locales have spent considerable time and effort positioning themselves for designation as a Regional Technology and Innovation Hub under a program established by the Act. The Tech Hubs initiative focuses on strengthening capacity and capability within a geographic region, including manufacturing, commercial activity, and deployment, across 10 Key Technology Focus Areas, including artificial intelligence, biotechnologies, energy, and industrial efficiency. $500 million has been appropriated through the U.S. Economic Development Administration to launch the Tech Hubs program, and applications for selection in Phase 1 must be submitted by August 15, 2023. --- Brian H. Richardson
“Four West Virginia county school districts have installed or are in the process of installing new facial recognition technology.”
Why this is important: The use of facial recognition technology in education is not a new concept. Since the pandemic and the shift to remote learning, and more importantly remote testing, the use of facial recognition in education has increased exponentially. Four school districts in West Virginia are implementing facial recognition as an extra layer of security for their schools. The faces of staff, students, and regular visitors will be uploaded to the systems, which can also include databases of known sex offenders or anyone known to be a threat to the school. These systems will immediately notify school administrators of who is approaching the outside of the school. This type of system is also cost effective because it integrates with a school’s existing security cameras. This is all part of Governor Justice’s statewide school safety program, implemented earlier this year, which aims to use $2 million to ensure prompt law enforcement response in the event of an emergency. --- Alexander L. Turner
This is an attorney advertisement. Your receipt and/or use of this material does not constitute or create an attorney-client relationship between you and Spilman Thomas & Battle, PLLC or any attorney associated with the firm. This e-mail publication is distributed with the understanding that the author, publisher and distributor are not rendering legal or other professional advice on specific facts or matters and, accordingly, assume no liability whatsoever in connection with its use.
Responsible Attorney: Michael J. Basile, 800-967-8251