“From AI being used in cyberattacks to the continued rise of ransomware and the growth of third-party threats, security must always be top-of-mind.”
Why this is important: Cybersecurity threats continue to evolve; as one threat is effectively addressed, new ones appear. The leading threats in 2025 include AI-based cyberattacks, the continued rise of ransomware, and third-party risks.
AI, as great a tool as it is for an organization’s productivity, can also be weaponized by bad actors to find and exploit vulnerabilities. These AI-enhanced cyberattacks will be more complex and harder to detect. Organizations also need to be aware that using AI may compromise their data, because information entered into an AI tool may be used to train it going forward. As has been discussed in previous issues of Decoded, organizations should have an AI policy in place regarding which AI, if any, can be used by employees; how that AI can be used; and which employees are permitted to use AI to augment their work product. Employees will also likely need further training to recognize more sophisticated AI-enhanced cyberattacks.
Ransomware attacks continue to be a leading cybersecurity threat, and they continue to evolve and become more damaging. Organizations need to stay one step ahead of bad actors by upgrading their security awareness, data backup and recovery, and vulnerability management. The implementation of multi-factor authentication and strict access controls is a must to counter these advanced ransomware attacks.
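To make the multi-factor authentication point concrete, here is a minimal sketch of verifying a time-based one-time password (TOTP), one common second factor. It uses the open-source pyotp library; the account name, issuer, and secret handling are illustrative assumptions, not a production design.

```python
# Minimal TOTP sketch using pyotp (illustrative only).
import pyotp

# Each user is provisioned a shared secret once, typically delivered as a
# QR code scanned into an authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Hypothetical account details for the provisioning URI.
print("Provisioning URI:",
      totp.provisioning_uri(name="user@example.com", issuer_name="ExampleOrg"))

# At login, after the password check succeeds, the user supplies the
# six-digit code from their app; the server verifies it against the secret.
submitted_code = input("Enter the six-digit code: ")
if totp.verify(submitted_code):
    print("Second factor accepted.")
else:
    print("Second factor rejected.")
```

In practice, the per-user secret would live in protected server-side storage, and the TOTP check would supplement, never replace, the primary credential.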
The third leading cybersecurity risk in 2025 is third-party risk: breaches of your organization’s vendors and customers that may compromise your data. To protect your organization against third-party risks, you should include cybersecurity requirements in your contracts with third parties. These requirements would include mandating certain cybersecurity practices, conducting annual cybersecurity audits, and requiring that the third party indemnify your organization in the event that your data is compromised by a cyberattack on the vendor. The contract should also address who owns the data you transmit to the vendor during the business relationship, and how your organization can secure the return of that data when the relationship ends.
If you need assistance in addressing these leading cyberattack threats in 2025, or reviewing your contracts in order to protect your data, please contact a member of Spilman’s Technology Practice Group. --- Alexander L. Turner
“With AI poised to play an ever-increasing role in our future, it’s critical for today’s organizations to understand the factors they will need to consider if they want to keep themselves—and their data—safe.”
Why this is important: As AI becomes an integral part of business operations, organizations must navigate the growing risks associated with its use. Organizations face challenges such as data leaks, security vulnerabilities, and cybercriminal exploitation, with threats increasing as AI is embraced widely and rapidly across industries. AI tools, particularly large language models (LLMs), pose significant risks of data compromise and leaks, which is a key challenge for organizations to overcome. Organizations using LLMs must ensure data privacy, and proper vetting of AI tools can help minimize privacy risks. However, many organizations lack the knowledge and expertise to vet AI providers and tools, leaving them vulnerable to breaches and the misuse of data. Another considerable challenge is implementing and maintaining effective security measures, such as restricting unauthorized data access and monitoring AI behavior, amid increasing threats. Cybercriminals are also leveraging AI tools, allowing them to manipulate systems, extract sensitive information, and automate cyberattacks with increased sophistication. The rise of these threats will require organizations to enact robust security measures and controls. Meanwhile, AI regulation is in its infancy as legal frameworks lag behind AI advancements. This landscape leaves unanswered questions about liability when AI tools contribute to criminal activity, about data protection, and about the ethical use of AI. It’s imperative that organizations take proactive steps to safeguard data, enforce security policies, and stay ahead of evolving compliance requirements to mitigate risks and adapt to an AI-centric future. --- Alison M. Sacriponte
“Big data initiatives are being affected by various trends.”
Why this is important: The big data landscape is evolving due to global technical and nontechnical forces. Organizations are seeking predictable costs and flexible architectures while adapting to stronger data privacy regulations across the world and the growing influence of AI. Key trends shaping the future of big data through 2025 include:
AI-Powered Analytics – AI enhances data analysis, automates data preparation, and enables business users to access insights through natural language interfaces. AI-driven systems will increasingly monitor and act on data autonomously. However, AI-powered analytics also bring challenges, including data governance, bias reduction, and responsible AI practices. Organizations must establish robust AI management frameworks to ensure reliability and fairness.
Privacy-Preserving Analytics – Methods like differential privacy and federated learning help analyze data while protecting sensitive information, addressing compliance concerns. These privacy-preserving methods are critical in industries like healthcare and finance, where data security and compliance with regulations (such as GDPR and HIPAA) are essential. Organizations are integrating these techniques into their analytics platforms to maintain trust and meet regulatory requirements; a minimal sketch of one such technique appears after this list.
Cloud Repatriation & Hybrid Cloud – Some organizations are moving workloads back on-premises or to private clouds to optimize costs and compliance, creating a balance between cloud and on-premises environments. Many companies are adopting hybrid cloud strategies, blending public and private cloud environments to optimize performance, security, and cost. This flexible approach allows organizations to store sensitive data on-premises while leveraging the cloud for analytics and AI workloads.
Data Mesh Adoption – Decentralizing data management empowers business domains to handle their own data products, improving efficiency and reducing IT bottlenecks. To succeed with data mesh, organizations need strong metadata management, clear accountability, and self-service analytics tools that enable non-technical users to work with data. Many companies are also integrating data catalogs to enhance discoverability and usability of decentralized data assets.
Data Lakehouse Dominance – Combining the flexibility of data lakes with the structure of data warehouses, lakehouses support diverse data types and AI-driven analytics while reducing redundancy. As enterprises seek efficiency and scalability, lakehouses are expected to remain central to data strategies for years to come.
Open Table Formats – Standards like Apache Iceberg improve large-scale data management, enhance interoperability, and reduce vendor lock-in in data lakehouse environments. By standardizing data storage and retrieval in data lakehouses, open table formats improve performance and simplify data management, making them an integral part of modern big data ecosystems.
Quantum Computing Preparations – While still emerging, quantum computing is influencing strategic planning for industries with complex data needs, such as pharmaceuticals and finance. As research progresses, breakthroughs in quantum computing could drive a new wave of innovation in big data analytics.
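As noted in the privacy-preserving analytics trend above, here is a minimal sketch of differential privacy, assuming the classic Laplace mechanism applied to a simple counting query. The dataset and epsilon values are illustrative assumptions, and real deployments must also track the cumulative privacy budget across queries.

```python
# Minimal differential-privacy sketch: the Laplace mechanism on a count query.
import numpy as np

def dp_count(records, predicate, epsilon=1.0):
    """Return a noisy count of records matching predicate.

    A count query has sensitivity 1 (adding or removing one record changes
    the true count by at most 1), so Laplace noise with scale 1/epsilon
    yields epsilon-differential privacy for this single query.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: report how many patients are over 65 without
# revealing whether any particular individual is in the dataset.
ages = [34, 71, 68, 45, 80, 59]
print(dp_count(ages, lambda age: age > 65, epsilon=0.5))
```

Smaller epsilon values add more noise and stronger privacy; the tradeoff between accuracy and protection is the central design decision.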
These seven trends illustrate a shift toward AI-driven, privacy-focused, cost-efficient, and decentralized data architectures. Organizations that embrace these changes will be well-positioned to harness the power of big data for competitive advantage in the years ahead. --- Shane P. Riley
“Vulnerabilities in certain Contec and Epsimed patient monitors can allow people to gain access and potentially manipulate the devices, the FDA warned.”
Why this is important: Not all cyberattacks target financial data. This article discusses the safety communication recently released by the Food and Drug Administration about cybersecurity vulnerabilities in certain patient monitors. The monitors display information such as a patient’s vital signs in healthcare and home settings. The vulnerabilities could allow a threat actor to remotely control the monitors, alter their configuration, stop them from working, and access patient data. The Cybersecurity and Infrastructure Security Agency warned that the potential for unauthorized users to alter the monitors “introduces risk to patient safety as a malfunctioning monitor could lead to improper responses to vital signs displayed by the device.” The FDA has advised healthcare facilities using those monitors to stop using them if the remote monitoring features that led to the vulnerabilities cannot be disengaged. This article underscores the importance for businesses, regardless of their industry, of staying current with information about potential cyber vulnerabilities and taking the steps necessary to secure the equipment they use or remove that equipment from use. --- Nicholas P. Mooney II
“Guidance Shares Strategies to Address Transparency and Bias, while Providing Key Considerations and Recommendations on Product Design, Development and Documentation”
Why this is important: Here in Decoded, we have continually discussed the cyber vulnerabilities associated with medical devices, including the cyberthreats associated with implantable, wearable, and other medical devices, and the call for the implementation of cybersecurity requirements throughout the life cycle of medical devices. With the evolution of medical devices to include AI, the FDA recently issued draft guidance for developers of AI-enhanced medical devices. If finalized, this will be the first comprehensive guidance covering the complete life cycle of AI-enhanced medical devices. The draft guidance includes recommendations for post-market performance monitoring and risk management, ensuring that AI-enhanced devices maintain safety and effectiveness after they reach the market. It promotes a comprehensive approach to managing risks and addresses transparency and bias throughout the life cycle of AI-enhanced medical devices. The FDA has also issued draft guidance on the use of AI in the development of drug and biological products. The FDA is seeking public comment on this draft guidance by April 7, 2025. --- Alexander L. Turner
“As spending, interest and education around artificial intelligence increases, pressure grows for contractors to either adapt or get left by the wayside.”
Why this is important: The construction industry is experiencing a surge in the use of artificial intelligence, with major contractors developing AI tools to manage vast amounts of project data. These AI systems, such as Balfour Beatty's StoaOne and Skanska's Sidekick, function as chatbots that provide instant access to project information, enhancing efficiency and decision-making. The widespread availability of AI is leveling the playing field, enabling contractors of all sizes to access and interpret complex data. However, this democratization also introduces pressure to adopt AI swiftly to remain competitive. Industry leaders emphasize the importance of integrating AI into data models to effectively manage unstructured information and maintain a competitive edge. Ultimately, the rise of AI in construction is crucial because it enhances efficiency, reduces project delays, and improves decision-making by providing instant access to vast amounts of project data. As AI tools become more accessible, companies that fail to adopt them risk falling behind competitors who leverage automation for cost savings and productivity gains. --- Jonathan A. Deasy
“The survey was conducted on 2,547 IT and cybersecurity professionals in the United States, United Kingdom, Germany, France, Australia, and Japan, including 7% of respondents from the healthcare and pharmaceutical sectors.”
Why this is important: Cybersecurity has become a critical component of healthcare delivery, with nearly every part of the system, from appointment scheduling to prescription ordering, reliant on connected technology. On average, healthcare companies receive and maintain more sensitive information than other industries, making them a prime target for cyberattacks. The growing number of cyberattacks has resulted in the federal government’s implementation of mandatory and voluntary measures for healthcare companies. Last month, the U.S. Department of Health and Human Services (HHS) proposed the first update to the HIPAA Security Rule in a decade. The proposed rule aims to improve cybersecurity and better protect the U.S. healthcare system from a growing number of cyberattacks, and would, among other things, mandate specific risk analyses and the use of multi-factor authentication.
In January 2024, HHS released voluntary cybersecurity goals for healthcare and public health organizations. Broken down into essential and enhanced safeguards, the goals aim to help organizations prevent cyberattacks, improve their response if an incident occurs, and minimize remaining risk after security measures are applied. At least a portion of HHS’ voluntary goals could become mandatory in the future, with significant penalties for noncompliance. Unfortunately, the ubiquity of cyberattacks means that cybersecurity has become a cost of doing business in healthcare, and it is imperative that healthcare organizations are equipped to prevent and properly respond to cyberattacks. --- Joseph C. Unger
“Besides considering accommodation issues with cellphone use, as well as fairness, many attorneys advise school administrators to take a commonsense approach to creating and implementing the policies.”
Why this is important: States and schools have struggled to identify best practices for handling cell phones in K-12 schools. While some institutions have banned them entirely during the school day, others have implemented more lenient approaches. No matter which approach a school district or locality takes, it is important that it consider the legal risks.
As this article highlights, the three primary reasons for cell phone restrictions at school are cyberbullying, the distraction the devices pose to children, and electronics impeding face-to-face communication. Each of these reasons provides a strong justification for implementing clear and enforceable cell phone policies in schools. Decisionmakers must balance students’ rights with the need for a conducive learning environment. These policies must also comply with federal, state, and local laws, ensuring they do not unintentionally discriminate against or disadvantage specific groups. While sound rules are key to avoiding related issues, schools must also ensure the consistent enforcement of those rules. Consistency in enforcement is crucial not only to ensure fundamental fairness in their application, but also to avoid claims of unfair treatment or violations of constitutional rights like free speech or due process.
Both parents and students have raised concerns about the implementation of cell phone bans in schools. Parents have expressed the importance of their children having access to their phones in case of emergencies, whether at school or involving personal matters. Meanwhile, students are apprehensive about the potential for school administrators to access their private information if their phones are confiscated. It is crucial for schools to establish clear and transparent protocols to address these concerns in an effective manner, ensuring both legal compliance and the trust of all parties involved.
Managing cell phone use in schools requires a thoughtful and balanced approach that considers the diverse perspectives of parents, students, and educators. While the need to address issues like cyberbullying, distractions, and the decline of face-to-face communication is clear, policies must also respect students' rights and address parental concerns about safety and accessibility. By crafting clear, equitable, and legally compliant policies and enforcing them consistently, schools can create an environment that supports learning while addressing modern challenges. Open communication and collaboration with all stakeholders will be key to ensuring the success and fairness of these measures. --- Nicholas A. Muto
“The construction industry is ‘by far’ the largest emitter of greenhouse gases of any sector, according to the U.N.”
Why this is important: Given the immense greenhouse gas emissions created by the global construction industry, all efforts that contractors, engineers, product manufacturers, and designers can make to implement “greener” construction practices will keep the industry moving in a positive direction. This article highlights several building technologies that reduce harmful emissions and provide other beneficial characteristics. For example, a renewable composite known as Renco was developed in Turkey. Renco is used to form blocks that look like giant Legos and can be used to construct buildings rated to withstand earthquakes and Category 5 hurricanes. The blocks are made of 40 percent repurposed materials, such as fiberglass and resin, and once they are fitted together, they are secured with glue. Because these blocks weigh approximately 80 percent less than concrete, more blocks can be transported to job sites at a time, reducing fuel needs. Even better, Renco is 100 percent recyclable, according to Patrick Murphy, Managing Director of Renco USA. Another more sustainable building product is cross-laminated timber, or CLT, which has been used in buildings in the U.K., Europe, and the U.S. Various studies predict that designing and constructing buildings from CLT yields substantial emissions reductions compared to concrete and steel. In Monaco, developers have created an “eco-neighborhood” called Mareterra that is built on water. The development was constructed using an underwater “caisson” that serves as a flood barrier and a haven for marine life. Mareterra may become a model for sustainable coastal development, especially as sea levels rise due to climate change. Finally, to help contractors evaluate the environmental impacts of their proposed projects, construction company Skanska USA developed EC3, a tool that allows contractors to quantify emissions. Mindful incorporation of more sustainable products and building techniques will eventually reduce the construction industry’s pollution. --- Stephanie U. Eaton
“Generative AI has many potential uses in healthcare; however, organizations that are required to comply with the Health Insurance Portability and Accountability Act (HIPAA) are not permitted to use these tools in connection with any ePHI unless the tools have undergone a security review and there is a signed, HIPAA-compliant business associate agreement in place with the provider of the tool.”
Why this is important: The use of AI in the medical industry is growing exponentially. AI is used both in clinical care and in streamlining back-office procedures. However, not all AI platforms are created equal, and the use of some may result in HIPAA violations. This is because AI learns from the information put into it; once entered, that information forms part of the basis of the platform’s algorithm moving forward. For AI to be used in a healthcare setting, it must be HIPAA compliant: the AI platform must have been subject to a security review, and there must be a signed business associate agreement with the AI platform’s developer. Some AI developers, like the developer of ChatGPT, refuse to sign a business associate agreement. This means that ChatGPT is not HIPAA compliant and cannot be used in connection with any electronic protected health information (ePHI), although it can be used with properly de-identified health information. The use of non-HIPAA-compliant AI in a healthcare setting, even where permissible, risks its inadvertent use with ePHI in violation of HIPAA. If AI is going to be used in a healthcare setting, it is best that a HIPAA-compliant AI platform be used for all tasks to eliminate the risk of inadvertent disclosure of protected health information. If you have questions regarding the use of AI in your medical practice or facility, need HIPAA compliance training, or need a HIPAA-compliant AI policy, please contact a member of Spilman’s Health Care Practice Group. --- Alexander L. Turner
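To illustrate the inadvertent-disclosure risk, here is a minimal, hypothetical sketch of scrubbing obvious identifiers from free text before it reaches a non-HIPAA-compliant AI tool. The regex patterns and the redact() helper are illustrative assumptions only; true de-identification requires HIPAA’s Safe Harbor method (removal of all 18 identifier categories) or Expert Determination, which this sketch alone does not accomplish.

```python
# Illustrative identifier scrubbing; NOT a substitute for HIPAA-compliant
# de-identification (Safe Harbor or Expert Determination).
import re

PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(text):
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Pt. seen 3/14/2024, SSN 123-45-6789, call 304-555-0147 with results."
print(redact(note))
# -> Pt. seen [DATE], SSN [SSN], call [PHONE] with results.
```

Even with scrubbing of this kind, names, addresses, record numbers, and many other identifier categories remain, which is why a signed business associate agreement and a vetted, HIPAA-compliant platform are the safer course.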
“Guardrails guide instructors as they embrace artificial intelligence to design customized learning experiences and other compelling AI developments.”
Why this is important: The U.S. Department of Education released a toolkit for educators in October 2024 aimed at providing guidance on the opportunities and risks that the integration of artificial intelligence (AI) poses to schools and classrooms. So far, there has not been much consensus on core policy, but some trends are emerging across the country. First, precautions (i.e., guardrails) seem to be prevailing over defined appropriate-use cases, as many teachers and administrators are worried about the rapid spread of this new and quickly evolving technology. While there are no hard-and-fast rules, best practices are a top concern.
There is also a focus on personalization, which can be individualized to the topic, grade level, or student age. Many teachers who are early adopters of AI are creating their own custom GPTs (AI chatbots trained on specific data) that relate to specific students. This helps tailor the use of AI to what the educator wants it to focus on and avoid what they do not.
Another aspect for schools and school districts is the use of AI in physical security and cybersecurity. Regarding physical security, some school districts are using AI to monitor bus safety and driver behavior to reduce accidents. There are also plans to apply AI tools to routing and logistics so the bus system can run as efficiently as possible. In cyberspace, schools are aware they must keep up with hackers who have rapidly evolving AI tools at their fingertips. They are realizing that the best tactic for fending off AI attacks is AI-powered security.
These trends emerge at a time when both student and teacher use of AI has been climbing year over year according to the Center for Democracy and Technology. Per the non-profit’s data, between the 2022-2023 and 2023-2024 school years, teacher use grew from 51 percent to 67 percent and high school student use grew from 58 percent to 70 percent. Yet even with such significant numbers, most teachers lack firm guidance on use and how to handle suspected AI-related plagiarism and other inappropriate use. It is possible that AI tools themselves will help alleviate these issues, as plagiarism detection software can help quickly identify where AI text generators may have been used or heavily relied on. However, it is quite clear from these trends that AI use in the classroom is something everyone must adapt to, evolve with, and accept.
If you are an educator or educational institution struggling to develop an appropriate AI use policy or guidance document, Spilman is here to help. --- Shane P. Riley
“Technology to help medical professionals and their patients has advanced substantially in recent years, especially in the aftermath of the COVID-19 pandemic, officials said.”
Why this is important: Technological advancements in healthcare, such as electronic medical records, telemedicine, and robotics, are improving the quality of patient care, expanding access to care, and shortening hospital stays and recovery times. Health systems in West Virginia are using these technologies to improve the patient experience and expand their reach, ensuring that West Virginians receive the best quality of care, no matter where in the state they live. --- Brienne T. Marco
"New research has revealed deficiencies in encryption, software patching, and system hosting across multiple industries.”
“Security experts now view ‘agentic’ AI tools that engage in multi-step problem solving and act on them autonomously as one of 2025’s biggest threats.”
Why this is important: These articles are written with the following message about cybersecurity: “Be afraid. Be very afraid!” (The Fly, 1986; also Addams Family Values, 1993). The first article outlines some “improvements” to data breach technology, new tools for the enemy, and confirms that at least 96 percent of S&P 500 companies were victims of cybersecurity attacks. I’m surprised that wasn’t 100 percent, and I’m sure that attempts were made against all of them. If you think you have escaped any attempted cyberattack and flown under the radar, you are deceiving yourself. Cybercriminals are using AI to look at every business with a presence on the internet. The second article outlines how AI, in particular, will make those attempts much more dangerous. Consider this analogy (mine, not the authors’): you can train a bird dog to retrieve birds, search a particular “grid” in the field, and even obey specific hand commands to go left, right, upfield, or downfield, or to flush. You can teach a herding dog to work a particular part of a flock of sheep and follow whistle commands to take them to a specific pen, latching the gate behind them. It is amazing to watch. You can program AI to do so much more. Forget the AI of only three years ago; the AI of today learns very quickly and applies what it learns. You can program AI, for example, to unleash different spam attacks on a company, monitor which ones work in which divisions, look up email and text contacts for people in those divisions, adjust what works on the fly, double down on the successful attacks, and use contact lists to find management, upper management, and financial employees anywhere in the company it wants. From there it can steal funds, install ransomware, send erroneous emails and texts, etc. And this can be done in minutes or hours, not days or weeks. We all need to take extraordinary protective steps immediately, or we will surrender future profits and operations to the enemy. “Be afraid. Be very afraid!” --- Hugh B. Wellons
Energy Demand Rises with Growth of Data Centers
By Steven W. Lee
In recent years, significantly higher electricity demand has been forecast alongside the growth of large-load data centers, driven primarily by the development of artificial intelligence. The projected growth of data centers has caused utilities and states to grapple with important questions for all electricity consumers: who bears the cost burden for the infrastructure and new generation necessary to support new large-load customers, and how will enough generation be built in time to support the rapid growth?
Click here to read the entire article.
This is an attorney advertisement. Your receipt and/or use of this material does not constitute or create an attorney-client relationship between you and Spilman Thomas & Battle, PLLC or any attorney associated with the firm. This e-mail publication is distributed with the understanding that the author, publisher and distributor are not rendering legal or other professional advice on specific facts or matters and, accordingly, assume no liability whatsoever in connection with its use.
Responsible Attorney: Michael J. Basile, 800-967-8251