Thought Leadership
Innovation thrives on the exchange of ideas, where shared thoughts spark new possibilities.
Read Datalytica’s latest articles written by our subject matter experts on current technology topics, emerging trends, and essential business principles. Our experts share their knowledge and perspectives to help you stay informed and ahead in an ever-evolving landscape, offering valuable insights that drive innovation and success.
-
Steve Salinas, Captain (Former), USMC
May 2024
Artificial Intelligence (AI) kill switches are hardware or software mechanisms designed to remotely disable or restrict the operation of AI systems if they exhibit undesirable or dangerous behaviors. Proposed by experts from academia and industry, including OpenAI and the University of Cambridge's Centre for the Study of Existential Risk, these kill switches are seen as a way to mitigate the risks associated with increasingly complex AI models.
One of the proposed uses for AI kill switches is to maintain control over the hardware running AI, making malicious or rogue behavior detectable, excludable, and quantifiable. Some implementations embed co-processors that check digital certificates and allow for hardware deactivation or performance reduction if the certificate is invalid. Such mechanisms could prevent misuse of AI in applications like military technology or critical infrastructure, and make it harder for malicious actors to circumvent the safeguards that keep foundation models compliant with an AI service provider's ethical guidelines.
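The certificate-gated mechanism described above can be sketched in a few lines. This is a minimal illustration of the idea, not a real co-processor API; the field names and the "full"/"restricted" policy are assumptions for the sake of the example:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class DeviceCertificate:
    # Hypothetical fields a monitoring co-processor might verify.
    subject: str
    not_after: datetime
    revoked: bool

def certificate_is_valid(cert, now=None):
    """A certificate is valid only if it is unrevoked and unexpired."""
    now = now or datetime.now(timezone.utc)
    return (not cert.revoked) and now <= cert.not_after

def select_operating_mode(cert):
    """Full performance with a valid certificate; degraded otherwise."""
    return "full" if certificate_is_valid(cert) else "restricted"
```

In a real deployment the validity check would involve cryptographic signature verification against a trusted authority, but the control flow is the same: an invalid or revoked certificate gates the hardware into a reduced or disabled state.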
However, implementing AI kill switches comes with pros and cons. Kill switches offer a tangible method to control potentially rogue AI behaviors, ensuring safety and regulatory compliance. On the other hand, the additional checks performed by kill switch software and hardware add computational burden, which can make robust kill switch implementations impractical on resource-constrained end systems. Furthermore, these very safeguards could become targets for hackers or be misused by authoritarian regimes to stifle innovation, free speech, and human rights. Balancing these aspects is crucial as AI continues to evolve and integrate safety features.
Kill switch technology is still new, and implementing this safeguard is nuanced; it may not be the right solution for every deployment of AI technology. As AI becomes increasingly important to organizations and businesses, so too does weighing expert perspectives on these complexities. At Datalytica, our deep understanding and innovative approaches are essential for any organization interacting with AI systems, ensuring both the effective deployment of AI and the necessary safeguards to protect against its risks.
-
Steve Salinas, Captain (Former), USMC
May 2024
Yesterday, the Cybersecurity and Infrastructure Security Agency (CISA) published three advisories on vulnerabilities targeting Industrial Control Systems (ICS). While these specific vulnerabilities are new, ICS have long been targets of malicious actors due to how the systems are employed. Unlike the conventional Information Technology (IT) systems they may be integrated with, Operational Technology (OT) prioritizes availability over confidentiality or integrity, which balances security considerations differently than in conventional systems.
Maintaining ICS is difficult. ICS provides control, monitoring, and visibility of OT that in many cases must operate continuously outside of strict maintenance windows. This constrains the update cycle of industrial systems to pre-planned instances, in the best of cases. These systems are often deployed in remote locations, further delaying the ability of operators and maintainers to identify vulnerabilities. Historically, one of the ways maintainers worked around the challenges of remote systems was by using hardcoded passwords: a standard password across devices ensured that when a maintainer arrived at a remote location, they would have some means of accessing the system of interest. ICSA-24-123-01 and ICSA-24-067-01 identify this as one of the issues with PowerPanel 4.9.0 and prior that ultimately allows an attacker to gain administrative access to the affected system.
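The hardcoded-credential anti-pattern the advisories describe, and one safer alternative, can be sketched as follows. This is a generic illustration, not the PowerPanel code path, and the names and values are assumptions:

```python
import hmac

# Anti-pattern (CWE-259): the same password baked into every shipped
# device. Anyone who recovers it from one unit, or from the firmware
# image, can log in to all of them.
HARDCODED_PASSWORD = "service123"  # illustrative value

def login_hardcoded(password: str) -> bool:
    return password == HARDCODED_PASSWORD

# Safer: a per-device secret provisioned at install time, compared in
# constant time to avoid timing side channels.
def login_provisioned(password: str, provisioned_secret: bytes) -> bool:
    return hmac.compare_digest(password.encode(), provisioned_secret)
```

Per-device provisioning does reintroduce the logistical problem hardcoded passwords were meant to solve, which is why it is typically paired with an out-of-band credential store or recovery procedure for field maintainers.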
ICS often have only a fraction of the computational power that consumer-grade hardware offers. This benefits industrial systems because it ensures that the systems used to monitor and control the OT do not increase power requirements and cost beyond what is necessary. Unfortunately, as ICSA-24-123-02 shows, this can result in the abandonment of some security measures, like input sanitization, in favor of performance. The resultant effect is that these systems can then be used to access IT systems holding confidential or proprietary data that an attacker can use in more advanced attacks.
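To illustrate why skipping input sanitization matters, compare a query built by string concatenation with a parameterized one. This is a generic SQL injection sketch, not the vulnerability reported in the advisory; the table and values are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensors (name TEXT, reading REAL)")
conn.execute("INSERT INTO sensors VALUES ('pump_1', 42.0)")

user_input = "pump_1' OR '1'='1"  # attacker-controlled value

# Unsafe: string concatenation lets the input rewrite the query,
# so the WHERE clause matches every row.
unsafe_query = "SELECT reading FROM sensors WHERE name = '%s'" % user_input
leaked = conn.execute(unsafe_query).fetchall()   # returns all rows

# Safe: a parameterized query treats the input purely as data.
safe = conn.execute(
    "SELECT reading FROM sensors WHERE name = ?", (user_input,)
).fetchall()   # returns no rows: no sensor has that literal name
```

Parameterized queries cost essentially nothing at run time, which is why dropping sanitization for performance is rarely a good trade even on constrained hardware.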
Securing ICS systems requires a systematic approach that impacts how future systems are developed and integrated into new critical infrastructure, as well as techniques to mitigate the impact of existing vulnerable technologies. Most vulnerabilities in ICS systems can be significantly mitigated by minimizing the connectivity of ICS infrastructure, segmenting networks, and improving the robustness of password policies. Protecting these systems from cyber threats is not just about safeguarding data but is crucial for the continuity of essential services that sustain society.
-
Steve Salinas, Captain (Former), USMC
April 2024
At the end of February, a significant announcement emerged from the White House, advocating for the widespread use of "memory safe" programming languages. This endorsement, detailed in a White House press release, highlights the crucial role of memory safety in mitigating software vulnerabilities. By pinpointing memory unsafe languages as a primary source of these vulnerabilities, the statement underscores the necessity for increased public and private sector collaboration to elevate the prominence and implementation of memory safe practices. This initiative marks a significant stride towards bolstering memory safety, but it raises questions: What exactly is memory safety, and why is it so crucial for the future of software development?
What is memory safety?
Memory safety can be thought of as a set of guarantees and assurances made by a programming language that prevent a class of vulnerabilities. In other words, by choosing to develop an application in a memory safe language, software can be immunized against an entire type of vulnerability. Unfortunately, that does not guarantee that your software is entirely bug-free or prevent other classes of vulnerabilities, but research indicates that memory vulnerabilities are the root cause of up to 70% of CVEs (Common Vulnerabilities and Exposures).
How does memory safety work?
There are two primary ways that memory safety is implemented. Some languages, like Rust, heavily scrutinize code at build time to ensure that checks are in place to prevent unsafe behavior. Other languages, like Python, provide “garbage collectors” and run-time checks to prevent erroneous memory conditions. Both of these approaches are hugely beneficial to developers because the final product is robust by default.
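As a minimal illustration of the run-time flavor of memory safety, an out-of-bounds read in Python raises a catchable exception instead of silently returning whatever bytes happen to sit past the end of the buffer, as an equivalent read in an unsafe language might:

```python
buffer = [10, 20, 30]

try:
    value = buffer[5]  # out-of-bounds read
except IndexError:
    # The runtime detected the unsafe access and raised an error,
    # rather than reading adjacent memory.
    value = None

assert value is None
```

Rust provides the analogous guarantee largely at compile time: code whose memory accesses cannot be proven safe is rejected before a binary is ever produced.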
Why do we use unsafe languages?
To fully grasp the new prioritization of memory safe languages, it’s important to understand why developers use unsafe languages. Historically, unsafe languages have been the best way to maximize software performance on bare metal systems. In particular, some hardware cannot run the garbage collecting services that some memory safe languages require, whether due to resource limitations or compatibility issues. These issues, combined with the maturity of unsafe languages in other development respects, make unsafe languages viable for software developers and end users in some contexts.
Industrial control systems that operate critical infrastructure are often among the resource-constrained hardware that has leveraged unsafe code to let developers squeeze system resources for performance. Paradoxically, this means that some of our most important systems are the most vulnerable to exploitation. However, memory safe languages offer varying degrees of compatibility with unsafe code, allowing new code to be built on top of unsafe code bases. This can potentially ensure that as developers continue to build repositories, new code provides assurances that were not integrated in earlier versions.
Why should companies care?
The software development landscape is rapidly evolving, with a marked increase in the demand for memory safety, driven by government advocacy and customer expectations. Consequently, software development companies must adapt their strategies, moving away from unsafe codebases and development environments towards embracing memory safe programming languages wherever feasible.
Currently, customers are not likely to find vendors who supply software exclusively developed in memory safe languages that meets all their needs. The sheer volume and inherent value of existing unsafe code make it daunting for many projects to transition. Yet, as memory safe languages evolve and their interoperability with traditional code improves, the barriers to adopting these languages are decreasing. Enhanced interoperability means that integrating new, memory-safe features into legacy systems becomes more straightforward, mitigating the costs and complexities of upgrading large-scale projects.
However, it's crucial to recognize that not all projects can seamlessly switch to memory safe languages due to compatibility issues or the specific resource demands of certain platforms. Despite these challenges, the steady push from governmental bodies, the rising demand from consumers, and the overarching necessity for enhanced security protocols make the shift toward memory safety an imperative for businesses. In the foreseeable future, companies may have to attest to their degree of adoption of memory safe languages as part of security, compliance, or insurance requirements.
Conclusion
Memory safety represents not just a technological shift but a cultural one in the world of software development. As the White House's push for memory safe programming languages underscores, there's a growing recognition of the role that software infrastructure plays in national and global security. The pivot towards memory safety is more than just an attempt to curb the frequency and severity of cyber-attacks; it's a proactive measure to build a more resilient digital future. As developers and companies navigate this shift, they're not only responding to immediate security concerns but also contributing to a foundation that will support safer, more reliable software systems for years to come. This move towards memory safe languages, therefore, isn't just about preventing the next big data breach; it's about ensuring that our digital infrastructure can support the increasingly complex, interconnected world we live in. By prioritizing memory safety, we're investing in a future where technology can continue to advance without being undermined by fundamental security flaws.
-
Steve Salinas, Captain (Former), USMC
March 2024
What is Generative AI?
Generative Artificial Intelligence, sometimes called “Gen AI”, is a newly popular technology that is revolutionizing content creation, with implications that extend into various disciplines. The technology has become so popular that a recent executive order, Executive Order 14110, provides guidance on AI security considerations. Generative AI is trained on openly available samples and can provide a user with text, images, or even video based on prompts entered through a chat-like interface. Even though artificial intelligence has been a growing field of research for decades, only in the past couple of years have accessible interfaces made these generative products viable for commercial use.
How can I use Generative AI?
A plethora of use cases exist for generative AI, and best leveraging the technology depends not only on desired outcomes, but on the availability of quality data used to train the model. Commercially deployed models can typically provide immediate value by augmenting existing human capital and improving efficiency in development workflows, information digests, or even marketing imagery. But it’s important to understand that the data gathered by these platforms can sometimes be used to train future models. Most vendors provide commercial offerings that prohibit use of a user’s inputs for training, but like any other security context, it remains important to understand what information the vendor may be retaining so that an organization can properly assess the risk of integrating generative solutions into sensitive workspaces.
Open-source models offer a different solution and enable tech-savvy organizations to train and deploy their own models to meet specific needs. This may give an organization increased control over the data used to train the model, the usage data gathered while the model is deployed, and the protocols used to interact with the model. Open-source models can also be run locally and can therefore enable AI integration in systems with limited connectivity to the internet. The tradeoff of an open-source implementation is the technical difficulty of implementing, deploying, and maintaining the solution, which may be prohibitive for some organizations.
What are the security concerns?
Like any other technology, Artificial Intelligence brings with it security considerations that are worth thinking about. Research indicates that some types of adversarial attacks can result in models leaking data they were trained on, or behaving in unexpected ways that can potentially damage a company’s reputation.
If you plan on using a third-party model through a programming interface, it can be important to understand where and how your usage data is being stored, for how long, and what mitigations and safeguards are in place to ensure the model behaves as expected. If you’re training your own model for a tailored use case, it’s important to understand that malicious users or unexpected input can result in anomalous behavior that you should account for during the design of your implementation.
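One simple way to account for unexpected input is a pre-flight check on prompts before they ever reach the model endpoint. The sketch below is an illustrative guardrail, not a vendor requirement; the length limit and character policy are assumptions a real deployment would tune:

```python
# Illustrative limit; real limits depend on the model and use case.
MAX_PROMPT_CHARS = 4000

def prompt_is_acceptable(prompt: str) -> bool:
    """Reject empty, oversized, or control-character-laden prompts."""
    if not prompt or len(prompt) > MAX_PROMPT_CHARS:
        return False
    # Allow printable text plus ordinary whitespace; reject anything
    # that tries to smuggle in control characters.
    return all(ch.isprintable() or ch in "\n\t" for ch in prompt)
```

A check like this does not stop determined prompt-injection attempts, but it cheaply filters malformed or abusive input and gives you a single place to log and audit what is being sent to a third party.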
One of the benefits of the public internet is that data for training generative algorithms is plentiful. AI companies are therefore able to scrape the internet for human-generated content that’s ultimately used to train large proprietary generative systems. This development approach has raised copyright and privacy concerns from several groups, and the exact consequences and legal nuances are still being worked out. However, companies can be sure that the environment will continue to evolve in the coming years as these issues are resolved.
Research increasingly demonstrates that content output by Generative AI, when used to train successive generative models, produces worse results. As generative algorithms integrate into existing platforms, the synthetic content of earlier generative algorithms can degrade the performance of new models. Research into this topic is ongoing, but this facet of Generative AI indicates that datasets of human-generated content will remain valuable to future development efforts.
Conclusion
There’s a lot to think about as a company looks to leverage generative solutions in the marketplace. The topics covered here are active subjects of policy debate as demand for AI risk mitigations continues to increase. How can you tell if your AI solution is secure? Contact us and find out!