
How ISMS can help developers follow NCSC's new secure AI guidelines

Louis Strauss | February 1, 2024

The UK National Cyber Security Centre (NCSC), the US Cybersecurity and Infrastructure Security Agency (CISA), and 21 other international agencies have released new guidelines for developing AI-enabled systems, whether built from scratch or on top of existing services and tools.

These guidelines aim to guide AI developers through the design, development, deployment, and operation of AI systems to ensure that security remains a core component throughout their life cycle.

The UK NCSC and US CISA published these guidelines in late November 2023, shortly after the first AI Safety Summit, where leading tech organizations, government agencies, and world leaders committed to the responsible and secure development of artificial intelligence.

Here, we will dive into the guidelines' scope and application, and explain how an information security management system (ISMS) can play a crucial role in meeting them.

What are NCSC's new guidelines?

The Guidelines for Secure AI System Development are recommendations for organizations developing and deploying AI systems. They aim to help organizations build and maintain AI systems that are:

  • Secure: Resistant to attacks and other threats. 
  • Reliable: Functioning as intended and delivering expected results. 
  • Fair: Operating in a way that is unbiased and non-discriminatory.

NCSC's new guidelines have four key areas: secure design, secure development, secure deployment, and secure operation and maintenance.

 

 

Secure design

Secure design focuses on building security into the AI system from the very beginning. That involves several key steps:

  • Raising staff awareness of threats and risks: Educate everyone involved in the development process about potential security vulnerabilities and the impact they can have.
  • Modeling the threats to your system: Conduct a thorough threat modeling exercise to identify the threats your AI system is likely to face. This will help you prioritize your security efforts and focus on the most critical risks (a simple threat-register sketch follows this list).
  • Designing your system for security, functionality, and performance: Don't just focus on how well your AI system performs its intended task; make sure it's also designed to be secure from attack. That may involve using secure coding practices, choosing secure algorithms, and protecting your data from unauthorized access. 
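
To make this concrete, here is a minimal sketch of what a lightweight threat register might look like in Python. The threat names, STRIDE categories, and 1-to-5 scoring scale are illustrative assumptions, not prescriptions from the guidelines.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str        # short description of the threat
    category: str    # e.g., a STRIDE category
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def risk_score(self) -> int:
        return self.likelihood * self.impact

# Hypothetical threats for an AI-enabled service
threats = [
    Threat("Prompt injection via user input", "Tampering", 4, 4),
    Threat("Training-data poisoning", "Tampering", 2, 5),
    Threat("Model extraction via repeated queries", "Information disclosure", 3, 3),
]

# Surface the highest-scoring risks first so they get attention early
for t in sorted(threats, key=lambda t: t.risk_score, reverse=True):
    print(f"{t.risk_score:>2}  {t.category:<24} {t.name}")
```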

The guidelines also recommend considering security benefits and trade-offs when selecting your AI model. That includes factors such as:

  • Model architecture: Some models are more susceptible to attacks than others. For example, deep neural networks may be more vulnerable to adversarial examples, where subtle modifications to input data can cause the model to make incorrect predictions. Secure design advocates selecting architectures less prone to such attacks, like decision trees or support vector machines.
  • Configuration: How you configure your model can also affect its security. Certain configurations include built-in security features like input validation, anomaly detection, and data sanitization (see the input-check sketch after this list). Choosing models with readily available or implementable defense mechanisms strengthens the AI system's security posture.
  • Training data: The data you use to train your model can influence its biases and vulnerabilities. Biases and inaccuracies in training data can translate directly into biased and inaccurate model outputs. Secure design emphasizes using high-quality, well-curated data that is representative of the intended use case and that minimizes potential biases that could lead to discriminatory or unfair outcomes.
  • Training algorithm and hyperparameters: The training algorithm and hyperparameters you choose can also affect security. Hyperparameters such as learning rate and network depth influence the model's robustness and susceptibility to noise. Secure design prioritizes configurations that enhance robustness and minimize the risk of model instability, which could lead to unpredictable or erroneous outputs.
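
As one concrete example of the defenses mentioned above, the sketch below flags inference inputs that sit far outside the training distribution, a simple form of anomaly detection. The feature statistics and threshold are placeholder values you would derive from your own training data.

```python
import numpy as np

# Per-feature statistics saved at training time (placeholder values)
train_mean = np.array([5.1, 3.5, 1.4, 0.2])
train_std = np.array([0.8, 0.4, 1.7, 0.7])

def looks_anomalous(x: np.ndarray, z_threshold: float = 4.0) -> bool:
    """Flag inputs that sit far outside the training distribution."""
    z_scores = np.abs((x - train_mean) / train_std)
    return bool(np.any(z_scores > z_threshold))

sample = np.array([5.0, 3.6, 1.5, 30.0])  # last feature is wildly out of range
if looks_anomalous(sample):
    print("Rejecting input: possible adversarial or corrupted sample")
```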

Complying with these recommendations ensures your AI system is designed with security in mind from the start, making it more resilient to attacks and less likely to cause harm.

 

 

Secure development

Secure development emphasizes secure coding practices to minimize vulnerabilities in your AI system during implementation. Key recommendations include:

Using secure coding techniques

  • Input validation: Sanitize and validate all user input to prevent attacks like SQL injection or cross-site scripting (see the sketch after this list). 
  • Data sanitization: Clean and remove unnecessary data from inputs and outputs to minimize risks like exposure of sensitive information. 
  • Memory management: Use memory management techniques like bounds checking and memory deallocation to avoid buffer overflows and data leaks. 
  • Secure authentication and authorization: Implement strong authentication and authorization mechanisms to control access to your system and data.
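
Here is a brief sketch of what the first two practices might look like in Python: a conservative allow-list validator for user prompts, plus a parameterized SQL insert that leaves escaping to the database driver. The length limit, character set, and table name are assumptions for illustration.

```python
import re
import sqlite3

MAX_PROMPT_LENGTH = 2000
ALLOWED_PATTERN = re.compile(r"^[\w\s.,!?'\"-]+$")  # conservative allow-list

def validate_prompt(prompt: str) -> str:
    """Reject oversized or suspicious input before it reaches the model."""
    if len(prompt) > MAX_PROMPT_LENGTH:
        raise ValueError("Prompt exceeds maximum length")
    if not ALLOWED_PATTERN.match(prompt):
        raise ValueError("Prompt contains disallowed characters")
    return prompt

def log_prompt(conn: sqlite3.Connection, user_id: int, prompt: str) -> None:
    # Parameterized query: the driver escapes values, preventing SQL injection
    conn.execute(
        "INSERT INTO prompt_log (user_id, prompt) VALUES (?, ?)",
        (user_id, prompt),
    )
```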

Managing vulnerabilities in dependencies

  • Identify and update vulnerable dependencies: Regularly scan your code and dependencies for known vulnerabilities and promptly apply security patches (a scanning sketch follows this list). 
  • Use secure libraries and frameworks: Choose libraries and frameworks with a strong security track record. 
  • Minimize dependency footprint: Use only the required dependencies and maintain a minimal software footprint to reduce exposure to potential vulnerabilities.
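
One way to automate that scan in a Python project is to run pip-audit against pinned dependencies, as sketched below. The JSON field names shown match pip-audit's output at the time of writing; treat them as an assumption to verify against your installed version.

```python
import json
import subprocess

# Run pip-audit (installed separately) against pinned dependencies
result = subprocess.run(
    ["pip-audit", "--requirement", "requirements.txt", "--format", "json"],
    capture_output=True,
    text=True,
)

report = json.loads(result.stdout)
for dep in report.get("dependencies", []):
    for vuln in dep.get("vulns", []):
        fixes = ", ".join(vuln.get("fix_versions", [])) or "none"
        print(f"{dep['name']} {dep['version']}: {vuln['id']} (fix versions: {fixes})")
```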

Testing for security vulnerabilities

  • Conduct static and dynamic code analysis: Use static code analysis tools to identify potential vulnerabilities in your code and perform dynamic testing to simulate real-world attack scenarios (a security test sketch follows this list). 
  • Penetration testing: Involve ethical hackers to identify and exploit vulnerabilities in your system before attackers do. 
  • Adhere to secure coding standards: Follow established secure coding standards like CWE Top 25 or MISRA to reduce common coding errors and vulnerabilities.
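
Dynamic testing can start as simply as a unit test that throws hostile input at your validation layer. The sketch below assumes the validate_prompt helper from the earlier secure-coding example lives at a hypothetical myservice.validation module.

```python
import pytest

from myservice.validation import validate_prompt  # hypothetical module path

INJECTION_PAYLOADS = [
    "'; DROP TABLE prompt_log; --",  # classic SQL injection string
    "<script>alert(1)</script>",     # cross-site scripting attempt
    "A" * 10_000,                    # oversized input
]

@pytest.mark.parametrize("payload", INJECTION_PAYLOADS)
def test_hostile_input_is_rejected(payload):
    with pytest.raises(ValueError):
        validate_prompt(payload)
```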

Additional recommendations

  • Utilize code review processes: Implement secure code review practices to identify and address security issues before deployment. 
  • Use secure development tools and frameworks: Leverage tools and frameworks that promote secure coding practices and automate vulnerability scanning. 
  • Maintain secure development configurations: Ensure your development environment is properly configured to prevent accidental errors and malicious attacks.

Following these guidelines minimizes the risk of vulnerabilities in your AI system and helps you build a more secure and reliable solution.

 

 

Secure deployment

Secure deployment focuses on protecting your AI system after development and before full public access. The key points are:

Infrastructure and model protection

  • Deploy in secure environments: Use secure cloud platforms or on-premises infrastructure with strong physical and logical security measures. 
  • Encrypt models and data: Protect access to models and sensitive data with encryption at rest and in transit (see the sketch after this list). 
  • Implement access controls: Limit access to models and data based on the principle of least privilege. 
  • Monitor infrastructure and logs: Monitor infrastructure for suspicious activity and log all access and events for analysis.
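
For encryption at rest, a minimal sketch using the cryptography library's Fernet primitive is shown below. In practice the key would come from a key-management service rather than being generated inline, and the file names are placeholders.

```python
from cryptography.fernet import Fernet

# Illustrative only: in production, fetch this key from a key-management service
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt serialized model weights before writing them to shared storage
with open("model.bin", "rb") as f:
    encrypted = fernet.encrypt(f.read())
with open("model.bin.enc", "wb") as f:
    f.write(encrypted)

# Decrypt at load time inside the serving environment
with open("model.bin.enc", "rb") as f:
    weights = fernet.decrypt(f.read())
```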

Incident management and responsible release

  • Develop incident response plans: Create a plan for identifying, containing, and remediating security incidents. 
  • Conduct beta testing: Release the AI system to a limited audience for thorough testing and feedback before general deployment. 
  • Monitor feedback and performance: Continuously monitor the performance and user feedback of the deployed system to identify and address any security concerns. 
  • Communicate vulnerabilities responsibly: Disclose vulnerabilities responsibly and promptly to mitigate risks and maintain user trust.

Additional recommendations

  • Use containerization or virtualization: Deploy models as containers or virtual machines to enhance isolation and resource management. 
  • Regularly update models and software: Maintain the latest versions of models and software to patch vulnerabilities and address security improvements. 
  • Automate security testing: Integrate automated security testing into the deployment process for continuous security validation. 
  • Build security awareness: Train staff on secure deployment and maintenance practices and on incident response procedures.

These measures lessen the risk of vulnerabilities, ensure responsible release, and foster a culture of security throughout the AI system lifecycle.

 

 

Secure operation and maintenance

Secure operation and maintenance emphasizes the need for vigilance after deployment. Here are the key aspects:

Monitoring and logging

  • Monitor system behavior: Track overall performance, usage patterns, and any deviations from expected behavior to identify potential threats or anomalies (a monitoring sketch follows this list). 
  • Monitor system input: Scrutinize user input and data feeding the system to detect malicious manipulations or biases. 
  • Log activity and events: Record all access, changes, and occurrences within the system for forensic analysis and anomaly detection.
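
The sketch below shows one crude but useful monitoring signal: logging every prediction and raising a warning when a single class starts to dominate the stream, which can indicate drift or deliberate manipulation. The thresholds are illustrative assumptions.

```python
import logging
from collections import Counter

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("ai-monitor")

prediction_counts: Counter = Counter()

def record_prediction(label: str, alert_ratio: float = 0.9) -> None:
    """Log each prediction and warn when one class dominates the stream."""
    prediction_counts[label] += 1
    total = sum(prediction_counts.values())
    top_label, top_count = prediction_counts.most_common(1)[0]
    log.info("prediction=%s total=%d", label, total)
    if total >= 100 and top_count / total > alert_ratio:
        log.warning("Class %r accounts for %.0f%% of predictions; investigate",
                    top_label, 100 * top_count / total)
```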

Update management

  • Follow a secure-by-design approach to updates: Ensure updates and patches are rigorously tested and applied in a controlled manner to minimize risk. 
  • Prioritize timely updates: Address vulnerabilities promptly by applying security patches without delay. 
  • Maintain version control: Implement a system for tracking, managing, and reverting to known good versions if issues arise.

Information sharing

  • Collect and share lessons learned: Gather and share experiences and insights within your organization and through relevant communities to continuously improve security practices. 
  • Contribute to vulnerability databases: Report discovered vulnerabilities to relevant vendors and databases to foster collective awareness and mitigation efforts. 
  • Collaborate with security experts: Engage with security specialists and researchers for valuable guidance and threat intelligence.

Additional recommendations

  • Perform regular security assessments: Conduct periodic security audits and penetration testing to identify and address weaknesses. 
  • Maintain staff training: Continuously educate staff involved in system maintenance on secure practices and incident response procedures. 
  • Automate security tasks: Employ tools and automation to enhance efficiency and accuracy in security monitoring and mitigation efforts. 
  • Regularly review and update security policies: Adapt security measures based on evolving threats and changing risk landscapes.

These guidelines help you maintain a proactive and vigilant approach to operating and maintaining your AI system, fostering a secure environment and minimizing the risk of compromised functionality or safety.

 

How can an ISMS help?

An Information Security Management System (ISMS) is a framework for managing an organization's information security risks. In particular, it's a set of policies, processes, and procedures designed to protect information assets from unauthorized access, use, disclosure, disruption, modification, or destruction. That makes it a valuable tool for developers when complying with NCSC's new secure AI guidelines. Here's how:

Implementing secure design principles

  • Threat modeling: ISMS can facilitate threat modeling workshops and provide templates and tools to identify potential threats and vulnerabilities specific to AI systems.
  • Secure coding practices: ISMS can integrate secure coding guidelines and best practices into development processes, promoting the use of static code analysis tools and secure coding libraries.
  • Security reviews: ISMS can establish security review processes for AI system designs, ensuring vulnerabilities are identified and addressed before deployment.

Supporting secure development

  • Vulnerability management: ISMS can automate vulnerability scanning of dependencies and libraries used in AI development, promptly alerting developers to known security issues.
  • Penetration testing: ISMS can facilitate penetration testing activities involving ethical hackers to discover and exploit vulnerabilities in AI systems before attackers do.
  • Secure development training: ISMS can provide training programs for developers on secure coding practices, threat modeling, and secure development tools relevant to AI.

Enabling secure deployment

  • Access control management: ISMS can enforce access control policies for models and data, ensuring only authorized individuals have access based on the principle of least privilege.
  • Logging and monitoring: ISMS can centralize logging and monitoring for AI systems, providing developers with insights into system behavior and potential security events.
  • Incident response management: ISMS can integrate incident response procedures for AI systems, enabling developers to identify, contain, and remediate security incidents effectively.

Facilitating secure operation and maintenance

  • Continuous monitoring: ISMS can provide continuous monitoring of AI system performance, data inputs, and security logs, enabling developers to detect anomalies and potential threats.
  • Patch management: ISMS can automate patch management processes for AI systems and dependencies, ensuring timely application of security updates.
  • Security awareness programs: ISMS can support ongoing security awareness programs for developers, ensuring they stay updated on the latest threats and best practices for secure AI operations.

To summarize, an ISMS can serve as a central hub for coordinating and implementing NCSC's secure AI guidelines throughout the development lifecycle. By providing tools, processes, and training, an ISMS empowers developers to build and maintain secure and reliable AI systems.

It's important to remember that an ISMS is a framework, and its effectiveness depends on how it's implemented and tailored to your specific organization and AI development projects.

 

Take the next steps with 6clicks

6clicks plays a pivotal role in helping organizations comply with NCSC's new secure AI guidelines, providing tools and functionality that streamline and automate secure AI development through:

Aligning with security principles

Our software solutions can help organizations align with the security principles advocated by the NCSC guidelines, such as secure design, threat modeling, and human oversight, helping you build systems that adhere to these best practices and making compliance with the guidelines smoother.

Proactive risk management

6clicks also encourages proactive risk assessment and mitigation throughout the AI development lifecycle. That aligns with the NCSC guidelines' emphasis on secure design, where potential vulnerabilities are identified and addressed early on. This proactive approach minimizes the risk of non-compliance later and ensures organizations are prepared to address potential security concerns as outlined by the guidelines.

Data governance and privacy

6clicks emphasizes data governance and privacy, with practices like data minimization, anonymization, and robust security measures being essential. These practices align with the NCSC guidelines' focus on data security and privacy, emphasizing encryption, access control, and data minimization. Organizations can ensure compliance with data privacy regulations embedded within the NCSC guidelines by adopting responsible data management practices.

Continuous improvement and monitoring

Monitoring and improving AI systems is also crucial, ensuring they remain ethical, safe, and aligned with regulations. That aligns with the NCSC guidelines' emphasis on ongoing maintenance and monitoring of AI systems to identify and address potential security vulnerabilities or compliance issues. By embracing continuous improvement, organizations can proactively adapt their AI systems to evolving regulations and address any non-compliance concerns raised by the NCSC.

We are essential partners in facilitating secure AI development by offering a centralized platform that integrates, automates, and streamlines various security processes. That empowers organizations to follow NCSC's guidelines effectively, build secure AI systems, and manage information security risks.

Written by Louis Strauss

Louis began his career in Berlin, where he also founded Dobbel Berlin – Berlin’s curated search engine. Returning to Melbourne to join KPMG, Louis led the development of software designed to distribute IP and create a platform for use by advisors and clients. While at KPMG, Louis also co-authored Chasing Digital: A Playbook for the New Economy. Louis is accomplished in stakeholder management, requirements gathering, product testing, refinement, and project implementation. Louis also holds a Bachelor of Engineering and a Master of Information Systems from the University of Melbourne.