Security by Design and evolving global regulations are redefining how manufacturers and developers secure emerging technologies like IoT and AI.
This Security Compass webinar explores how legislative frameworks and industry standards are influencing secure software practices in the age of AI and IoT. Featuring expert insights from Donald Johnston, a partner and technology lawyer at Aird & Berlis LLP, the session unpacks key developments from North America and Europe and offers practical advice for security leaders, developers, and decision-makers.
What Is the Role of Security by Design in Regulatory Guidance?
Security by Design shifts responsibility for cybersecurity from consumers to manufacturers and is central to new regulatory frameworks.
The paper “Shifting the Balance of Cybersecurity Risk,” co-authored by agencies across the US, Canada, Europe, and Australasia, promotes:
- Building security features into products from the outset
- Reducing the need for consumers to manage configurations or updates
- Using secure programming languages like Rust to eliminate common vulnerabilities (see the sketch after this list)
- Adopting tailored threat models early in the SDLC
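To make the memory-safety point concrete, here is a minimal Rust sketch (the function and variable names are illustrative, not from the webinar) showing how the compiler turns use-after-free into a build error and bounds-checks every buffer access:

```rust
// Illustrative only: ownership makes use-after-free a compile-time error
// rather than a shippable vulnerability, and indexing is bounds-checked.

fn parse_frame_type(frame: Vec<u8>) -> u8 {
    // Takes ownership of `frame`; the caller can no longer access it.
    frame[0]
}

fn main() {
    let frame = vec![0x7e, 0x01, 0x02];
    let kind = parse_frame_type(frame);
    println!("frame type: {kind:#04x}");

    // println!("{:?}", frame);
    // ^ would not compile: `frame` was moved into parse_frame_type, so any
    //   later (use-after-free style) access is rejected before release.
}
```

Equivalent C code with a dangling pointer or an off-by-one read would compile cleanly and only fail, if at all, in the field.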
The ultimate goal is to make secure engineering a core business priority rather than a reactive measure.
How Does Security by Default Complement This Approach?
Security by Default ensures products are resilient to cyber threats out of the box, with minimal user intervention required.
Products should be shipped with:
- No default passwords
- Mandatory multifactor authentication
- Secure logging and configuration standards
- Pre-hardened systems to reduce reliance on post-deployment security tuning (see the provisioning sketch after this list)
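As a rough illustration of what shipping with secure defaults can look like at provisioning time, the sketch below generates a unique per-device credential and turns MFA and audit logging on by default. The struct and function names are hypothetical, and the example assumes the Rust `rand` crate (0.8-style API):

```rust
use rand::Rng; // assumption: the `rand` crate, 0.8-style API
use rand::distributions::Alphanumeric;

// Hypothetical configuration record; fields are illustrative, not a standard.
struct DeviceConfig {
    admin_password: String, // unique per device, never a shared default
    mfa_required: bool,
    audit_logging: bool,
}

fn provision_device() -> DeviceConfig {
    // Generate a unique, random credential at manufacture time instead of
    // shipping a shared default such as "admin/admin".
    let admin_password: String = rand::thread_rng()
        .sample_iter(&Alphanumeric)
        .take(24)
        .map(char::from)
        .collect();

    DeviceConfig {
        admin_password,
        mfa_required: true,  // secure default: enabled, not opt-in
        audit_logging: true, // secure default: logging on out of the box
    }
}

fn main() {
    let config = provision_device();
    println!(
        "MFA: {}, audit logging: {}, credential length: {}",
        config.mfa_required,
        config.audit_logging,
        config.admin_password.len()
    );
}
```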
This parallels the airline model of security: consumers follow basic rules, but system-wide security is ensured by design.
What Are the Four Core Objectives of the EU Cyber Resilience Act?
The EU Cyber Resilience Act aims to improve the security lifecycle of embedded devices and IoT products.
| Objective | Description |
|---|---|
| Secure by Design | Security is built in from development through the product lifecycle |
| Regulatory Coherence | Aligned cybersecurity compliance across member states |
| Transparency | Clear disclosure of product security characteristics |
| Usability | Enable consumers and businesses to operate products securely |
These objectives target known weaknesses in consumer-grade connected devices and emphasize the need for robust update mechanisms and vendor accountability.
How Should Executives and Developers Approach These Changes?
Securing buy-in from leadership requires framing security as a long-term business value, not just a technical task.
To gain executive support:
- Emphasize security’s role in protecting brand reputation and shareholder value
- Highlight the risks of deploying vulnerable products, including legal and financial exposure
- Leverage frameworks like NIST’s Secure Software Development Framework (SSDF) to guide implementation
Security-conscious development should be incentivized and operationalized across the organization.
What Is the US Cyber Trust Mark and Why Does It Matter?
The US Cyber Trust Mark signals product security compliance to consumers and encourages industry adoption of best practices.
Launched by the Biden-Harris administration, this voluntary label:
- Is based on NIST standards for secure consumer devices
- Includes a QR code linking to a national registry
- Aims to increase consumer trust and market competitiveness
Manufacturers and retailers such as Amazon, Best Buy, LG, and Samsung have already signed on, indicating growing momentum.
How Is AI Being Regulated? Insights from Canada and Beyond
AI regulation is evolving cautiously, focusing on transparency, risk assessment, and human rights protections.
Canada’s proposed Artificial Intelligence and Data Act (AIDA) includes:
- Classification of systems as high or low impact
- Mandatory risk assessments and plain-language summaries
- Administrative monetary penalties for non-compliance
Meanwhile, NIST’s AI Risk Management Framework offers a principle-based approach to trustworthy AI, supporting:
- Secure architectures (e.g., memory-safe languages and bounded pointers; see the sketch after this list)
- Mitigation of bias and vulnerabilities
- Continuous monitoring and iterative development
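As a brief illustration of the “bounded pointers” idea, here is a minimal Rust sketch (illustrative names, not taken from the framework) in which an out-of-range request comes back as a recoverable value instead of an arbitrary memory read:

```rust
// Minimal sketch: slice access is always bounds-checked, and the fallible
// `get` API turns an out-of-range index into `None` rather than the
// unchecked pointer arithmetic a C implementation might perform.

fn nth_token(tokens: &[u32], index: usize) -> Option<u32> {
    tokens.get(index).copied()
}

fn main() {
    let tokens = [101_u32, 2009, 3012, 102];

    assert_eq!(nth_token(&tokens, 2), Some(3012));
    assert_eq!(nth_token(&tokens, 99), None); // no out-of-bounds read possible

    println!("bounded access checks passed");
}
```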
What About Copyright and AI-Generated Content?
Under current legal interpretations, AI-generated content is not eligible for copyright protection unless a human is directly involved in the creative process.
A landmark U.S. case reaffirmed that creativity must be human-driven for copyright to apply. While this may evolve, for now:
- AI systems cannot hold IP rights
- Human oversight remains essential for legal attribution
Final Takeaway
Security and privacy are no longer optional—they’re foundational.
From secure firmware in smart refrigerators to trustworthy generative AI systems, regulators are pushing for enforceable, industry-wide standards. Organizations that embrace Security by Design, adhere to frameworks like NIST’s SSDF, and prepare for upcoming certification programs (like the US Cyber Trust Mark or ENISA’s EU equivalents) will be better positioned to innovate responsibly and retain customer trust.