The future of security rests at the intersection of human and computer intelligence, and our ability to leverage AI is a critical component driving the industry’s transformation.
But the speed of innovation isn’t the only hurdle. While many organizations are already experiencing the positive impact of AI-powered security, the industry is facing an important juncture.
As with other innovations throughout history – such as the internet, social media, and e-commerce – technological advancement can sometimes outpace governance and regulation, leading to a period of adjustment as people, organizations, and policymakers catch up.
Such governance is necessary not to deter innovation, but to protect it. The implementation and widespread adoption of AI comes with certain risks that, if left unchecked, pose serious privacy and security issues. These include potential misuse by threat actors, questions about intellectual property ownership, and algorithmic bias. Regulations and guidelines help identify and mitigate these risks, enabling responsible AI use.
But AI regulations are still under development, with some regions taking a more proactive stance than others. Earlier this year, the EU adopted the world's first comprehensive AI regulation, the EU AI Act, but in other regions, such as North America and Asia Pacific, governance is more fragmented.
Additionally, some companies, including Securitas, are establishing their own guardrails to ensure they develop and deploy AI responsibly. Using AI responsibly means both complying with applicable laws and regulations and upholding ethical standards.
“The power of AI comes with a profound responsibility – in our industry and beyond,” says Daniel Sandberg, Securitas’ director of artificial intelligence. “As we harness these advanced technologies to enhance security and safety, we must collectively commit to upholding the highest standards of ethical practice. Doing so also unlocks tremendous innovation and value for our communities.”
Securitas’ Responsible AI Framework
To ensure we use AI in an ethical and responsible way – where we adhere to the highest standards of quality, privacy, and security – Securitas has established a Responsible AI Framework. This framework provides governance for how we execute our broader AI strategy, while keeping our core values of integrity, vigilance, and helpfulness at the heart of all we do.
According to the framework, responsible AI rests on five pillars:
- Privacy and transparency: Trust, transparency, and integrity are important to us. We provide clear and accurate information about our AI solutions, and we respect privacy.
- Equality and fairness: We ensure that our AI solutions are inclusive and diverse. We evaluate the impact on society and the environment.
- Data quality and bias: We are vigilant, and we take steps to mitigate bias or harm. We work actively to ensure data used in key processes is of good quality.
- Safety and security: Systems are robust and secure to ensure responsible AI practices for our clients, employees, and partners. We help ensure a safer society.
- Innovation and business value: We combine human and computer intelligence. We harness the power of responsible AI to unlock innovation and tangible business value.
“Our Responsible AI Framework helps ensure our AI systems are transparent in their operations, accountable for their actions, and respectful of privacy and individual rights,” Daniel says. “Our goal is to foster trust and reliability in every AI solution we deploy, contributing positively to our clients and communities.”
A future built on responsible AI
AI will continue to play a central role in security innovation for the foreseeable future, but as an industry we must remember: It isn’t just about the tech, but about how we use it.
As regulations around AI evolve, a collective awakening is emerging within the industry. It's becoming clear that sustainable progress hinges on our responsible use of AI – ensuring that the technology we develop and deploy reflects a deep commitment to ethical standards.
Indeed, responsible AI is the key to unlocking a future in which we create smarter, more intuitive solutions that not only think but also care – about privacy, fairness, and their impact on the real world.