The co-founder of TechNovelle, Rianat Abbas, has raised the alarm over rising risks of unchecked AI systems in critical sectors of the economy.
Speaking on Sunday at the HackTheFest 2025 AI Bias Bounty Hackathon, a virtual flagship programme, Abbas stressed the need for a bold new initiative to address one of AI’s most urgent and unregulated threats, which she described as systemic bias.
She highlighted the growing risks of unchecked AI systems and pushed for a community-led response grounded in governance, transparency, and security.
She explained, “With the rise of AI in critical sectors like health, finance, and manufacturing, it is no news that risk is growing faster than the governance we have in place.
“These systems are becoming more accessible to cyber threats, both from inside and outside.”
Abbas also unveiled a global, community-driven framework designed to help teams identify and document risks, strengthen security, and mitigate bias in machine learning systems.
The event drew more than 300 participants across 22 countries, including teams from Nigeria, the United States, Canada, India, the United Kingdom, Kenya, Brazil, and Germany.
“For us to build secure and well-governed AI models that actually work for critical sectors, we need a standardised structure that helps teams identify risks, like bias in AI models, and follow through with clear reporting and mitigation,” she explained.
The two-day virtual event brought together emerging talent, researchers, and tech professionals, alongside AI/ML engineers, cybersecurity engineers, and business leaders from global tech companies serving as reviewers and keynote speakers. These included Yetunde Adekoya, quantitative risk analyst, Citibank; Madhu Ramanathan, principal group engineering manager, Microsoft; Rajesh Sura, head of data engineering and analytics, Amazon; Harpreet Singh, executive director, JPMorgan Chase & Co.; Jenna Cavelle, founder, One Woman Show AI; and Santosh Bompally, information security executive, Humana.
Using the framework, participants identified problems in AI models, such as disparities in scores between different groups.
Abbas said the winning teams proposed bias detection pipelines, bias-aware model auditing tools, and interactive dashboards that could flag disparities in real time.
Final reports submitted through the Risk Intelligence Framework are being curated into a public repository for research and benchmarking.
The AI Risk Intelligence Framework will be released publicly later this year. It will include templates, a glossary of risk types, and examples from the hackathon.
Abbas noted that the goal is to support a wide range of teams, from researchers to engineering managers.
At the close of the event, she said she is in conversations with collaborators across sectors to help teams adopt the framework and adapt it to their environments. She emphasised that clear reporting is the foundation of strong AI governance and that teams should equip themselves with tools that meet this requirement.
