Introduction

In the rapidly evolving landscape of Artificial Intelligence (AI), the distinction between "Good AI" and "Bad AI" has become critical, not just technologically but also ethically and legally. As government bodies adopt broad AI regulations and legal frameworks to govern these advances, organizations must prioritize privacy, security, compliance, and responsible development practices when embracing new AI technologies. The recent passage of the European Union Artificial Intelligence Act (EU AI Act) is poised to set a new global standard for the responsible adoption of AI. It also challenges businesses and organizations worldwide to distinguish between "Good AI", which prioritizes privacy and security, and "Bad AI", which is built on data exploitation, and to strike a balance between innovation and the ethics of progress. In this piece, we examine the concepts of Good AI and Bad AI from both a regulatory and a forward-thinking, pragmatic perspective.

Regulatory & Legislative

The European Parliament's recent approval of the EU AI Act marks a significant milestone in the global effort to ensure the safe and responsible development of AI technologies. The Act aims to protect citizens' rights, democracy, and environmental sustainability from the dangers posed by high-risk AI applications. It establishes obligations tailored to the risk and impact level of each AI system, with the goal of positioning Europe as a global leader in responsible AI innovation.

The Act applies to providers and developers of AI systems that are marketed or used within the EU, regardless of whether those providers or developers are established in the EU or in another country – such as Switzerland. It adopts a risk-based approach, categorizing AI systems into four tiers – unacceptable, high, limited, and minimal risk – according to the particular AI use case and its potential impact on people's rights and safety.
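
To make the tiering concrete, the short Python sketch below models the Act's four risk categories as a simple taxonomy. The tier names reflect the Act's risk-based structure, but the example systems and their assignments are purely illustrative assumptions of ours, not an official classification.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, from most to least restricted."""
    UNACCEPTABLE = "prohibited outright (e.g., social scoring)"
    HIGH = "permitted under strict obligations (e.g., CV screening)"
    LIMITED = "subject to transparency duties (e.g., chatbots)"
    MINIMAL = "largely unregulated (e.g., spam filters)"

# Hypothetical internal inventory mapping AI systems to tiers -- one way an
# organization might begin an AI Act readiness review.
ai_system_inventory = {
    "social_scoring_engine": RiskTier.UNACCEPTABLE,
    "cv_screening_assistant": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

for system, tier in ai_system_inventory.items():
    print(f"{system}: {tier.name} -- {tier.value}")
```

An inventory like this is only a starting point: under the Act, the classification of any given system ultimately depends on its specific use case and a proper legal assessment.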

The legislation introduces stringent bans on AI applications deemed harmful, such as biometric categorization systems based on sensitive characteristics, untargeted scraping of facial images, emotion recognition in workplaces and schools, social scoring, and predictive policing based solely on profiling. It also sets out narrow conditions for the use of biometric identification by law enforcement, and demands transparency and accuracy from high-risk AI systems.

Practical & Pragmatic

As organizations grapple with these new challenges, frameworks like Dataiku's RAFT (Reliable, Accountable, Fair, and Transparent) have emerged to provide a comprehensive corporate and R&D roadmap for building AI systems responsibly, addressing potential risks, and anticipating future regulatory developments.

The RAFT framework emphasizes the critical need for organizations to consider the role of accountability and governance in the use of AI systems, particularly given the rapid evolution and adoption of Generative AI and Large Language Models (LLMs). It stresses that the deployment and governance of AI must account for socio-technical dynamics, legal considerations, and emerging issues such as privacy and copyright infringement. The goal of this proactive stance is to consolidate the emerging consensus around the technology and give businesses and research institutions a forward-looking way to begin preparing their organizations even while the impact of future legislation remains uncertain.
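
As a rough illustration of how the four RAFT principles might translate into practice, the sketch below encodes each principle as a pre-deployment check. The questions and the RaftReview class are our own hypothetical examples of RAFT-style criteria, not Dataiku's published checklist.

```python
from dataclasses import dataclass, field

@dataclass
class RaftReview:
    """Hypothetical pre-deployment review built around the four RAFT principles."""
    system_name: str
    answers: dict = field(default_factory=dict)

    # Example questions per principle; a real review would be far more detailed.
    CHECKS = {
        "Reliable": "Has the model been tested against out-of-distribution inputs?",
        "Accountable": "Is there a named owner responsible for this system's outputs?",
        "Fair": "Have outputs been audited for bias across affected user groups?",
        "Transparent": "Can users tell when they are interacting with AI?",
    }

    def record(self, principle: str, passed: bool) -> None:
        self.answers[principle] = passed

    def ready_to_deploy(self) -> bool:
        """Deployment requires an explicit sign-off on every principle."""
        return all(self.answers.get(p, False) for p in self.CHECKS)

review = RaftReview("quality_management_assistant")
for principle, question in RaftReview.CHECKS.items():
    print(f"[{principle}] {question}")
    review.record(principle, passed=True)  # sign-off would come from a human reviewer
print("Ready to deploy:", review.ready_to_deploy())
```

The design point is that deployment stays blocked until every principle has an explicit human sign-off, keeping accountability with people rather than with the model.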

Misuse of Generative AI poses specific risks – toxicity, polarization, discrimination, over-reliance on AI, and disinformation – alongside data privacy, model security, and copyright concerns. These risks can manifest across different types of AI technology and vary by use case.

For example, at Spitch.ai we take special care to weigh both the necessity and the implications of Generative AI tools before incorporating them into existing services or using them within our own organization. When integrating these tools into our Contact Center solutions, such as Quality Management, the focus is on responsibly improving the Agent experience, reducing stress, and streamlining customer interactions. We believe it is paramount that humans remain in the driver's seat.

As organizations navigate the responsible adoption of AI, they must consider the audience for AI model outputs – whether business users, consumers, or private individuals – and meet core deployment criteria that further the goals of reliability and transparency: we need to build Good AI. The potential impacts of AI systems should be evaluated in terms of their direct and indirect effects on individuals and groups, whether those impacts are immediate or unfold over time.

We need to ensure that our solutions neither intentionally nor indirectly produce outcomes that systematically harvest data, or introduce bias, unnecessary polarization, or misinformation into personal interactions and public discourse. It is incumbent upon us to build Good AI.

Conclusion

We are at the early stages of understanding the full risk potential of Generative AI. Organizations need to embrace ongoing adaptation and refinement of responsible AI practices in response to the challenges posed by this evolving technology. At Spitch, we are committed to prioritizing privacy, security, compliance, and responsible development practices. By adopting and developing our own forward-thinking framework, we aim to harness the power of Generative AI and the many new opportunities it will continue to provide, while mitigating the risks associated with "Bad AI" and aligning with emerging global standards for responsible AI innovation.

Acknowledgment: This work is the result of multiple iterations of review, synthesis, and analysis between human authors and Generative AI tools.
