Introduction to Responsible AI

Google Cloud
At Google, we take responsible AI seriously. Discover why responsible AI matters more than ever, and...
Video Transcript:
[Music] AI is being discussed a lot, but what does it mean to use AI responsibly? Not sure? That's great; that's what I'm here for. I'm Manny, and I'm a security engineer at Google. I'm going to teach you how to: understand why Google has put AI principles in place; identify the need for responsible AI practice within an organization; recognize that responsible AI affects all decisions made at all stages of a project; and recognize that organizations can design their AI tools to fit their own business needs and values. Sounds good? Let's get into it.

You might not realize it, but many of us already have daily interactions with artificial intelligence, or AI, from predictions for traffic and weather to recommendations for TV shows you might like to watch next. As AI becomes more common, many technologies that aren't AI-enabled start to seem inadequate, like having a phone that can't access the internet. Now, AI systems are enabling computers to see, understand, and interact with the world in ways that were unimaginable just a decade ago, and these systems are developing at an extraordinary pace.

What we've got to remember, though, is that despite these remarkable advancements, AI is not infallible. Developing responsible AI requires an understanding of the possible issues, limitations, or unintended consequences. Technology is a reflection of what exists in society, so without good practices, AI may replicate existing issues or bias and amplify them.

This is where things get tricky, because there isn't a universal definition of responsible AI, nor is there a simple checklist or formula that defines how responsible AI practices should be implemented. Instead, organizations are developing their own AI principles that reflect their mission and values. Luckily for us, though, while these principles are unique to every organization, if you look for common themes, you find a consistent set of ideas across transparency, fairness, accountability, and privacy.

Let's get into how we view things at Google. Our approach to responsible AI is rooted in a commitment to strive towards AI that's built for everyone, that's accountable and safe, that respects privacy, and that is driven by scientific excellence. We've developed our own AI principles, practices, governance processes, and tools that together embody our values and guide our approach to responsible AI. We've incorporated responsibility by design into our products and, even more importantly, our organization. Like many companies, we use our AI principles as a framework to guide responsible decision-making.

We all have a role to play in how responsible AI is applied. Whatever stage in the AI process you're involved with, from design to deployment or application, the decisions you make have an impact, and that's why it's so important that you, too, have a defined and repeatable process for using AI responsibly.

There's a common misconception with artificial intelligence that machines play the central decision-making role. In reality, it's people who design and build these machines and decide how they're used. Let me explain. People are involved in each aspect of AI development. They collect or create the data that the model is trained on. They control the deployment of the AI and how it's applied in a given context. Essentially, human decisions are threaded throughout our technology products, and every time a person makes a decision, they're actually making a choice based on their own values. Whether it's a decision to use generative AI to solve a problem as opposed to other methods, or anywhere else throughout the machine learning life cycle, that person introduces their own set of values. This means that every decision point requires consideration and evaluation to ensure that choices have been made responsibly, from concept through deployment and maintenance.

Because there's a potential to impact many areas of society, not to mention people's daily lives, it's important to develop these technologies with ethics in mind. Responsible AI doesn't mean focusing only on the obviously controversial use cases. Without responsible AI practices, even seemingly innocuous AI use cases, or those with good intent, could still cause ethical issues or unintended outcomes, or not be as beneficial as they could be. Ethics and responsibility are important not just because they represent the right thing to do, but also because they can guide AI design to be more beneficial for people's lives.

So how does this relate to Google? We've learned that building responsibility into any AI deployment makes better models and builds trust with our customers and our customers' customers. If at any point that trust is broken, we run the risk of AI deployments being stalled, unsuccessful, or, at worst, harmful to the stakeholders those products affect. Tying it all together, this all fits into our belief at Google that responsible AI equals successful AI.

We make our product and business decisions around AI through a series of assessments and reviews. These instill rigor and consistency in our approach across product areas and geographies. These assessments and reviews begin with ensuring that any project aligns with our AI principles. While AI principles help ground a group in shared commitments, not everyone will agree with every decision made about how products should be designed responsibly. This is why it's important to develop robust processes that people can trust, so even if they don't agree with the end decision, they trust the process that drove the decision.

So we've talked a lot about just how important guiding principles are for AI in theory, but what are they in practice? Let's get into it. In June 2018, we announced seven AI principles to guide our work. These are concrete standards that actively govern our research and product development and affect our business decisions. Here's an overview of each one.

One: AI should be socially beneficial. Any project should take into account a broad range of social and economic factors, and will proceed only where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.

Two: AI should avoid creating or reinforcing unfair bias. We seek to avoid unjust effects on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

Three: AI should be built and tested for safety. We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.

Four: AI should be accountable to people. We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal.

Five: AI should incorporate privacy design principles. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

Six: AI should uphold high standards of scientific excellence. We'll work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches, and we will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.

Seven: AI should be made available for uses that accord with these principles. Many technologies have multiple uses, so we'll work to limit potentially harmful or abusive applications.

So those are the seven principles we have. But in addition to these seven principles, there are certain AI applications we will not pursue. We will not design or deploy AI in these four application areas: technologies that cause or are likely to cause overall harm; weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people; technologies that gather or use information for surveillance that violates internationally accepted norms; and technologies whose purpose contravenes widely accepted principles of international law and human rights.

Establishing principles was a starting point rather than an end. What remains true is that our AI principles rarely give us direct answers to our questions about how to build our products. They don't, and shouldn't, allow us to sidestep hard conversations. They are a foundation that establishes what we stand for, what we build, and why we build it, and they're core to the success of our enterprise AI offerings.

Thanks for watching, and if you want to learn more about AI, make sure to check out our other videos. [Music]