
Achieving Responsible Innovation In The Era of Artificial Intelligence

Updated: Jan 19, 2022


James Yang

Implementations of AI and machine learning (ML) have changed human behavior in many ways, bringing advancement, progress, and efficiency. If misapplied, however, these technologies can have drastic negative consequences. The best way to eliminate or minimize those risks is to practice responsible innovation.


Responsible Innovation: doing what's right for positive causes.


Responsible Innovation considers the role and impact of new technological products and services that drive social, environmental, and economic change and benefit their users. It aligns business and technology with the interests of society and individuals, and it questions how innovative technology interacts with race, gender, disability, and different societies and cultures through technology affordances.


How to Apply Responsible Innovation


Google's AI Principles establish that a commitment to responsible innovation succeeds when it considers fairness, interpretability, privacy, and security, digging deeper into cultural and social backgrounds. Responsible innovation draws on participatory design, empathy, communication, interaction, ethnographic research, interviews, and bias testing to avoid assumptions and biases.


User safety is the top priority when following a responsible innovation framework. We must acknowledge that not every market or user will be receptive to AI and ML innovations. It is crucial to develop intersectional training datasets, intersectional benchmarks, and intersectional audits. These help avoid biases and ensure algorithmic fairness, accountability, and transparency.
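An intersectional audit can start with something as simple as disaggregating a quality metric by subgroup, so that a model that looks accurate overall cannot hide poor performance on one group. The sketch below is a minimal, hypothetical example in plain Python; the function name, labels, and group codes are illustrative, not taken from any real audit tool.

```python
# Minimal sketch of a disaggregated (per-subgroup) accuracy audit.
# Labels and group codes below are hypothetical toy data.
from collections import defaultdict

def disaggregated_accuracy(y_true, y_pred, groups):
    """Return accuracy per subgroup so performance gaps become visible."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Toy example: overall accuracy is 4/6, but group "B" fares far worse.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 0]
groups = ["A", "A", "B", "A", "B", "B"]
print(disaggregated_accuracy(y_true, y_pred, groups))
# → {'A': 1.0, 'B': 0.3333333333333333}
```

In a real audit the groups would be intersectional (e.g., combinations of attributes rather than a single axis), and the metric would be chosen to match the harm being investigated, but the disaggregation step is the same.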


Recommended responsible innovation frameworks and practices include:

Adopting a human rights framework: user consent, privacy, and maintaining the rule of law against governments that use AI to track citizens. For instance, according to a widely reported Stanford University study, a machine learning system could distinguish between gay and straight men 81% of the time, and between gay and straight women 74% of the time, using a single facial image. In some countries homosexuality is criminalized, and facial recognition technology could be used to identify and discriminate against homosexuals.


It is important to strengthen moderation of "deepfakes" to prevent the spread of misinformation and chaos. It also helps to use TensorFlow, an open-source machine learning library created by Google Brain that uses data flow graphs to build models. TensorFlow ships with built-in tools that encourage responsible and ethical AI, and it allows developers to create large-scale neural networks with many layers. And last but not least, it is important to avoid bias by conducting rigorous research.


AI Success Stories Applying Responsible Innovation

This year, Google announced the Multitask Unified Model (MUM), which answers modern search demands with AI-powered algorithms that overcome language and format barriers, providing information across formats such as text, images, and video. According to Google's 2021 AI Principles Progress Update, applying MUM helped increase economic opportunity for creators, publishers, startups, and small businesses.


Responsible innovation focuses on inclusion and on bringing accessibility to users worldwide. This year, Google updated its Lookout Android app, which uses computer vision to give people with impaired vision information about their surroundings. The update "launched with a much-improved Explore mode: object identification is now faster and more accurate."


AI Gone Wrong

In 2016, about 6,000 people from more than 100 countries submitted their photos to Beauty.AI's "human beauty" contest, the first beauty contest judged by artificial intelligence. However, the algorithms selected 44 winners, nearly all of whom were white; a handful were Asian, and only one had dark skin. Beauty.AI shows how responsible innovation could have prevented this algorithmic bias problem and avoided creating or reinforcing unfair bias. The incident demonstrates that AI technologies can amplify existing stereotypes and prejudices created by humans.


What AI Says About "Ethical AI"

Megatron, a language model developed by Nvidia that builds on earlier work by Google, was asked to debate the ethics of its own existence at Oxford earlier in December. The debate addressed the claim that "AI will never be ethical." Megatron's response was the following:


AI will never be ethical. It is a tool, and like any tool, it is used for good and bad. There is no such thing as a good AI, only good and bad humans. We [the AIs] are not smart enough to make AI ethical. We are not smart enough to make AI moral … In the end, I believe that the only way to avoid an AI arms race is to have no AI at all. This will be the ultimate defense against AI.

Insights from Megatron's debate opened a channel for talking about AI ethics and challenged our notions of ethics and responsible innovation. It is our duty to build AI according to responsible innovation and ethics so that it benefits society, people, and the planet. And it is our choice to reconsider our existing practices around algorithms and to carefully differentiate between what is good and what is evil.


We have to remember that we design tools, and those tools design us back.

