Five of the world's largest technology companies have come together to shed light on the ongoing development of artificial intelligence. The entities involved are Facebook, Google and its subsidiary DeepMind, Amazon, Microsoft, and IBM, and the group's new nonprofit organisation will grow to include a number of AI research groups and academics.
Announced on Wednesday, the coalition will be known as the Partnership on AI and it will be co-chaired by Microsoft Research chief Eric Horvitz and DeepMind co-founder Mustafa Suleyman. Apple is in talks with the group, but has not yet decided to join the organisation.
The partnership has two main focuses. The first will be educating the public about AI, especially as the technology becomes more deeply embedded in everyday life, threatening to automate human labour and take over complex tasks like driving vehicles. The second is to ensure some of the industry's biggest players are collaborating on AI best practices and receiving input from non-corporate experts. Those include philosophers and ethicists dedicated to complex questions about machine intelligence in life-threatening situations like health care and transportation.
"The reason we all work on AI is because we passionately believe in its ability to transform our world," Suleyman said in a conference call with the media. "The positive impact of AI will depend not only on the quality of our algorithms, but on the amount of public discussion ... to ensure AI is understood by and benefits as many people as possible."
Horvitz says recent hyperbole concerning AI's threats amounts to an "echo chamber of anxiety", and that this is one factor that pushed him and his colleagues to form the nonprofit. Also looming is the possibility of government regulation that could slow down AI development.
"Questions come up about the transparency of our systems and the ability for us to explain ourselves," Horvitz said, highlighting the possibility of human bias finding its way into algorithms. "The best way forward is with an inclusiveness and open dialogue." He added that the partnership is not opposed to including government bodies or working with public agencies.
Horvitz and Suleyman are inviting any tech company, research group, or nonprofit to join the partnership, and leadership will be shared equally among both corporate founding members and non-corporate participants. In the coming months, the group expects to announce these new members, and it has already begun discussions with organisations like the Elon Musk-backed nonprofit OpenAI and the Association for the Advancement of Artificial Intelligence.
While details on what the partnership will look like are sparse, Horvitz says the group plans to begin holding meetings very soon and publishing details of those get-togethers online.
In a list of tenets posted to the Partnership on AI's new website, the group outlined eight main principles to help guide its discussions. Those include obvious goodwill measures like "we will educate and listen to the public" and "we strive to create a culture of cooperation, openness, and trust". Others openly acknowledge the tricky nature of working on such transformative technology without the necessary input of more public-minded experts.
"We are committed to open research and dialogue on the ethical, social, economic, and legal implications of AI," reads the second tenet.
The partnership cannot and won't try to police AI research. However, some of its tenets appear to veer into the territory of self-regulation. The sixth item on the list includes commitments such as protecting user privacy and "opposing development and use of AI technologies that would violate international conventions on human rights, and promoting safeguards and technologies that do no harm".
This would appear to include automated weapons systems and surveillance operations, yet it's unclear how the coalition would compel members not to work on government contracts with defence agencies or contribute to AI-based policing technologies.
When pressed on whether the partnership seeks to self-govern the field, Horvitz, Suleyman, and other representatives were quick to point out that the group is in no way a regulatory body. It is also incapable of policing how the world decides to develop AI systems, even with much of the research open source and available for anyone to use.
Instead, the group hopes to set an example the industry will follow. "We're really not here to serve those kinds of functions," Suleyman said. "We're here to learn from one another about things that are working well and not working well, and to be open about the areas of work we're struggling on."