Researchers propose framework to measure AI’s social and environmental impact

12/06/2020

In a newly published paper on the preprint server arXiv.org, researchers at the Montreal AI Ethics Institute, McGill University, Carnegie Mellon, and Microsoft propose a four-pillar framework called SECure designed to quantify the environmental and social impact of AI. Through techniques like compute-efficient machine learning, federated learning, and data sovereignty, the coauthors assert that scientists and practitioners have the power to cut AI’s contribution to the carbon footprint while restoring trust in historically opaque systems.

Sustainability, privacy, and transparency remain underaddressed and unsolved challenges in AI. In June 2019, researchers at the University of Massachusetts at Amherst released a study estimating that the power required to train and architecture-search a single large model can involve the emission of roughly 626,000 pounds of carbon dioxide, nearly 5 times the lifetime emissions of the average U.S. car. Partnerships like those pursued by DeepMind and the U.K.’s National Health Service conceal the true nature of AI systems being developed and piloted. And sensitive AI training data often leaks onto the public web, usually without stakeholders’ knowledge.

SECure’s first pillar, then — compute-efficient machine learning — aims to lower the computational burdens that typically make access inequitable for researchers who aren’t affiliated with organizations that have heavy compute and data processing infrastructure. It proposes creating a standardized metric that could be used to make quantified comparisons across hardware and software configurations, allowing people to make informed decisions in choosing one system over another.
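
The paper calls for such a metric without prescribing a formula, so the following is only an illustrative sketch of what a quantified comparison might look like: carbon emitted per unit of benchmark quality, computed from hypothetical measurements of energy draw, grid carbon intensity, and model accuracy. All names and figures below are assumptions for illustration, not values from the paper.

```python
# Illustrative sketch only: SECure proposes a standardized compute-efficiency
# metric but does not prescribe this formula. All inputs are hypothetical.

from dataclasses import dataclass

@dataclass
class TrainingRun:
    name: str                 # hardware/software configuration being compared
    energy_kwh: float         # measured energy drawn during training
    carbon_intensity: float   # grid emissions in kg CO2e per kWh
    accuracy: float           # benchmark score achieved by the trained model

    @property
    def emissions_kg(self) -> float:
        return self.energy_kwh * self.carbon_intensity

    @property
    def kg_co2e_per_accuracy_point(self) -> float:
        # Lower is better: emissions paid for each point of benchmark accuracy.
        return self.emissions_kg / self.accuracy

runs = [
    TrainingRun("GPU cluster, coal-heavy grid", energy_kwh=1200.0,
                carbon_intensity=0.9, accuracy=92.1),
    TrainingRun("TPU pod, hydro-powered region", energy_kwh=950.0,
                carbon_intensity=0.02, accuracy=91.8),
]

for run in sorted(runs, key=lambda r: r.kg_co2e_per_accuracy_point):
    print(f"{run.name}: {run.kg_co2e_per_accuracy_point:.3f} kg CO2e per accuracy point")
```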

The second pillar of SECure proposes the use of federated learning approaches as a mechanism to perform on-device training and inferencing of machine learning models. (In this context, federated learning refers to training an AI algorithm across decentralized devices or servers holding data samples without exchanging those samples, enabling multiple parties to build a model without openly sharing their data.) As the coauthors note, federated learning can decrease carbon impact if computations are performed where electricity is produced using clean sources. As a second-order benefit, it mitigates the risks and harms that arise from data centralization, including data breaches and privacy intrusions.
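
The article treats federated learning as an existing technique rather than describing a specific algorithm. The toy sketch below, using synthetic data and a standard federated-averaging loop, is only meant to illustrate the property the coauthors rely on: each client fits the model on its own local data, and only the resulting weights, never the data samples, leave the device.

```python
# Toy sketch of federated averaging: clients train locally and share only
# model weights, never their raw data. Data and hyperparameters are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Each "client" holds its own private dataset; it is never pooled centrally.
clients = []
for _ in range(5):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

def local_update(w, X, y, lr=0.1, epochs=5):
    """Run a few gradient-descent steps on one client's local data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

global_w = np.zeros(2)
for _ in range(10):
    # Each client starts from the current global model and trains locally.
    local_weights = [local_update(global_w.copy(), X, y) for X, y in clients]
    # The server averages the weights; only parameters cross the network.
    global_w = np.mean(local_weights, axis=0)

print("learned weights:", global_w)  # should approach [2.0, -1.0]
```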


SECure’s third pillar — data sovereignty — refers to the idea of strong data ownership: affording individuals control over how their data is used, for what purposes, and for how long. It also allows users to withdraw consent if they see fit, while respecting differing norms of ownership that are typically ignored in discussions around diversity and inclusion as they relate to AI. The coauthors point out, for example, that some indigenous perspectives on data require that it be maintained on indigenous land, or used and processed in ways consistent with certain values.

“In the domain of machine learning, especially where large data sets are pooled from numerous users, the withdrawal of consent presents a major challenge,” wrote the researchers. “Specifically, there are no clear mechanisms today that allow for the removal of data traces or of the impacts of data related to a user … without requiring a retraining of the system.”
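
To make the cost the researchers describe concrete, here is a minimal sketch of the only straightforward mechanism currently available: drop the withdrawn users’ records and retrain from scratch. The consent registry, synthetic data, and choice of scikit-learn’s LogisticRegression as a stand-in model are all assumptions for illustration.

```python
# Illustrative sketch of the problem the researchers describe: honoring a
# withdrawal of consent by dropping the user's records and retraining from
# scratch. The dataset, model, and consent registry here are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
records = [(f"user_{i}", rng.normal(size=4), int(rng.integers(0, 2)))
           for i in range(1000)]
withdrawn = {"user_42", "user_314"}   # users who revoked consent

def train(records, withdrawn):
    kept = [(x, y) for uid, x, y in records if uid not in withdrawn]
    X = np.stack([x for x, _ in kept])
    y = np.array([y for _, y in kept])
    return LogisticRegression().fit(X, y)

# Every new withdrawal forces a full retraining pass over the remaining data,
# which is exactly the cost the paper flags as a major open challenge.
model = train(records, withdrawn)
print("retrained on", len(records) - len(withdrawn), "records")
```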

The last pillar of SECure — LEED-esque certification — draws inspiration from the Leadership in Energy and Environmental Design (LEED) green-building program. The researchers propose a certification process that’d provide metrics allowing users to assess the state of an AI system in comparison with others, including measures of the cost of data tasks and custom workflows (in terms of storage and compute power). It’d be semi-automated to reduce administrative costs, with the tools that enable organizations to become compliant developed and made available as open source. And it’d be intelligible to a wide group of people, informed by a survey designed to determine what information users seek from certifications and how it can best be conveyed.
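
The paper proposes the certification process but does not specify its metrics, weights, or thresholds. The sketch below is purely hypothetical, showing one way a LEED-style tiered grade could be derived from measured storage, compute, and carbon costs normalized against baseline figures; every number and tier name is an assumption.

```python
# Purely illustrative: a LEED-style tiered grade aggregated from measured
# costs. SECure proposes the certification process, not these metrics,
# weights, baselines, or thresholds; all of them are hypothetical.

def certify(storage_gb: float, compute_kwh: float, carbon_kg: float) -> str:
    # Normalize each measured cost against a hypothetical industry baseline.
    baseline = {"storage_gb": 500.0, "compute_kwh": 1000.0, "carbon_kg": 300.0}
    score = (
        0.3 * min(storage_gb / baseline["storage_gb"], 2.0)
        + 0.3 * min(compute_kwh / baseline["compute_kwh"], 2.0)
        + 0.4 * min(carbon_kg / baseline["carbon_kg"], 2.0)
    )
    # Lower scores mean lower cost relative to baseline and a better tier.
    if score < 0.5:
        return "platinum"
    if score < 1.0:
        return "gold"
    if score < 1.5:
        return "silver"
    return "certified"

print(certify(storage_gb=120, compute_kwh=400, carbon_kg=60))  # -> "platinum"
```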

The researchers believe that if SECure were deployed at scale, it’d create the impetus for consumers, academics, and investors to demand more transparency on the social and environmental impacts of AI. They’d be able to use their purchasing power to steer the direction of technological progress, ideally in a way that accounts for those two impacts. “Responsible AI investment, akin to impact investing, will be easier with a mechanism that allows for standardized comparisons across various solutions, which SECure is perfectly geared toward,” the coauthors wrote. “From a broad perspective, this project lends itself well to future recommendations in terms of public policy.”

The trick is adoption, of course. SECure competes with Responsible AI Licenses (RAIL), a set of end-user and source code license agreements with clauses restricting the use, reproduction, and distribution of potentially harmful AI technology. IBM has separately proposed voluntary factsheets that would be completed and published by companies that develop and provide AI, with the goal of increasing the transparency of their services.
