Researchers at the Montreal AI Ethics Institute and Microsoft propose using machine learning to build comprehensive archives that could bridge gaps in cultural understanding, knowledge, and views. They assert that including others' voices in archival processes, with the help of machine learning, can have positive ramifications for communities, particularly those that archivists have historically marginalized.
People rarely mistrust centralized, accessible directories of records, even when those records contain explicit or implicit biases. For instance, ten years after South Africa ended decades of white rule and racial segregation under apartheid, books used in the country's schools still did not reflect the history of what marginalized people experienced. Yet archives play a crucial role in the advancement of human society; we depend on them to craft public policies and to preserve languages, cultures, self-identity, views, and values.
The coauthors of the study sought to explore how technology like AI can address challenges around community databases and archives and amplify their usefulness. They began by identifying areas where current archival practices fall short in serving underserved populations, finding that indigenous peoples, women, children, LGBTQIA2+ people, senior citizens, victims of genocides, racial minorities, cultural minorities, military veterans, and disabled populations are often overlooked by archival tools and historians.
“Vocal minorities continue to be less discoverable online and in part due to skews in the automated archiving process towards a biased and narrow subset of content creators who know how to gamify online algorithms and increase their content’s visibility online,” the researchers wrote. “This skew in content discoverability has dramatic implications for what the systems identify [as] high-value archives.”
This is where thoughtfully applied AI comes into play. The coauthors say it stands to maximize the diversity of viewpoints within archives, for example by scouring for content beyond what indexes well on the internet and by enhancing the discoverability of low-visibility communities that self-document. AI chatbots could interact with knowledge seekers to bolster their ability to discover less-obvious, relevant artifacts, while at the same time helping them develop better digital literacy skills and exposing them to diverse historical perspectives.
It's worth noting that the coauthors don't address the potential for bias within these systems themselves. In the nonprofit Partnership on AI's first-ever research report last April, its authors characterized AI now in use as unfit to automate the pretrial bail process, label some people as high risk, or declare others low risk and fit for release from prison. Other ill-fated experiments to predict outcomes like GPA, grit, eviction, job training, layoffs, and material hardship reveal the prejudicial nature of AI algorithms; a recent study that attempted to use AI to predict which college students might fail physics classes found that accuracy tended to be lower for women.
Despite this, the researchers maintain a positive view of AI and its potential to “provide a fuller picture” to those seeking to build better understandings of cultures.
“Benefits of higher discoverability do not only accrue to marginalized communities; they also create positive knock-on effects for others who gain a better understanding of these cultures and are thus able to truly appreciate our shared cultural heritage in its entirety,” the coauthors wrote. “On the subject of comprehensiveness, collation of content from automated systems will enhance the available corpus in the archives … We find that modern AI-enabled approaches can create wider participation in shaping our shared cultural heritage while empowering minorities to have greater control over knowledge and artifacts that serve to represent their past and shape their present and future identities.”