The tech industry can’t agree on what open source AI means. That’s a problem.

Ultimately, the community needs to decide what it’s trying to achieve, says Zacchiroli: “Are you just following where the market is going so that they don’t essentially co-opt the term ‘open-source AI,’ or are you trying to pull the market toward being more open and providing more freedoms to the users?”

What’s the point of open source?

It’s debatable how much any definition of open-source AI will level the playing field anyway, says Sarah Myers West, co–executive director of the AI Now Institute. She coauthored a paper published in August 2023 exposing the lack of openness in many open-source AI projects. But it also highlighted that the vast amounts of data and computing power needed to train cutting-edge AI create deeper structural barriers for smaller players, no matter how open models are.

Myers West thinks there’s also a lack of clarity regarding what people hope to achieve by making AI open source. “Is it safety, is it the ability to conduct academic research, is it trying to foster greater competition?” she asks. “We need to be way more precise about what the goal is, and then how opening up a system changes the pursuit of that goal.”

The OSI seems keen to avoid those conversations. The draft definition mentions autonomy and transparency as key benefits, but Maffulli demurred when pressed to explain why the OSI values those concepts. The document also contains a section labeled “out of scope issues” that makes clear the definition won’t wade into questions around “ethical, trustworthy, or responsible” AI.

Maffulli says historically the open-source community has focused on enabling the frictionless sharing of software and avoided getting bogged down in debates about what that software should be used for. “It’s not our job,” he says.

But those questions can’t be dismissed, says Warso, no matter how hard people have tried over the decades. The idea that technology is neutral and that topics like ethics are “out of scope” is a myth, she adds. She suspects it’s a myth that needs to be upheld to prevent the open-source community’s loose coalition from fracturing. “I think people realize it’s not real [the myth], but we need this to move forward,” says Warso.

Beyond the OSI, others have taken a different approach. In 2022, a group of researchers introduced Responsible AI Licenses (RAIL), which are similar to open-source licenses but include clauses that can restrict specific use cases. The goal, says Danish Contractor, an AI researcher who co-created the license, is to let developers prevent their work from being used for things they consider inappropriate or unethical.

“As a researcher, I would hate for my stuff to be used in ways that would be detrimental,” he says. And he’s not alone: a recent analysis he and his colleagues conducted on AI startup Hugging Face’s popular model-hosting platform found that 28% of models use RAIL.
