Editor’s note: This article is translated into Spanish on page 8.

Last month, four Las Vegas teenagers attended a Nevada Senate Committee on Commerce and Labor meeting to support a proposed bill that would require social media platforms to verify the ages of teenage users and limit certain features for those accounts during school hours.

During public comment, the students from the East Las Vegas charter school Equipo Academy argued that social media sites like Facebook and Instagram provided ample opportunity for young people to acquire drugs and firearms, organize fights and bully their peers.

“They prioritize their own interests and profits over the well-being of their users. They allow anything to happen on their site because they want our attention and our time, no matter the cost,” student Eduardo Guillen said.

The question of how legislators can best act to keep minors safe on the internet has persisted at both the state and federal levels for years, but the more recent emergence of artificial intelligence technology has further intensified those debates.

In Carson City, Nevada’s political leaders have found some common ground on a handful of legislative efforts designed to address some of their shared concerns.

Social media restrictions

Nevada Attorney General Aaron Ford sued social media companies Meta, Snapchat and TikTok in early 2024 over “the impact their algorithms were having on our youth(s).” The case, which Ford says is in litigation, gave rise to “ideas about other ways that we could protect children in our state,” he said.

Ford’s ruminations led him to file Senate Bill 63, or the Nevada Youth Online Safety Act. If passed, it would require social media platforms operating in the state to establish an age verification system, obtain parental consent before minors can create an account and limit certain features like notifications during hours that are “typically reserved for sleep or school,” among other restrictions.

The bill was initially scheduled for a hearing at the aforementioned March 19 committee meeting, but Ford had it pulled from the agenda so he could continue working on it. He says he wants to ensure that the final version addresses a “two-pronged goal” of “cutting down on predation” and addressing mental health concerns associated with youth social media use. A 2023 report from the U.S. surgeon general cites a study finding that “adolescents who spent more than three hours per day on social media faced double the risk of experiencing poor mental health outcomes.”

One aspect that Ford is still deliberating involves the method through which social media companies will verify user ages. Alex Ambrose, a policy analyst with the Information Technology and Innovation Foundation, says requiring a photo ID or utilizing AI-assisted facial recognition programs for this purpose may be problematic.

“Many people do not have a government-issued ID. And that’s not to mention the privacy concerns that could come with giving up that information to a private entity. So, there’s sort of a sliding scale of concerns with age verification tools and there hasn’t really been a solution that seems to be the most effective quite yet,” Ambrose says.

Utah lawmakers may have found a compromise when they passed a first-of-its-kind bill March 26 putting the onus of age verification on the Google and Apple app stores.

The work-around could provide a roadmap for Nevada and other states, like Indiana and Nebraska, looking at similar proposals.

“We’ve looked at outcomes of litigation that have been waged against some of these actions to ensure that we could formulate a bill that would withstand constitutional muster. Whatever we pass out of this Legislature will withstand judicial scrutiny. I’m confident of that,” Ford said.

AI-generated child pornography

Nevada legislators are also looking to crack down on the emerging threat of AI-generated child pornography. Last year, John Shehan of the National Center for Missing & Exploited Children told members of the U.S. House of Representatives that the issue represented a “new juncture in the evolution of child sexual exploitation.”

Senate Bill 263 and Assembly Bill 35 both look to criminalize that type of content as if it were legitimate child sexual abuse material (CSAM). SB 263 is spearheaded by Democratic state Sen. Nicole Cannizzaro of Las Vegas, while Ford proposed AB 35.

If passed, the bills would criminalize the creation or possession of AI-generated child pornography that’s “virtually indistinguishable” from real CSAM, which is already illegal.

In some cases, perpetrators utilize existing child pornography to create AI composite images based on one or several real-life victims, but others are completely computer-generated.

Cannizzaro’s definition includes any content that “an ordinary person” would “conclude is of an actual minor.” Under her bill, first-time offenders would face up to 15 years in state prison, while additional convictions could net them a life sentence.

She drew bipartisan support from state Sen. Ira Hansen, R-Reno, during a March 19 Senate Judiciary Committee hearing. But critics like Paloma Guerrero of the Clark County Public Defender’s Office say it may not withstand constitutional and free speech challenges.

“We’re not up here saying ‘don’t do this.’ All we’re saying is that the bill needs clearer, better language, because it can all be left up to discretion,” Guerrero told the committee.

Cannizzaro and Ford disagreed. Ford called the bills “thoroughly researched” and “ready to withstand any constitutional scrutiny that might arise out of the passage of the legislation.”

With regard to constitutional concerns, Ambrose says those “may not be as valid.”

“At the core, we’re still talking about illegal content,” she says.

Hansen expects many of his Republican peers to join him in support.

“I guarantee you everyone from the governor on down wants to have this addressed,” he told the committee. “This is a very good bill — maybe not perfect — but we need to move this idea forward.”

tyler.schneider@gmgvegas.com / @DatSchneids