Source: Hot Air
Back in 2018, the parent company of Madison Square Garden and Radio City Music Hall in New York City (MSG) began using facial recognition software tied into its security cameras at those venues. The company claimed the system would help it identify suspected domestic terrorists and other security threats. At the time, that sort of made sense, because those are both large arenas that pack in a lot of people, so either could be a tempting terror target. But since then, it has come to light that MSG has been identifying and barring entry to far more people than just ISIS wannabes. The company has singled out attorneys from every law firm representing plaintiffs who have brought lawsuits against the venues. Others have reportedly been locked out as well. Now New York’s Attorney General has sent MSG a warning letter, and the company may have a date in court with her before long. (Regulatory Oversight)
In January 2023, New York Attorney General Letitia James sent a letter to Madison Square Garden Entertainment Corporation (MSG), seeking information about its use of facial recognition technology to prohibit ticketholders from entering its venues, such as Madison Square Garden and Radio City Music Hall. New York’s biometric identifier law requires companies using facial recognition technology to disclose that use to consumers. Madison Square Garden started using this technology in 2018 to identify security threats — a practice that had already drawn scrutiny.
The inquiry followed reports that MSG used facial recognition software to identify “lawyers in all law firms representing clients engaged in any litigation against MSG” and deny them entry to MSG’s venues before they even presented their ticket(s). According to the AG’s letter, MSG’s policy impacted approximately 90 law firms.
That was a pretty bold move on MSG’s part, you have to admit. The company isn’t even denying it did this, either. It originally claimed that the ban was a legal strategy, employed because the attorneys might have been able to “gather evidence to support their lawsuits while inside.” That explanation seems to be landing with a thud inside the AG’s office.
But the story gets worse. The targeting wasn’t limited to the law firms seeking judgments against the company. MSG has reportedly also refused entry to people who commented negatively about it on social media. So this obviously wasn’t just some legal strategy or a glitch in the software. If you tweeted something negative about either of the venues, someone at MSG would have had to track down your identity, find a picture of you, feed it into the system, and then let the facial recognition software put you on the “do not admit” list.
New York already has a digital privacy law in place that requires private-sector companies to inform the public when facial recognition software is being used. Companies also have to comply with other provisions that prevent the software from being used for discriminatory purposes. MSG appears to have trampled on those laws on several occasions.
As regular readers are probably aware, I’ve long been a supporter of using facial recognition software to solve more crimes quickly. But that’s when it’s in the hands of law enforcement officials. Concerns that police might use the software to target people based on their race, or that a software error might lead to a wrongful conviction, have never materialized as far as I’m aware. But MSG is not a law enforcement agency. It is not in the business of identifying criminals. And people who may be involved in a lawsuit against the company, or who posted something mean on Twitter, are not criminals either.
When used responsibly, facial recognition software can be a positive tool for law enforcement. When placed in the wrong hands and used to carry out personal vendettas, it is clearly open to abuse, and that complicates the entire public debate over the use of this technology. I suspect MSG will wind up paying a steep penalty for this and will probably be forced to discontinue these practices.