
Using facial images harvested from social media, A.I. facial recognition tech is being ramped up by the military, federal agencies, and local police departments


KILZ FILLZ


 

 

 

https://www.cnn.com/2021/03/09/tech/clearview-ai-mijente-lawsuit/index.html

 

Clearview AI sued in California by immigrant rights groups, activists

 

Clearview AI, the controversial firm behind facial-recognition software used by law enforcement, is being sued in California by two immigrants' rights groups to stop the company's surveillance technology from proliferating in the state.

 

The complaint, which was filed Tuesday in California Superior Court in Alameda County, alleges Clearview AI's software is still used by state and federal law enforcement to identify individuals even though several California cities have banned government use of facial recognition technology.

 

The lawsuit was filed by Mijente, NorCal Resist, and four individuals who identify as political activists. The suit alleges Clearview AI's database of images violates the privacy rights of people in California broadly and that the company's "mass surveillance technology disproportionately harms immigrants and communities of color."

 

Sejal Zota, legal director at Just Futures Law and a lawyer for the parties who brought the suit, told CNN Business that the plaintiffs seek an injunction to prevent Clearview AI's technology from being used in California, along with the deletion of the face scans of Californians that the company has collected.

 

Founded in 2017, Clearview AI compiles billions of photos into a database for its software, which can use these images to identify individual people. The company has claimed to have scraped over 3 billion photos from the internet, including photos from popular social media platforms like Facebook, Instagram, Twitter and YouTube. Major tech companies have sent the company cease-and-desist notices in the past, arguing its photo snagging practices violate their terms of service.

"Clearview AI complies with all applicable law and its conduct is fully protected by the First Amendment," Floyd Abrams, a lawyer for the company, said in a statement to CNN Business on Tuesday.

 

Facial recognition technology has grown in prevalence — and controversy — in recent years, popping up everywhere from airport check-in lines to police departments and drugstores. And while it could add a sense of security and convenience for businesses that roll it out, the technology has been widely criticized by privacy advocates who are concerned that it may include racial biases and have the potential for misuse.

 

The lawsuit is the latest attempt by grassroots groups to clamp down on facial-recognition software, which is not widely regulated in the United States. In the absence of clear federal rules regarding the usage of the technology, a number of cities — such as San Francisco, Boston, and Portland, Oregon — have banned the technology in some capacity. A few states, including Illinois, California, and Washington, have related legislation that limits its use.

 

Zota said the parties that brought the lawsuit see Clearview's technology "as a terrifying leap toward a mass surveillance state where people's movements are tracked the moment they leave their homes." The individual plaintiffs participated in political movements that are critical of the police and of US Immigration and Customs Enforcement, he said.

 

"The ability to control their likenesses and biometric identifiers — and to continue to engage in political speech critical of the police and immigration policy, free from the threat of clandestine and invasive surveillance — is vital to Plaintiffs, their members, and their missions," the lawsuit states. 

 

Clearview was sued last year in Illinois by the American Civil Liberties Union, which alleged in its complaint that the company's technology violates that state's 2008 Biometric Information Privacy Act. In a statement, the ACLU alleged Clearview participated in "unlawful, privacy-destroying surveillance activities."

 

At the time, a lawyer for Clearview AI responded by saying the ACLU lawsuit was "absurd." 

That lawsuit is ongoing; Clearview filed a motion to dismiss the suit in December, which the ACLU replied to in a legal brief, an ACLU spokesperson told CNN Business. 

 

More recently, Clearview AI has also been declared illegal in Canada. The company was told to remove Canadian faces from its database.

 

 

 

 

 

https://gizmodo.com/clearview-ai-facial-recognition-end-of-anonymity-us-age-1848507135

 

Lawmakers Warn Clearview AI Could End Public Anonymity if Feds Don't Ditch It

 

Democratic lawmakers are ratcheting up efforts to limit the federal government’s work with notorious surveillance firm Clearview AI. In a series of letters addressed to the Departments of Justice, Defense, Homeland Security, and the Interior on Wednesday, the lawmakers called on the agencies to end their use of the company’s tech, arguing the tools “pose a serious threat to the public’s civil liberties and privacy rights.” The agencies named in the letters were all identified in a Government Accountability Office report released last year as having used Clearview AI tools in domestic law enforcement activities.

 

The letters were co-signed by four progressive politicians, Sens. Ed Markey and Jeff Merkley and Reps. Pramila Jayapal and Ayanna Pressley. In their letter to the DHS, the lawmakers claimed Clearview AI’s tech—which reportedly relies on a database of more than 4 billion faces, many of which are scraped from the open internet—could effectively eliminate the notion of public anonymity if left unchecked.

 

“In conjunction with the company’s facial recognition capabilities, this trove of personal information is capable of fundamentally dismantling Americans’ expectation that they can move, assemble, or simply appear in public without being identified,” the lawmakers wrote.

 

Clearview AI’s partnerships with government agencies are of particular concern, the authors argued, because a public that believes they are being surveilled by their government may be less likely to engage in civic discourse or other activities protected by the First Amendment. The lawmakers went on to express concerns over facial recognition’s “unique threats to marginalized communities,” citing previous research showing how the technology performs worse when trying to identify people with darker complexions and Black women in particular.

 

In an emailed statement to Gizmodo, Clearview AI CEO Hoan Ton-That said a National Institute of Standards and Technology test of the company’s tech “shows no detectable racial bias,” and said he wasn’t aware of any instance where Clearview AI’s technology has resulted in a wrongful arrest. In his statement, Ton-That pointed to data from the Innocence Project, which claims 70% of wrongful convictions result from eyewitness lineups, a figure he used to argue in favor of Clearview’s comparatively higher accuracy rates.

 

“Clearview AI is able to help create a world of bias-free policing,” Ton-That claimed. “As a person of mixed race this is highly important to me.”

 

While those figures on their own seem informative, they fail to account for the sheer scope and scale of Clearview’s pervasive technology. They also fail to address any of the broader privacy and civil liberties concerns that most trouble advocates, particularly as they pertain to Clearview AI.
 

“We are proud of our record of achievement in helping over 3,100 law enforcement agencies in the United States solve heinous crimes, such as crimes against children and seniors, financial fraud and human trafficking,” Hoan Ton-That added.

 

In their letters, the lawmakers partially addressed these points, arguing the potential threats posed by facial recognition extend beyond accuracy claims.

 

“Communities of color are systematically subjected to over-policing, and the proliferation of biometric surveillance tools is, therefore, likely to disproportionately infringe upon the privacy of individuals in Black, Brown, and immigrant communities,” the lawmakers wrote. “With respect to law enforcement use of biometric technologies specifically, reports suggest that use of the technology has been promoted among law enforcement professionals and reviews of deployment of facial recognition technology show that law enforcement entities are more likely to use it on Black and Brown individuals than they are on white individuals.”

 

This isn’t the first time these lawmakers have taken on facial recognition. Back in 2020, the same Democrats authored the Facial Recognition and Biometric Technology Moratorium Act, which sought to end federal use of real-time facial recognition technology. That bill would have also limited states’ access to federal grants if they chose to continue using facial recognition. At the time, the legislation gained the endorsement of a litany of civil liberty and privacy groups, including the American Civil Liberties Union, Electronic Frontier Foundation, Fight for the Future, Color of Change, MediaJustice, Electronic Privacy Information Center, and Georgetown University Law Center’s Center on Privacy & Technology, among others.

 

Around two dozen cities and states across the U.S., including San Francisco, Boston, and Minneapolis, have stepped up their efforts to curtail public facial recognition use in recent years, though a federal data privacy law has remained elusive.

 

 

 

 

 

 

 

https://gizmodo.com/can-police-use-facial-recognition-scans-at-traffic-stop-1848581619

 

What to Do If a Cop Tries to Scan Your Face During a Traffic Stop 

 

Law enforcement’s use of facial recognition technology during investigations has blossomed in recent years thanks in no small part to a booming surveillance industry built on the back of an ever-expanding buffet of publicly available biometric data. The limits on where and how that technology can be used, though, remain legally murky and are constantly evolving. Now, it appears at least some law enforcement agents are flirting with the idea of using facial recognition at otherwise seemingly benign traffic stops, a potential loosening of the tech’s use that has legal and privacy experts on edge.
 

As first reported earlier this month by Insider’s Caroline Haskins, that hypothetical was floated during a 2021 episode of the Street Cop Training podcast, a program intended for police officers looking to learn new investigative techniques. In the episode, the show’s host, Dennis Benigno, poses a scenario to his guest, Nick Jerman.

 

“Let’s say you are on a traffic stop and we have someone in the car who we suspect may be wanted?” Benigno asked. “How would you go about investigating somebody who you think may be trying to hide their apprehension and hide out who they are?”

 

Jerman, who had spent the rest of the episode describing ways to use publicly available social media tools to identify potential targets during a police investigation, responded by saying, “there are a couple paid programs you can use, [presumably referring to Clearview AI and apps like it] where you can take their [the driver’s] picture and it will put it [the photo] in.”
 

In other words, if a police officer is uncertain of a driver's, or even a passenger's, identity, they could quickly snap a photo of that person's face and feed it into a facial recognition database to gather more information.
 

Though said in passing, the situation laid out by Jerman could represent a radical shift in the ease and frequency with which police use facial recognition, a technology many privacy advocates warn lacks sufficient accuracy, particularly when identifying people of color.

 

The situation may also violate U.S. laws.

 

Gizmodo spoke to Nate Wessler, the Deputy Director of the American Civil Liberties Union’s Speech, Privacy, and Technology Project, who pointed to a growing patchwork of cities and states all around the country that have passed local legislation limiting public facial recognition use. In many of those places, from San Francisco to Portland, this kind of brazen, fire-from-the-hip use of the technology would likely violate local laws. Wessler said traffic-stop facial recognition could also potentially violate the laws of states like Maine, where law enforcement is allowed to use facial recognition but only for serious felonies and with a warrant.

 

Speaking more broadly, Wessler said error rates inherent in the technology, while improving, are still too high for any match to act as probable cause to arrest someone.

 

“These are probabilistic algorithms that are making their best guess based on the quality of the photo that’s uploaded and what’s in the database and how the algorithm was trained,” Wessler said. It should be noted Gizmodo was not able to independently confirm any cases of this technique being used by law enforcement in any known arrests, so far.

 

Images collected by police at traffic stops, likely captured in haste using a smartphone under imperfect conditions, would also be unlikely to replicate the same levels of accuracy seen in more recent, high-profile facial recognition tests. “So if you had police pull someone over and use face recognition technology, and then it spits out a purported match to somebody who they think has an outstanding warrant and then they arrest that person on the spot just based on that face recognition result that would not be probable cause to arrest,” Wessler added.

 

In that situation, Wessler warned police using facial recognition at a traffic stop could open themselves up to a false arrest lawsuit under the Fourth Amendment. Still, Wessler said case law around law enforcement use of facial recognition remains relatively sparse due to the newness of the technology. 
 

Greg Nojeim, Senior Counsel and Co-Director of the Security and Surveillance Project at the Center for Democracy & Technology, told Insider he believed Jerman’s recommendation to use facial recognition at a traffic stop would cross the line into illegality if the police didn’t have “reasonable suspicion” that the targeted individual had committed a crime. Reasonable suspicion, though, is a notoriously fraught term and can vary widely in its interpretation.

 

The New York Police Department, for its part, says a match from a facial recognition search isn’t enough on its own to justify an arrest and should instead serve as a “lead” for further investigation.

 

“The detective assigned to the case must establish, with other corroborating evidence, that the suspect identified by the photo match is the perpetrator in the alleged crime,” the NYPD states. Yet, in other cases, faulty facial recognition matches have reportedly led to false arrests. In one of the most notable cases, a 43-year-old father named Robert Williams was forced to spend 18 hours behind bars without being told why after Detroit police arrested him based on a faulty facial recognition match.

 

If you do happen to find yourself in the undesirable position of having a police officer shove a smartphone camera in your face, there are several things you can do. Wessler said drivers caught in that situation have the right to verbally tell an officer that they do not consent to have their biometrics (in this case, their face scan) collected. That might not mean shit to an officer at the moment, but Wessler said it could help down the line for people trying to litigate over their rights. People also have the legal right to record a police officer with their own phone which, in some cases, could deter police from crossing the line into scanning your face.

 

All that said, if recent cultural touchstones are any reminder, even the most basic traffic stops in the U.S. have the potential of rapidly escalating from routine to potentially deadly in just seconds. With that nightmare in mind, Wessler said people need to make their own decision about what actions or responses feel safe at the time.

 

On a more practical level, Wessler said it’s unclear what real policing advantage law enforcement hopes to gain by using facial recognition at a traffic stop. In that situation, police already have the authority to ask a driver for their driver’s license, which they can then run against their own databases. The privacy tradeoff, in other words, just isn’t worth it.
 

“This seems unnecessary,” Wessler said, “It’s putting this incredibly powerful and unregulated surveillance tool in the hands of beat cops to use with no oversight, no rules, no predicate level of suspicion, no confirmatory steps. And that’s just a recipe for disaster.”



—————

 

verbally refuse……. Shit

 

 

——————

 

 

https://www.forbes.com/sites/thomasbrewster/2022/02/03/clearview-ai-glasses-with-facial-recognition-are-here-and-the-air-force-is-using-them/?sh=789bef5a43b2
 

 

Clearview: Glasses With Facial Recognition Are Here—And The Air Force Is Buying

 

Clearview AI, the facial recognition company backed by Facebook and Palantir investor Peter Thiel, has been contracted to research the use of augmented reality glasses combined with facial recognition for the U.S. Air Force.

 

It’s a technology that had many privacy activists concerned when it was first proposed by the company, as revealed in a 2020 New York Times article that analyzed the startup’s code and found it was designed to work with glasses. Clearview had already raised alarms by harvesting billions of images of people’s faces from social media sites like Facebook, creating a massive database for law enforcement and private buyers to identify individuals.

 

The contract with the Air Force is just $50,000 and promises to help protect “airfields with augmented reality facial recognition glasses.” From the contracting records, first highlighted by Jack Poulson from technology industry accountability nonprofit Tech Inquiry, there’s little more information on just how many pairs of glasses will be provided or how they will be used.

 

The Air Force hadn’t responded to a request for comment at the time of publication. “We hold the Air Force in high esteem and would be honored to work with them in ways that meet their needs,” said Clearview CEO Hoan Ton-That, an Australian entrepreneur who founded the company with backing from Thiel in 2017. He said that the technology remains in research and development, “with the end goal being to leverage emerging capabilities to improve overall security.” He confirmed the glasses would not use the company’s 10-billion-image dataset of facial photos. “Once realized, we believe this technology will be an excellent fit for numerous security situations.”

 

His company has faced severe criticism from privacy advocates since it was forced out of stealth in 2020 by media reports, with the American Civil Liberties Union suing the business to “bring an end to the company’s unlawful, privacy-destroying surveillance activities.” Chief among the concerns is that people’s faces have been put into databases that can be checked by law enforcement even though they’ve never been linked to a crime, while facial recognition systems have long been criticized for being racially biased, with cases of mistaken identity (and therefore mistaken suspicion) more common among Black and other non-white ethnicities.

 

Despite the controversies around Clearview, it has continued to get work with the U.S. federal government. Contract records show the FBI made an $18,000 order for a one-year subscription in December, while Immigration and Customs Enforcement (ICE) has put at least $1.5 million on the table for Clearview tools for an unspecified number of enterprise licenses.

“Since September, the Biden Administration’s ICE has more than doubled spending on Clearview AI, the FBI has publicly procured Clearview AI for the first time, the USPTO granted Clearview’s patent for augmented reality facial recognition, and now the company has a small business grant with the Air Force for augmented reality,” Poulson told Forbes, highlighting how successful Ton-That’s business has been, even with the negative press surrounding its collection of citizens’ facial images.

 

The company raised an additional $30 million last year, showing investors also haven’t been deterred, while the private market for facial recognition tech looks to be a lucrative one. Reporting from Buzzfeed previously revealed that private companies including the NBA, Macy’s and Walmart had used Clearview’s facial recognition.

 

It should perhaps come as no surprise the augmented reality tech is being sold to a government agency. As Forbes reported last year, facial recognition is being applied in myriad ways, including via drones and other unmanned vehicles. In 2020, OneZero reported the U.S. Air Force gave $2 million to RealNetworks, best known for video-streaming software RealPlayer, to put facial recognition on drones and body-worn cameras.


  • 4 weeks later...

4/1/22 Update:

 

https://apnews.com/article/russia-ukraine-technology-business-europe-national-governments-4a4db5b7340792f8a8b08c41c4653f5a
 

NEW YORK (AP) — A controversial face recognition company that’s built a massive photographic dossier of the world’s people for use by police, national governments and — most recently — the Ukrainian military is now planning to offer its technology to banks and other private businesses.

 

Clearview AI co-founder and CEO Hoan Ton-That disclosed the plans Friday to The Associated Press in order to clarify a recent federal court filing that suggested the company was up for sale.

 

“We don’t have any plans to sell the company,” he said. Instead, he said the New York startup is looking to launch a new business venture to compete with the likes of Amazon and Microsoft in verifying people’s identity using facial recognition.

