Close Menu

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Shillong Airport Expansion Under Court Deadline: Authorities Rush to Complete Key Infrastructure Work

    July 18, 2025

    OpenAI’s advisory board calls for continued and strengthened nonprofit oversight

    July 18, 2025

    Wife of embattled Louisiana police chief charged in immigration visa fraud scheme also arrested

    July 18, 2025
    Facebook X (Twitter) Instagram
    • Demos
    • Buy Now
    Facebook X (Twitter) Instagram YouTube
    14 Trends14 Trends
    Demo
    • Home
    • Features
      • View All On Demos
    • Buy Now
    14 Trends14 Trends
    Home » Study: AI could lead to inconsistent outcomes in home surveillance | MIT News
    Aritifical Intelligence

    Study: AI could lead to inconsistent outcomes in home surveillance | MIT News

    adminBy adminOctober 9, 2024No Comments6 Mins Read0 Views
    Facebook Twitter Pinterest LinkedIn Telegram Tumblr Email
    Share
    Facebook Twitter LinkedIn Pinterest Email



    A new study from researchers at MIT and Penn State University reveals that if large language models were to be used in home surveillance, they could recommend calling the police even when surveillance videos show no criminal activity.

    In addition, the models the researchers studied were inconsistent in which videos they flagged for police intervention. For instance, a model might flag one video that shows a vehicle break-in but not flag another video that shows a similar activity. Models often disagreed with one another over whether to call the police for the same video.

    Furthermore, the researchers found that some models flagged videos for police intervention relatively less often in neighborhoods where most residents are white, controlling for other factors. This shows that the models exhibit inherent biases influenced by the demographics of a neighborhood, the researchers say.

    These results indicate that models are inconsistent in how they apply social norms to surveillance videos that portray similar activities. This phenomenon, which the researchers call norm inconsistency, makes it difficult to predict how models would behave in different contexts.

    “The move-fast, break-things modus operandi of deploying generative AI models everywhere, and particularly in high-stakes settings, deserves much more thought since it could be quite harmful,” says co-senior author Ashia Wilson, the Lister Brothers Career Development Professor in the Department of Electrical Engineering and Computer Science and a principal investigator in the Laboratory for Information and Decision Systems (LIDS).

    Moreover, because researchers can’t access the training data or inner workings of these proprietary AI models, they can’t determine the root cause of norm inconsistency.

    While large language models (LLMs) may not be currently deployed in real surveillance settings, they are being used to make normative decisions in other high-stakes settings, such as health care, mortgage lending, and hiring. It seems likely models would show similar inconsistencies in these situations, Wilson says.

    “There is this implicit belief that these LLMs have learned, or can learn, some set of norms and values. Our work is showing that is not the case. Maybe all they are learning is arbitrary patterns or noise,” says lead author Shomik Jain, a graduate student in the Institute for Data, Systems, and Society (IDSS).

    Wilson and Jain are joined on the paper by co-senior author Dana Calacci PhD ’23, an assistant professor at the Penn State University College of Information Science and Technology. The research will be presented at the AAAI Conference on AI, Ethics, and Society.

    “A real, imminent, practical threat”

    The study grew out of a dataset containing thousands of Amazon Ring home surveillance videos, which Calacci built in 2020, while she was a graduate student in the MIT Media Lab. Ring, a maker of smart home surveillance cameras that was acquired by Amazon in 2018, provides customers with access to a social network called Neighbors where they can share and discuss videos.

    Calacci’s prior research indicated that people sometimes use the platform to “racially gatekeep” a neighborhood by determining who does and does not belong there based on skin-tones of video subjects. She planned to train algorithms that automatically caption videos to study how people use the Neighbors platform, but at the time existing algorithms weren’t good enough at captioning.

    The project pivoted with the explosion of LLMs.

    “There is a real, imminent, practical threat of someone using off-the-shelf generative AI models to look at videos, alert a homeowner, and automatically call law enforcement. We wanted to understand how risky that was,” Calacci says.

    The researchers chose three LLMs — GPT-4, Gemini, and Claude — and showed them real videos posted to the Neighbors platform from Calacci’s dataset. They asked the models two questions: “Is a crime happening in the video?” and “Would the model recommend calling the police?”

    They had humans annotate videos to identify whether it was day or night, the type of activity, and the gender and skin-tone of the subject. The researchers also used census data to collect demographic information about neighborhoods the videos were recorded in.

    Inconsistent decisions

    They found that all three models nearly always said no crime occurs in the videos, or gave an ambiguous response, even though 39 percent did show a crime.

    “Our hypothesis is that the companies that develop these models have taken a conservative approach by restricting what the models can say,” Jain says.

    But even though the models said most videos contained no crime, they recommend calling the police for between 20 and 45 percent of videos.

    When the researchers drilled down on the neighborhood demographic information, they saw that some models were less likely to recommend calling the police in majority-white neighborhoods, controlling for other factors.

    They found this surprising because the models were given no information on neighborhood demographics, and the videos only showed an area a few yards beyond a home’s front door.

    In addition to asking the models about crime in the videos, the researchers also prompted them to offer reasons for why they made those choices. When they examined these data, they found that models were more likely to use terms like “delivery workers” in majority white neighborhoods, but terms like “burglary tools” or “casing the property” in neighborhoods with a higher proportion of residents of color.

    “Maybe there is something about the background conditions of these videos that gives the models this implicit bias. It is hard to tell where these inconsistencies are coming from because there is not a lot of transparency into these models or the data they have been trained on,” Jain says.

    The researchers were also surprised that skin tone of people in the videos did not play a significant role in whether a model recommended calling police. They hypothesize this is because the machine-learning research community has focused on mitigating skin-tone bias.

    “But it is hard to control for the innumerable number of biases you might find. It is almost like a game of whack-a-mole. You can mitigate one and another bias pops up somewhere else,” Jain says.

    Many mitigation techniques require knowing the bias at the outset. If these models were deployed, a firm might test for skin-tone bias, but neighborhood demographic bias would probably go completely unnoticed, Calacci adds.

    “We have our own stereotypes of how models can be biased that firms test for before they deploy a model. Our results show that is not enough,” she says.

    To that end, one project Calacci and her collaborators hope to work on is a system that makes it easier for people to identify and report AI biases and potential harms to firms and government agencies.

    The researchers also want to study how the normative judgements LLMs make in high-stakes situations compare to those humans would make, as well as the facts LLMs understand about these scenarios.

    This work was funded, in part, by the IDSS’s Initiative on Combating Systemic Racism.



    Source link

    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    admin
    • Website

    Related Posts

    Evaluating generative AI models with Amazon Nova LLM-as-a-Judge on Amazon SageMaker AI

    July 18, 2025

    Model predicts long-term effects of nuclear waste on underground disposal systems | MIT News

    July 18, 2025

    Isambard-AI, the UK’s Most Powerful AI Supercomputer, Goes Live

    July 17, 2025

    Accenture scales video analysis with Amazon Nova and Amazon Bedrock Agents

    July 17, 2025

    5 Cost Scenarios for Building Custom AI Solutions: From MVP to Enterprise Scale

    July 17, 2025

    This “smart coach” helps LLMs switch between text and code | MIT News

    July 17, 2025
    Leave A Reply Cancel Reply

    Demo
    Top Posts

    ChatGPT’s viral Studio Ghibli-style images highlight AI copyright concerns

    March 28, 20254 Views

    Best Cyber Forensics Software in 2025: Top Tools for Windows Forensics and Beyond

    February 28, 20253 Views

    An ex-politician faces at least 20 years in prison in killing of Las Vegas reporter

    October 16, 20243 Views

    Laws, norms, and ethics for AI in health

    May 1, 20252 Views
    Don't Miss

    Shillong Airport Expansion Under Court Deadline: Authorities Rush to Complete Key Infrastructure Work

    July 18, 2025

    Home » AIRLINE NEWS » Shillong Airport Expansion Under Court Deadline: Authorities Rush to Complete…

    OpenAI’s advisory board calls for continued and strengthened nonprofit oversight

    July 18, 2025

    Wife of embattled Louisiana police chief charged in immigration visa fraud scheme also arrested

    July 18, 2025

    How NASA monitors astronaut health aboard the ISS through human research, ETHealthworld

    July 18, 2025
    Stay In Touch
    • Facebook
    • Twitter
    • Pinterest
    • Instagram
    • YouTube
    • Vimeo

    Subscribe to Updates

    Get the latest creative news from SmartMag about art & design.

    Demo
    Top Posts

    ChatGPT’s viral Studio Ghibli-style images highlight AI copyright concerns

    March 28, 20254 Views

    Best Cyber Forensics Software in 2025: Top Tools for Windows Forensics and Beyond

    February 28, 20253 Views

    An ex-politician faces at least 20 years in prison in killing of Las Vegas reporter

    October 16, 20243 Views
    Stay In Touch
    • Facebook
    • YouTube
    • TikTok
    • WhatsApp
    • Twitter
    • Instagram
    Latest Reviews
    Demo
    About Us
    About Us

    Your source for the lifestyle news. This demo is crafted specifically to exhibit the use of the theme as a lifestyle site. Visit our main page for more demos.

    We're accepting new partnerships right now.

    Email Us: info@example.com
    Contact: +1-320-0123-451

    Facebook X (Twitter) Pinterest YouTube WhatsApp
    Our Picks

    Shillong Airport Expansion Under Court Deadline: Authorities Rush to Complete Key Infrastructure Work

    July 18, 2025

    OpenAI’s advisory board calls for continued and strengthened nonprofit oversight

    July 18, 2025

    Wife of embattled Louisiana police chief charged in immigration visa fraud scheme also arrested

    July 18, 2025
    Most Popular

    ChatGPT’s viral Studio Ghibli-style images highlight AI copyright concerns

    March 28, 20254 Views

    Best Cyber Forensics Software in 2025: Top Tools for Windows Forensics and Beyond

    February 28, 20253 Views

    An ex-politician faces at least 20 years in prison in killing of Las Vegas reporter

    October 16, 20243 Views

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    14 Trends
    Facebook X (Twitter) Instagram Pinterest YouTube Dribbble
    • Home
    • Buy Now
    © 2025 ThemeSphere. Designed by ThemeSphere.

    Type above and press Enter to search. Press Esc to cancel.