AI offers tremendous potential to make society safer by keeping guns, racism and trolls at bay
Summer’s arrival in America was terrifying. Our nation was rocked by mass shootings in Buffalo, Uvalde, Highland Park, and elsewhere. As a country, we struggled with the easy access to and proliferation of hate content on social media, which continues to derail any meaningful move toward acceptance, mutual respect, and openness to differing opinions. This is particularly true for young adults, who studies have shown are especially prone to accepting online content as fact rather than as constructed perceptions or opinions.
How AI could make us safer
First, for incidents like those in Buffalo and Uvalde, AI can enhance humans’ ability to detect security or safety threats quickly. Any public establishment that uses video surveillance as part of its security monitoring can add AI-based analysis simply and inexpensively with Netra’s technology.
In a school scenario, if the cameras were powered by Netra’s CV, frame-by-frame analysis would flag objects such as “gun” or “running,” or emotions such as “fear,” within milliseconds of their appearance in the video feed.
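The detect-and-alert loop described above can be sketched in a few lines. This is a toy illustration, not Netra’s actual API: the detector here is a stand-in for a real CV model, and the label set and function names are assumptions.

```python
# Minimal sketch of a frame-by-frame alerting loop.
# The `detect` callable stands in for a real CV model (hypothetical interface).

THREAT_LABELS = {"gun", "running", "fear"}

def scan_frames(frames, detect):
    """Yield (frame_index, label) for every threat label the detector reports."""
    for i, frame in enumerate(frames):
        for label in detect(frame):
            if label in THREAT_LABELS:
                yield i, label

# Toy detector: it "sees" whatever labels we attach to each fake frame.
frames = [{"labels": []}, {"labels": ["backpack"]}, {"labels": ["gun", "fear"]}]
alerts = list(scan_frames(frames, detect=lambda f: f["labels"]))
print(alerts)  # [(2, 'gun'), (2, 'fear')]
```

In a real deployment, each yielded alert would be forwarded immediately to a human monitor rather than collected in a list; the point is that the per-frame check runs continuously and flags only the labels of interest.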
In retail environments, resources vary widely: the targeted grocery store in Buffalo had invested heavily in security, but many smaller “Mom and Pop” stores have little to no resources to prepare for major safety threats. AI provides enhanced scrutiny for stores with otherwise limited means. With AI, humans are alerted to threats quickly and efficiently, and that information can be relayed directly to a security partner or to the police, improving reaction times and reducing loss of life.
How AI identifies hate content more efficiently
Many have argued that an improved vision for the web’s future includes stronger content moderation. As Elon Musk contemplated acquiring Twitter, he put it this way: “My strong intuitive sense is that having a public platform that is maximally trusted and broadly inclusive is extremely important to the future of civilization. I don’t care about the economics at all.”
If the large walled gardens are truly committed to enhancing their platforms’ trust and inclusivity, the solution is to invest in AI and Computer Vision (CV). The current lackluster approach, which relies on users to flag content, is controversial, inconsistent, and prone to bias.
Through AI, a rules-based approach can create more consistency by reducing human bias while scaling efficiently and inexpensively. If these platforms are to remain the leading conduits for a firehose of news, entertainment, and other content, the volume of information flowing through their systems will be unmanageable without AI playing a key role in ensuring that hate speech and racism are not promoted by their algorithms.
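The consistency argument can be made concrete with a small sketch. This is an illustrative rules-based pass, not Netra’s or any platform’s production system; the rule names and placeholder patterns are assumptions.

```python
# Illustrative rules-based moderation pass: the same rule set is applied
# identically to every post, which is where the consistency comes from.
import re

RULES = [
    # ("rule_name", compiled pattern) -- patterns here are placeholders.
    ("spam_link", re.compile(r"https?://bit\.ly/", re.IGNORECASE)),
    ("direct_threat", re.compile(r"\b(kill|shoot)\b.*\byou\b", re.IGNORECASE)),
]

def flag(post: str) -> list:
    """Return the names of every rule the post violates."""
    return [name for name, pattern in RULES if pattern.search(post)]

print(flag("I will shoot you"))   # ['direct_threat']
print(flag("nice weather today")) # []
```

Unlike user flagging, the same input always produces the same decision, and the rule set itself can be audited and revised in one place. Real systems combine such rules with learned classifiers, since regex alone misses context.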
AI/CV can be deployed intelligently to a host of difficult challenges. When done responsibly and thoughtfully, it has the potential to usher in a safer future for humanity, where opinions are openly expressed but hate is not promoted to vulnerable audiences. Further, we can better protect individuals against immediate physical threats to reduce the loss of life inflicted by individuals that intend harm.
The smoke detector, invented in 1965, is now standard equipment and has saved countless lives. AI/CV has the potential to become such a system for our increasingly online and video-monitored world.
Netra’s technology offers full comprehension of the text and imagery on each page and, through our Computer Vision (CV) technology, extends into the full depth of context within video assets via frame-by-frame analysis of content detection, brand safety, emotion, and affinity.
Our vision is to empower entities with access to and ownership of massive amounts of proliferating livestream and historical video content to turn it into an advantage. As AI emerges as a potential safety solution, its applications can not only improve ROI but also make us safer and better informed.
Contact firstname.lastname@example.org if you would like to discuss opportunities to create breakthrough applications in security and safety.