In the latest weekly update, editors at Information Security Media Group discuss the impact of the Israel-Hamas war on the threat landscape and the workforce, the role of the U.S. in shaping the future of AI technology, and highlights from ISMG's Financial Services Summit in New York.
Since deception technology provides early warning of potential attacks by tricking hackers into accessing fake information, can AI tools such as ChatGPT be used to create more convincing lures? That's a question Xavier Bellekens, CEO of Lupovis, put to the test - with promising results.
The U.S. needs to pass federal legislation to establish a national framework of standards and rules of the road for AI, but passing federal data privacy legislation first is an essential foundation for that effort, some witnesses told members of Congress.
Artificial intelligence (AI) has come roaring to the forefront of today's technology landscape. It has revolutionized industries and will reshape careers, bringing numerous benefits and advancements to our daily lives. However, it is crucial to recognize that AI also introduces unseen impacts that must be...
In this episode of CyberEd.io's podcast series "Cybersecurity Unplugged," Alex Zeltcer of nSure.ai discusses how fraudsters access your payment information, how industrialized payment fraud attacks operate, and how nSure.ai uses discriminative AI to identify these attacks and cut their scale.
Watermarking is a core part of a White House trustworthiness initiative that commits companies to steps intended to ensure the safety of AI products. The problem, say AI experts, is that watermarking is as likely to fail as to succeed: tools for removing watermarks are freely available on the open internet.
Varonis CEO Yaki Faitelson is fond of saying "You can't unbreach data." In this interview, he discusses generative AI, as well as other technologies and trends and how they impact the ways enterprises view and secure their most critical data.
The EU will set up a dedicated office to oversee the implementation of the AI Act, especially by big-tech companies such as OpenAI. Dragoş Tudorache, a Romanian politician and the co-rapporteur of the AI Act, said negotiators have agreed in principle on creation of an "EU AI Office."
The use of artificial intelligence can profoundly improve operations and services across many industries, but the multifaceted relationship between AI and cybersecurity calls for new measures to address security, privacy and regulatory concerns through the right protocols and procedures.
The Ukrainian government says it will regulate AI, a step it portrays as a way to draw closer to the European Union, where rules for AI systems are close to approval. New rules will enable access to global markets and closer integration with the EU, the Ministry of Digital Transformation said.
Firms using large language models to power generative AI tools must consider security and privacy aspects such as data access, output monitoring and model security before jumping on the bandwagon, said Troy Leach of Cloud Security Alliance. "Everything is going to be AI as a service," Leach predicted.
More than five dozen British lawmakers from across political parties, along with privacy organizations, called for an "immediate stop" to real-time facial recognition in the United Kingdom. Live facial recognition faces a ban in Europe, and its use by police is prohibited in a handful of U.S. jurisdictions.
The use of generative AI is being "highly explored" in healthcare and has great promise for a variety of applications, but it needs to be scrutinized closely, said Erik Decker, vice president and CISO of Intermountain Health and a cybersecurity adviser to the federal government.
In the latest weekly update, ISMG editors examine policies in the U.S. and Europe that could regulate AI, recent developments within the EU cybersecurity and privacy policy arena, and the disparities between the perspectives of business leaders and cybersecurity leaders on the security landscape.
A clutch of vulnerabilities in an open-source tool used by major corporations to scale up machine learning models could lead to remote takeover, says a cybersecurity firm in a warning downplayed by Meta, which co-manages the open-source project.