Microsoft Purview to Introduce New Security Policies for Risky AI Usage

ChrisJohn86

AI has been making waves in the tech world for the past several years, transforming how we interact with technology. From virtual assistants to predictive analytics, it has become an essential component of modern computing. However, with great power comes great responsibility, and the use of AI by bad actors has raised serious security concerns.

To address these concerns, Microsoft, in partnership with OpenAI, plans to introduce a set of security policies for risky AI usage within its enterprise-oriented platform, Microsoft Purview. The update comes in direct response to the growing threat of AI-based cyber attacks, as documented in a recent report on the use of AI by bad actors from several countries.

The new policies, scheduled for preview in July 2024 and general availability in September 2024, will automatically detect and classify risky interactions with generative AI applications, whether intentional or unintentional. This includes activities such as entering sensitive information into prompts, using AI to generate content from sensitive files or sites, and more. Microsoft Purview's coverage of risky AI usage will extend across many AI applications, including Microsoft Copilot and third-party generative AI tools.
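Microsoft has not published the detection logic behind these policies, but the kind of check described above, flagging a prompt that appears to contain sensitive information, can be sketched in a simplified form. Everything below (the pattern names and the `classify_prompt` helper) is a hypothetical illustration, not Purview's actual implementation:

```python
import re

# Hypothetical patterns for data that would commonly be treated as
# sensitive; a real DLP system uses far richer classifiers.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US Social Security number
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),   # 16-digit card number
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),     # secret-key-style token
}

def classify_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the prompt.

    An empty list means the prompt was not flagged as risky.
    """
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(prompt)]
```

For example, a prompt containing "my SSN is 123-45-6789" would be flagged under the `ssn` pattern, while an ordinary question would pass through unflagged.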

This approach aims to protect enterprises from AI-related risks such as intellectual property theft, data leakage, and security violations. The update will also streamline operations within the platform, making it easier for companies to build onboarding processes for new devices and offering a consistent experience for accessing data security and data governance.