Menlo Labs Threat Research team finds PII is the most frequent type of potential exposure and data loss, even as organizational security policies for generative AI increase by 26%
Menlo Security, a leader in browser security, today released its latest report, “The Continued Impact of Generative AI on Security Posture.” The second installment in its series of generative AI reports, it analyzes how employee usage of generative AI is changing and the security risks these behaviors pose to organizations. In the last thirty days, over half (55%) of Data Loss Prevention (DLP) events detected by Menlo Security included attempts to input personally identifiable information (PII). The next most common type of data that triggered DLP detections was confidential documents, which accounted for 40% of input attempts.
From July to December 2023, the market for and nature of generative AI usage transformed considerably. New platforms and features gained popularity, leading to a more diverse and specialized market. That growth, however, has introduced new cybersecurity risks into the enterprise.
For example, according to the Menlo report, there was an 80% increase in attempted file uploads to generative AI websites. Researchers attribute this increase in part to the many AI platforms that added file upload features within the past six months; once the capability was available, users quickly took advantage of it. Copy and paste attempts to generative AI sites decreased only slightly and remain frequent, highlighting the need for technology to control these actions. These two behaviors have the largest potential impact on data loss because of the ease and speed with which data such as source code, customer lists, roadmap plans, or PII can be uploaded or pasted in.
"Our latest report highlights the swift evolution of generative AI, outpacing organizations' efforts to train employees on data exposure risks and update security policies," said Pejman Roshan, Chief Marketing Officer at Menlo Security. "While we've seen a commendable reduction in copy and paste attempts in the last six months, the dramatic rise of file uploads poses a new and significant risk. Organizations must adopt comprehensive, group-level security policies to effectively eliminate the risk of data exposure on these sites.”
Enterprises do recognize the risk and are increasingly focused on preventing data loss and leakage stemming from rising generative AI usage. In the last six months, the Menlo Labs Threat Research team observed a 26% increase in organizational security policies for generative AI sites. However, most organizations apply these policies on an application-by-application basis rather than establishing policies across generative AI applications as a whole. With a per-application approach, organizations must either constantly update their application lists or risk gaps in safeguards for the generative AI sites employees are actually using. This underscores the need for a scalable and efficient way to monitor employee behavior, adapt to the evolving functionality of generative AI platforms, and address the resulting cybersecurity risks.
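To illustrate the coverage difference, the following is a minimal, hypothetical Python sketch. It is not Menlo Security code and does not represent the company's platform; the domain names, the SITE_CATEGORIES mapping, and the policy functions are invented for the example. It simply shows how a per-application blocklist misses a newly launched generative AI site until an administrator updates the list, while a category-level rule covers any site classified into the generative AI group.

```python
# Hypothetical illustration only: not Menlo Security code or its API.
# It contrasts an application-by-application policy (explicit domain list)
# with a group-level policy keyed on a site category, showing how the
# per-app approach leaves gaps when a new generative AI site appears.

# Assumption: some upstream classifier labels each domain with a category.
SITE_CATEGORIES = {
    "chat.openai.com": "generative_ai",
    "gemini.google.com": "generative_ai",
    "claude.ai": "generative_ai",
    "new-ai-assistant.example": "generative_ai",  # newly launched site
    "docs.example.com": "productivity",
}

# Application-by-application policy: each domain must be listed explicitly.
PER_APP_BLOCKLIST = {"chat.openai.com", "gemini.google.com", "claude.ai"}

# Group-level policy: one rule covers the entire generative AI category.
BLOCKED_CATEGORIES = {"generative_ai"}


def per_app_blocks_upload(domain: str) -> bool:
    """Blocks uploads only for domains an admin has already enumerated."""
    return domain in PER_APP_BLOCKLIST


def group_level_blocks_upload(domain: str) -> bool:
    """Blocks uploads for any domain classified into a blocked category."""
    return SITE_CATEGORIES.get(domain, "uncategorized") in BLOCKED_CATEGORIES


if __name__ == "__main__":
    newcomer = "new-ai-assistant.example"
    # The per-app policy misses the newcomer until the list is updated...
    print(per_app_blocks_upload(newcomer))      # False -> gap in coverage
    # ...while the group-level policy covers it once it is categorized.
    print(group_level_blocks_upload(newcomer))  # True
```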
Key findings that point towards a need for group-level security rather than domain-level security include:
- Among organizations that set security policies on a per-application basis, 92% have security-focused policies in place around generative AI usage, while 8% allow unrestricted generative AI usage
- Among organizations that apply security policies to generative AI apps as a group, 79% have security-focused policies in place, while 21% allow unrestricted usage
- While most traffic is directed toward the six main generative AI sites, file uploads are 70% higher when generative AI is viewed as an entire category, highlighting the unreliability of enforcing security policies on an application-by-application basis
In June 2023, Menlo Security issued its first generative AI report, analyzing generative AI interactions across a sample of 500 global organizations. This report compares those earlier findings with data collected between July and December 2023 from the same sample of organizations.
Download the report, “The Continued Impact of Generative AI on Security Posture,” to read the full findings and learn how employee usage of generative AI is contributing to an expanding browser attack surface.
To learn more about the role of browser security in eliminating data leakage via generative AI platforms, visit Menlo Security’s platform overview page or schedule a demo.
About Menlo Security
Menlo Security protects organizations from cyber threats that attack web browsers. Menlo Security’s patented Cloud-Browser Security Platform scales to provide comprehensive protection across enterprises of any size, without requiring endpoint software or impacting the end-user experience. Menlo Security is trusted by major global businesses, including Fortune 500 companies, eight of the ten largest global financial services institutions, and large governmental institutions. The company is backed by Vista Equity Partners, Neuberger Berman, General Catalyst, American Express Ventures, Ericsson Ventures, HSBC, and JPMorgan Chase. Menlo Security is headquartered in Mountain View, California. For more information, please visit www.menlosecurity.com.
View source version on businesswire.com: https://www.businesswire.com/news/home/20240214574249/en/
Contacts
Emily Ashley
Lumina Communications for Menlo Security
Menlo@luminapr.com