According to new research from a cybersecurity firm, Microsoft Corp.’s AI research team accidentally exposed a large cache of private data on GitHub, a software development platform.
According to a report from Wiz, a cloud security company, the cloud-hosted data was exposed through a misconfigured link that Microsoft's research team inadvertently included while publishing open-source AI training data on GitHub.
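Wiz traced the issue to an Azure Storage shared access signature (SAS) URL whose token granted access to far more than the intended dataset. As a minimal sketch of the pitfall, the following Python example, using the azure-storage-blob SDK with placeholder account names, keys, and blob paths, contrasts an overly broad account-level SAS of the kind Wiz warned about with one scoped to a single file:

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import (
    AccountSasPermissions,
    BlobSasPermissions,
    ResourceTypes,
    generate_account_sas,
    generate_blob_sas,
)

ACCOUNT_NAME = "exampleaccount"  # placeholder, not a real storage account
ACCOUNT_KEY = "<account-key>"    # placeholder secret

# Risky: an account-level SAS with broad permissions and a distant expiry.
# Anyone holding a URL signed with this token can list and read (and here,
# even write) every container in the storage account.
overly_broad_sas = generate_account_sas(
    account_name=ACCOUNT_NAME,
    account_key=ACCOUNT_KEY,
    resource_types=ResourceTypes(service=True, container=True, object=True),
    permission=AccountSasPermissions(read=True, write=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=365 * 10),
)

# Safer: a SAS scoped to one blob, read-only, expiring in an hour.
scoped_sas = generate_blob_sas(
    account_name=ACCOUNT_NAME,
    account_key=ACCOUNT_KEY,
    container_name="public-datasets",  # hypothetical container
    blob_name="training-data.tar.gz",  # hypothetical blob
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)

print(f"https://{ACCOUNT_NAME}.blob.core.windows.net/?{overly_broad_sas}")
print(
    f"https://{ACCOUNT_NAME}.blob.core.windows.net/"
    f"public-datasets/training-data.tar.gz?{scoped_sas}"
)
```

Tokens like the first one are particularly dangerous because they function as bearer credentials: revoking one requires rotating the storage account key itself, and their use is difficult to monitor.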
The exposed data included backups of Microsoft employees' personal computers, which contained passwords to Microsoft services, secret keys, and more than 30,000 internal Microsoft Teams messages from 359 Microsoft employees, according to Wiz.
When asked for comment, a Microsoft spokesperson said: "We have confirmed that no customer data was exposed, and no other internal services were put at risk."