Bumble has announced a new safety update that lets users report accounts that appear to be AI-generated. According to a survey conducted by Bumble, 71% of Gen Z and Millennial respondents felt the use of AI-generated content, such as profile photos and bios, should be limited.

As AI continues to make its way into the dating app industry, Tinder announced earlier this week an AI-backed ‘Photo Selector’ that will reportedly help users choose the best pictures for their dating profiles. Tinder, however, does not appear to offer a feature for reporting AI-generated accounts.

“An essential part of creating a space to build meaningful connections is removing any element that is misleading or dangerous. We are committed to continually improving our technology to ensure that Bumble is a safe and trusted dating environment. By introducing this new reporting option, we can better understand how bad actors and fake profiles are using AI disingenuously, so our community feels confident in making connections,” said Risa Stein, VP of Product at Bumble.

This is not the first time Bumble has used AI in its services. The dating app has previously rolled out several features for its users, including a tool that identifies fake profiles before they reach other users, a tool that blurs potentially nude images, and a feature that analyses user preferences to surface potential matches.
