With Machine Learning and Artificial Intelligence at the core, the Android P beta brings many features aimed at reducing the complexity of apps while maximising output.
Android P is the successor to Android Oreo and brings more intelligent features to the first batch of devices now eligible for the beta build. Google has outlined three basic principles for Android P – Intelligence, Simplicity, and Digital Wellbeing – and has developed new features to let users “do more by using less”.
With Machine Learning and Artificial Intelligence at the core, the Android P beta brings many features aimed at reducing the complexity of apps, maximising output, and simultaneously keeping track of any overuse of the phone. Here are the top features of Android P now available on select eligible devices.
Android P introduces Adaptive Battery, a feature that manages how apps consume battery. Rather than indiscriminately starving apps of power, Adaptive Battery learns the patterns of how you use apps on your phone and, thanks to a partnership with DeepMind, uses that model to direct more power to the apps you use most of the time.
Adaptive Battery prioritises battery power for the apps and services that are used most actively. It sorts apps by usage into four buckets – active, working set, frequent, and rare. Sameer Samat, Google’s Vice President of Product Management, said early results looked promising: the feature reduced CPU wakeups by up to 30 per cent on the device, in addition to keeping background processes on the small CPU cores.
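The bucketing idea can be sketched in a few lines. This is purely illustrative, not Google’s implementation: Android assigns buckets with an on-device ML model, and the hour thresholds below are assumptions for the sake of the example.

```kotlin
// Illustrative sketch of standby bucketing, not Google's implementation.
// Thresholds are hypothetical; Android P picks buckets with an ML model.
enum class Bucket { ACTIVE, WORKING_SET, FREQUENT, RARE }

fun bucketFor(hoursSinceLastUse: Long): Bucket = when {
    hoursSinceLastUse < 1   -> Bucket.ACTIVE       // in use now or moments ago
    hoursSinceLastUse < 24  -> Bucket.WORKING_SET  // used within the day
    hoursSinceLastUse < 168 -> Bucket.FREQUENT     // used within the week
    else                    -> Bucket.RARE         // seldom used: restrict work
}
```

An app in the rare bucket would then have its background jobs and network access deferred, which is where the battery saving comes from.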
Much like Adaptive Battery, Google is also rethinking the way you look at the phone’s screen at various times of the day. Android P brings Adaptive Brightness, which learns how you set the brightness in different surroundings and automatically sets it to that level the next time it detects the same conditions.
This is possible because of the deeper integration of Machine Learning (ML) into the core of Android P. Samat stressed that while auto-brightness has been available to Android users for a long time, it has not truly served its purpose as long as users still need to adjust the slider manually despite the automatic setting. Adaptive Brightness alleviates this by learning your preferences across different surroundings and times of the day.
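The idea of learning from the user’s slider adjustments can be sketched roughly as follows. This is a deliberately naive stand-in: the real feature uses an on-device ML model, whereas this sketch just remembers (ambient light, chosen brightness) samples and reuses the closest one.

```kotlin
import kotlin.math.abs

// Illustrative sketch only: remember the brightness the user chose at
// various ambient light levels, then predict from the closest remembered
// sample. Android P's actual Adaptive Brightness uses an ML model.
class BrightnessModel {
    private val samples = mutableListOf<Pair<Int, Int>>() // (lux, brightness %)

    fun recordAdjustment(lux: Int, brightnessPercent: Int) {
        samples += lux to brightnessPercent
    }

    fun predict(lux: Int, default: Int = 50): Int =
        samples.minByOrNull { abs(it.first - lux) }?.second ?: default
}
```

After recording that the user picks 20% in a dark room (10 lux) and 90% in daylight (10,000 lux), a bright-but-not-full-sun reading would map to the daylight preference without the user touching the slider.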
With Android P, users will be able to do more on their devices without actually opening an app. Google has introduced App Actions, which offers predictions of what you want to do next based on past usage patterns. With the help of Machine Learning, the phone keeps track of your activities – for example, calling a particular person at around 2pm, or listening to music while commuting in the evening – and then suggests these actions in the app drawer when the conditions match your preferences.
For example, say a user plugs headphones into the phone: Android P will predict the most likely next action and offer a suggestion. If the user usually opens Spotify to listen to music, Spotify will be offered as an App Action in the drawer. App Actions will show up in the Launcher, Smart Text Selection, the Play Store, the Google Search app, and Google Assistant.
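The headphone example boils down to predicting an app from a context signal. A minimal sketch, assuming a simple frequency count rather than the on-device ML model Google actually uses, might look like this (the context strings are hypothetical labels, not real Android identifiers):

```kotlin
// Illustrative sketch of context-based app prediction. Android P's App
// Actions use an on-device ML model; this just counts past (context, app)
// pairs and suggests the most common app for the current context.
class ActionPredictor {
    private val counts = mutableMapOf<Pair<String, String>, Int>()

    fun record(context: String, app: String) {
        val key = context to app
        counts[key] = (counts[key] ?: 0) + 1
    }

    // Most frequently used app for this context, or null if unseen.
    fun suggest(context: String): String? =
        counts.filterKeys { it.first == context }
            .maxByOrNull { it.value }?.key?.second
}
```

With a history of mostly opening Spotify when headphones are plugged in, `suggest("headphones_plugged")` would surface Spotify, mirroring the suggestion that appears in the drawer.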
Slices is a new feature that acts as a building block for App Actions. As the name suggests, the most used features and activities inside a particular app are presented to users as Slices, giving them an “even deeper look” into their favourite apps. For example, searching for Lyft in Google Search can show an interactive Slice with the price and ETA for a trip to work; tapping it eliminates the steps otherwise needed inside the app.
App Actions and Slices are two smaller components of a larger Machine Learning system that Google has developed and embedded in Android P, and Google wants developers to make the best use of it for their own apps as well. ML Kit is a set of cross-platform APIs that will be available to developers through Firebase on both the Android and iOS platforms.
ML Kit will allow developers to call predefined Machine Learning functions from their apps rather than building and training models themselves. It offers on-device APIs for text recognition, image labelling, and face detection, among others.
New Gestures and Buttons
Android P is not only banking on the suggestions and patterns proposed by Machine Learning, but also making the user experience simpler. With Android P, users would no longer need to press the traditional home, back, and menu buttons to browse their phones. Google has introduced gestures in Android P to substitute for the buttons: swiping up opens the recent apps menu, called Overview, and swiping up again opens the app drawer; a swipe down returns you to the home screen; and swiping left and right switches between recently opened apps.
In addition, Smart Text Selection now reads the context of selected text to offer app suggestions and related actions. For example, if you select a singer’s name, the Smart Text Selection box will offer to open the artist’s page in one of your favourite music apps on your phone.
Android P is also getting new Quick Settings, with more visually appealing icons and toggles for the settings in the notification shade. Besides, the screenshot controls have been relocated to a vertical bar on the right-hand side, placing them within easier reach than before.
Google is realising that technology is overshadowing the purposes for which it was originally conceived. People today are glued to their screens every minute of every hour, so much so that they begin to lose out on things that are far more worth being a part of. Google is helping users maintain their ‘digital wellbeing’ by explaining how JOMO (the Joy of Missing Out) is better than FOMO (the Fear of Missing Out), a fad of our times.
There are three ways Google is ensuring users’ digital wellbeing – the new Do Not Disturb mode, Android Dashboard, and Wind Down mode. Together, the three features help keep a user’s daily intake of apps, videos, and other content below a threshold so that the user can make time for other things. While the new Do Not Disturb mode offers a complete no-distraction mode on your phone, the other two depend on how you use your phone to prescribe switch-off times.
Android Dashboard keeps track of when and for how long each app has been used over time, so that total consumption can be recorded. It also shows the number of times a user has unlocked the phone or read notifications. The App Timer in the dashboard lets a user set time limits on apps and nudges him/her as a limit approaches; once the limit is up, the app icon is greyed out as a reminder to stop.
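The App Timer behaviour described above can be sketched as a small state check. The three states and the 90 per cent nudge threshold are assumptions for illustration; the source only says the user is nudged near the limit and the icon is greyed out afterwards.

```kotlin
// Illustrative sketch of App Timer behaviour: nudge the user as a daily
// limit approaches, then grey the icon out once it is used up. The 90%
// nudge threshold is an assumption, not a documented Android P value.
enum class TimerState { OK, NUDGE, PAUSED }

fun timerState(usedMinutes: Int, limitMinutes: Int): TimerState = when {
    usedMinutes >= limitMinutes          -> TimerState.PAUSED // icon greyed out
    usedMinutes * 10 >= limitMinutes * 9 -> TimerState.NUDGE  // past 90% of limit
    else                                 -> TimerState.OK
}
```

So with a 60-minute daily limit on an app, 55 minutes of use would trigger the nudge, and the full hour would grey the icon out for the rest of the day.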
The Wind Down mode essentially puts the phone into Do Not Disturb while fading the display colours to black and white to remind you to go to sleep. At night, the feature also turns on Night Light.