In this blog post, we’ll cover the essential security features that the Android and iOS platforms come with out of the box.
Over the years, operating systems have continuously gained additional security features with each new release and have become more and more robust and secure. However, it isn't only the developers at Google and Apple who are busy: security researchers and attackers are also constantly working to uncover possible vulnerabilities.
A look at the functions integrated out of the box shows that essential basic protection for users and data is available. So why do data breaches and security incidents keep happening, especially in the mobile world?
Although Android and iOS come with numerous useful features and good default settings, users like to loosen these security screws. Mostly for reasons of convenience, predefined security levels are deliberately dialed down. Often users don't see themselves as a target for attackers, assuming that far more attractive targets are available out there in the world. That is a widespread misconception!
The fact that security incidents don't happen even more often is partly due to the existing security features. You'll find an overview of the most important ones below. Make sure that you and your users take advantage of these features and only turn them off when there are good reasons to do so.
Sandboxing
A crucial element of mobile operating systems (and now also of desktop operating systems) is sandboxing. Sandboxing confines an app to its own sandbox: the app is effectively locked in and may only read and write files in the storage area allocated specifically to it. Access to the data of other apps or of the operating system is only possible via defined interfaces provided by the operating system and requires further authorizations.
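On Android, for instance, an app's sandbox corresponds to its private data directory. The following Kotlin sketch (file and function names are purely illustrative) writes and reads a file in this app-private area; no authorization is required, and no other app can access the file:

```kotlin
import android.content.Context
import java.io.File

// Files under context.filesDir live inside the app's sandbox
// (e.g., /data/data/<package>/files/). No permission is needed,
// and other apps cannot read them.
fun writePrivateNote(context: Context, text: String) {
    File(context.filesDir, "note.txt").writeText(text)
}

fun readPrivateNote(context: Context): String? {
    val file = File(context.filesDir, "note.txt")
    return if (file.exists()) file.readText() else null
}
```

Accessing shared storage, contacts, or the camera, by contrast, means leaving this sandbox and is only possible through the authorization mechanisms described next.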
Authorization Concept
If an app wants to do more than act within its own storage area (and this is the case in 99.9% of all apps, because even accessing the internet means that data leaves the sandbox), it must request authorizations.
Some authorizations are granted automatically (such as access to the internet); others must be explicitly granted by the user. This is the case, for example, when accessing the address book, calendar, or camera. Although the individual permissions on Android and iOS are very similar, the way they were granted remained radically different until Android 6 (Marshmallow).
Prior to Android 6, all requested authorizations had to be agreed to when installing an app (the all-or-nothing principle). If a user didn’t agree with an authorization, they had to accept it reluctantly or, alternatively, refrain from using the app. Since Android 6.0, Android has adapted to Apple’s policy and now assigns authorizations dynamically and individually at runtime. The dynamic assignment of authorizations provides two significant advantages:
- Granularity: Authorizations can be granted to or revoked from an app individually.
- Comprehensibility: When all authorizations have to be granted before an app can even be installed, the user often can't judge what they are actually needed for; a request made in context at runtime is much easier to assess.
With the introduction of Android 11, the authorization concept was further refined. Since then, permissions for location, camera, or microphone can be granted for one-time use only. Moreover, these permissions are automatically revoked if the app isn't used for a longer period of time (a few months). A comparable concept was introduced for iOS with iOS 13.
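As a minimal sketch of the dynamic model, the following Kotlin code (class and helper names are illustrative) requests the camera authorization only at the moment the feature is actually used, as Android 6 and later expect:

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat

class CameraActivity : AppCompatActivity() {

    // Launcher that receives the user's decision at runtime.
    private val requestCamera =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) startCamera() else showFallback()
        }

    fun openCameraIfAllowed() {
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            == PackageManager.PERMISSION_GRANTED
        ) {
            startCamera()                                    // already authorized
        } else {
            requestCamera.launch(Manifest.permission.CAMERA) // ask in context
        }
    }

    private fun startCamera() { /* start the camera preview */ }
    private fun showFallback() { /* explain the benefit or degrade gracefully */ }
}
```

The permission must still be declared in the app's manifest; the user's decision, however, is only requested at runtime and can be revoked again at any time.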
In theory, this is certainly a good approach to better privacy, but in everyday use it again demands more effort and more decisions from the user.
Protection against Brute-Force Attacks when the Screen Is Locked
Often, the only thing protecting the data on a mobile device from an attacker is the device lock. To unlock the device, the user must enter a PIN or password, trace a pattern, or scan their fingerprint or face, depending on the settings. If the user selects a password, PIN, and/or fingerprint/face scan as the locking method (a PIN must be set alongside the biometric methods), Android and iOS provide integrated protection mechanisms in the form of timeouts: if the PIN is entered incorrectly too often, the device blocks further input for a certain period of time.
Rising Intervals on iOS: Android blocks new input for 30 seconds after too many failed attempts. iOS increases the timeouts and uses the following staggered levels: 1 minute, 5 minutes, 15 minutes, 60 minutes, and finally indefinitely. Once the device is locked for good, it must be connected to a computer running macOS or to iTunes on Windows.
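The escalation logic can be pictured roughly like this Kotlin sketch; the mapping of attempt counts to waiting times is illustrative and not Apple's exact implementation:

```kotlin
import kotlin.time.Duration
import kotlin.time.Duration.Companion.minutes

// Illustrative escalation of lockout times after failed unlock attempts.
fun lockoutAfter(failedAttempts: Int): Duration = when {
    failedAttempts <= 5 -> Duration.ZERO      // a few attempts without delay
    failedAttempts == 6 -> 1.minutes
    failedAttempts == 7 -> 5.minutes
    failedAttempts == 8 -> 15.minutes
    failedAttempts == 9 -> 60.minutes
    else                -> Duration.INFINITE  // locked for good; restore required
}
```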
Device Encryption
For several years now, iOS has provided native, hardware-backed encryption of the built-in flash memory that can't be deactivated. Android has also offered device encryption for quite some time. Depending on the Android version and the device, encryption is either enabled automatically after device setup or must be enabled manually. Given a general lack of security awareness, many users are likely to skip this step.
Unlike on iOS, Android encryption is software-based in most cases, so the cryptographic key material resides in the device's memory and is thus potentially readable. This is probably due to the wide variety of hardware used in Android devices.
When the device is first started, Android generates a 128-bit symmetric master key, which is itself stored in encrypted form. A user secret (password, PIN, or unlock pattern) is factored into this encryption. The strength of the encrypted storage of the master key thus depends directly on the strength of the user's screen lock. Without a lock such as a PIN, password, or pattern, a generic default password is used, which doesn't provide sufficient security but at least prevents the master key from being stored in plain text.
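Conceptually, the scheme looks roughly like the following Kotlin sketch (simplified and purely illustrative; Android's real implementation works at a much lower level): the randomly generated master key is never stored in plain text but is wrapped with a key derived from the user's secret.

```kotlin
import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.SecretKeyFactory
import javax.crypto.spec.GCMParameterSpec
import javax.crypto.spec.PBEKeySpec
import javax.crypto.spec.SecretKeySpec

// Wrap (encrypt) a randomly generated master key with a key derived from the
// user's PIN/password. Only the wrapped key, salt, and IV are persisted.
fun wrapMasterKey(masterKey: ByteArray, userPin: CharArray): Triple<ByteArray, ByteArray, ByteArray> {
    val salt = ByteArray(16).also { SecureRandom().nextBytes(it) }
    val iv = ByteArray(12).also { SecureRandom().nextBytes(it) }

    // Derive a 128-bit wrapping key from the user secret (PBKDF2, many iterations).
    val derived = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
        .generateSecret(PBEKeySpec(userPin, salt, 100_000, 128)).encoded
    val wrappingKey = SecretKeySpec(derived, "AES")

    // Encrypt the master key; a weak PIN means a weakly protected master key.
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, wrappingKey, GCMParameterSpec(128, iv))
    return Triple(cipher.doFinal(masterKey), salt, iv)
}
```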
iOS takes device encryption a step further, both conceptually and hardware-wise. Between the persistent flash memory and the RAM is an AES-256 crypto engine that takes care of transparent encryption. Among other things, a globally unique ID (UID), which is burned into the application processor during production, flows into the encryption.
In addition to full hardware-based device encryption, further file-level encryption mechanisms are available. For example, each newly created file is encrypted with its own 256-bit AES key. File encryption is organized hierarchically: depending on the file, the user's code lock also flows into the encryption, so the user can directly influence the strength of the encryption by choosing a strong code. Apple's security whitepaper shows the encryption hierarchy in a diagram.
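The hierarchy can be illustrated with the following Kotlin sketch (again purely conceptual; Apple's Data Protection relies on dedicated hardware and so-called protection classes): every file gets its own random key, and that file key is in turn wrapped with a higher-level class key that may depend on the user's passcode.

```kotlin
import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import javax.crypto.spec.GCMParameterSpec

// Hierarchical, per-file encryption: file contents -> per-file key -> class key.
fun encryptFileHierarchically(plainFile: ByteArray, classKey: SecretKey): Pair<ByteArray, ByteArray> {
    // 1. A fresh 256-bit AES key that is used for this file only.
    val fileKey = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()

    // 2. Encrypt the file contents with the per-file key.
    val fileIv = ByteArray(12).also { SecureRandom().nextBytes(it) }
    val fileCipher = Cipher.getInstance("AES/GCM/NoPadding")
    fileCipher.init(Cipher.ENCRYPT_MODE, fileKey, GCMParameterSpec(128, fileIv))
    val encryptedFile = fileIv + fileCipher.doFinal(plainFile)

    // 3. Wrap the per-file key with the class key, which itself sits higher in
    //    the hierarchy (e.g., derived from the passcode and a device key).
    val wrapIv = ByteArray(12).also { SecureRandom().nextBytes(it) }
    val wrapCipher = Cipher.getInstance("AES/GCM/NoPadding")
    wrapCipher.init(Cipher.ENCRYPT_MODE, classKey, GCMParameterSpec(128, wrapIv))
    val wrappedFileKey = wrapIv + wrapCipher.doFinal(fileKey.encoded)

    return Pair(encryptedFile, wrappedFileKey)
}
```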
If Touch ID or Face ID (authentication on the device via fingerprint or face recognition, respectively) is used, the encryption is stronger, because the entropy (i.e., the amount of randomness in the data) of a fingerprint or face is greater than that of a chosen password of acceptable length.
Patch Days
How do you define secure software? This question usually draws very different answers. If you were to conduct a qualitative survey of the most insecure software products, companies like Microsoft or Adobe would rank high on the list. Apple and Linux operating systems, on the other hand, are widely regarded as supposedly secure software. If you look for justifications for these claims, you'll come across the numerous security updates that Microsoft and Adobe roll out on their monthly patch days.
However, the quantity of vulnerabilities isn't a measure of the security level of a piece of software. All software has vulnerabilities; some software is simply more in the focus of attackers and security researchers, which inevitably exposes more vulnerabilities that are then closed by security updates. Vulnerabilities are cataloged according to the Common Vulnerabilities and Exposures (CVE) system and are assigned a metric-based severity rating.
Once a vulnerability has been identified, a CVE ID can be requested from MITRE, the administrator of CVE IDs. This process is optional. A myriad of statistics on MITRE-maintained CVE IDs can be viewed at http://www.cvedetails.com. In 2021, for example, Android ranked second among the systems with the highest number of vulnerabilities (http://s-prs.co/v5696170; http://s-prs.co/v5696171).
Security Indicators: The number of reported vulnerabilities isn’t a criterion for evaluating the security level of software. What’s much more relevant is the establishment of a process that covers vulnerability management, such as regular patch days.
Although the number of vulnerabilities doesn't by itself say anything about the quality of the software, the sheer number of Android vulnerabilities indicates a problem in how security flaws are handled. When vulnerabilities become known, that doesn't mean updates can automatically be delivered as well. The reason is simple, but the solution to the problem is all the more difficult, if not impossible, to implement.
Android, with its open-source code, allows customization by device manufacturers and providers, and this option is widely used in practice. As a result, a single, centrally maintained operating system such as Windows or iOS has never emerged; instead, the Android landscape is enormously heterogeneous. Although Google has now introduced a regular patch day and releases monthly security updates, only unmodified Android systems (stock Android) receive the patches.
Customized Android versions such as those from Samsung, Huawei, or numerous other manufacturers are much more common. These versions only receive updates when the manufacturer and provider themselves release them. This may happen late—which is a problem in itself—or not at all.
Google introduced so-called Google Play System Updates with Android 10. These allow Google to patch some central and important core components without having to wait for an official update from the manufacturer. While these security updates don't replace a full firmware update from a manufacturer, they're a good step toward better security in the Android world. Critical vulnerabilities such as Stagefright could have been closed this way.
The wide range of Android versions in use, from Android 6.x to 12.x, complicates the issue even more. And the number of vulnerabilities that are closed (or not) on such a patch day is sometimes immense. Up-to-date information about the issues fixed on patch days can be found at https://source.android.com/security/bulletin.
The problem is now common knowledge and is likely a decisive reason why companies usually prefer iOS devices over Android devices. Although the situation has at least partially improved in recent years (initially, there were no patch days for Android at all; individual manufacturers like Samsung have since introduced their own patch days, and important core components can now be patched directly by Google), it's far from ideal. Many Android users have to live with the presence of vulnerabilities.
In real life, this means that many users are running an Android operating system with more than 100 unpatched vulnerabilities. That's a bizarre scenario, considering that when a critical vulnerability in Windows, Office, or Adobe Acrobat Reader becomes known, all the alarm bells start ringing and applying the security update takes precedence over almost all other tasks. This is especially troubling given that more and more personal data worth protecting accumulates on mobile devices.
Editor’s note: This post has been adapted from a section of the book Hacking and Security: The Comprehensive Guide to Penetration Testing and Cybersecurity by Michael Kofler, Klaus Gebeshuber, Peter Kloep, Frank Neugebauer, André Zingsheim, Thomas Hackner, Markus Widl, Roland Aigner, Stefan Kania, Tobias Scheible, and Matthias Wübbeling.