Android developers have a lot to look forward to in 2021, 2022, and beyond. Blockchain may decentralize how Android apps are developed, Flutter will see increased adoption for cross-platform development, and we expect big strides in AR and VR for the platform. Among the top trends in Android development, one potential innovation has caught the attention of savvy app developers: Microdroid.
Android developers and blogs were astir earlier this year when Google engineer Jiyong Park announced via the Android Open Source Project that they are working on a new, minimal Android-based Linux image called Microdroid.
Details about the project are scant, but it’s widely believed that Microdroid will essentially be a lighter version of the Android system image designed to run on virtual machines. Google is preparing for a world in which even smartphone OSes need a stripped-down version that can run through the cloud.
Working from a truncated Linux image, Microdroid will pull the system image from the device (tablet or phone), creating a simulated environment accessible from any remote device. It could enable a world in which users access Google Play and any Android app from any device.
What does this mean for developers?
Microdroid will open up new possibilities for Android apps in the embedded and IoT spaces, which may require automated management and a contained virtual machine to mitigate security risks. Cloud gaming, cloud computing—even smartphones with all features stored in the cloud—are possible. Although we will have to wait and see what big plans Google has for Microdroid and how Android developers capitalize on it, at this juncture it looks like the shift to the cloud may entail major changes in how we interact with our devices. App developers are keen to keep their eyes and heads in the cloud.
Although no timeline for release has been revealed yet, we expect more on Microdroid with the announcement of Android 12.
All signs point toward continued growth in the Augmented Reality space. As the latest generations of devices are equipped with enhanced hardware and camera features, applications employing AR have seen increasing adoption. While ARCore represents a breakthrough for the Android platform, it is not Google’s first endeavor into building an AR platform.
HISTORY OF GOOGLE AR
In summer 2014, Google launched its first AR platform, Project Tango.
Project Tango received consistent updates but never achieved mass adoption. Its functionality was limited to the three devices that could run it, including the Lenovo Phab 2 Pro, which ultimately suffered from numerous issues. While it was ahead of its time, it never received the level of hype ARKit did. In March 2018, Google announced that it would no longer support Project Tango and would continue AR development with ARCore.
ARCore uses three main technologies to integrate virtual content with the world through the camera:
It tracks the position of the device as it moves and gradually builds its own understanding of the real world. As of now, ARCore is available for development on the following devices:
ARCore and ARKit have quite a bit in common. Both are compatible with Unity, and both offer a similar level of capability for sensing changes in lighting and accessing motion sensors. When it comes to mapping, ARCore is ahead: it has access to a larger dataset, which boosts both the speed and quality of mapping achieved through the collection of 3D environmental information, while ARKit cannot store as much local condition data. ARCore also supports cross-platform development—meaning you can build ARCore applications for iOS devices—while ARKit is exclusively compatible with iOS devices.
The main cons of ARCore relative to ARKit have to do with adoption. In 2019, ARKit was on 650 million devices, while there were only 400 million ARCore-enabled devices. ARKit yields 4,000+ results on GitHub, while ARCore yields only 1,400+. And Apple’s hardware—particularly the TrueDepth camera—gives iOS devices an edge over most Android devices, meaning AR applications will often run better on iOS regardless of which platform they are built on.
It is safe to say that ARCore is the more robust platform for AR development; however, ARKit is the most popular and most widely usable AR platform. We recommend spending time determining the exact level of usability you need, as well as the demographics of your target audience.
In an era of rapid technological growth, certain technologies, such as artificial intelligence and the internet of things, have received mass adoption and become household names. One up-and-coming technology that has the potential to reach that level of adoption is LiDAR.
WHAT IS LIDAR?
LiDAR, or light detection and ranging, is a popular remote sensing method for measuring the exact distance of an object on the earth’s surface. First used in the 1960s, LiDAR has gradually seen increasing adoption, particularly after the creation of GPS in the 1980s, when it became a common technology for deriving precise geospatial measurements.
LiDAR requires three components: a scanner, a laser, and a GPS receiver. The laser emits pulsed light that travels to the ground and reflects off buildings, tree branches, and other surfaces; the reflected light energy then returns to the LiDAR sensor, where the round-trip time is recorded, while the GPS receiver pinpoints the system’s location. In combination with a photodetector and optics, this allows for ultra-precise distance detection and topographical data.
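The ranging arithmetic behind that round trip is simple. Here is a minimal sketch (our own illustration, not tied to any particular LiDAR product) of the time-of-flight calculation:

```python
# Illustrative time-of-flight calculation: a LiDAR sensor infers distance
# from how long a laser pulse takes to reflect back.
# distance = (speed of light * round-trip time) / 2

SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def lidar_distance(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface, in metres."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2

# A pulse returning after ~66.7 nanoseconds reflected off a surface
# roughly 10 metres away.
print(round(lidar_distance(66.7e-9), 2))  # → 10.0
```

The division by two accounts for the pulse travelling out to the surface and back again.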
WHY IS LIDAR IMPORTANT?
As we covered in our rundown of the iPhone 12, new iOS devices come equipped with a brand new LiDAR scanner. LiDAR now enters the hands of consumers who have Apple’s new generation of devices, enabling enhanced functionality and major opportunities for app developers. The proliferation of LiDAR signals that the technology is headed toward mass adoption and household-name status.
There are two types of LiDAR systems: terrestrial and airborne. Airborne LiDAR systems are mounted on drones or helicopters to derive exact distance measurements, while terrestrial LiDAR systems are mounted on moving vehicles to collect data points. Terrestrial systems are often used to monitor highways and have been employed by autonomous cars for years, while airborne systems are commonly used in environmental applications and for gathering topographical data.
With the future in mind, here are the top LiDAR trends to look out for moving forward:
SUPERCHARGING APPLE DEVICES
LiDAR enhances the camera on Apple devices significantly. Auto-focus is quicker and more effective on those devices. Moreover, it supercharges AR applications by greatly enhancing the speed and quality of a camera’s ability to track the location of people as well as place objects.
One of the major apps that received a functionality boost from LiDAR is Apple’s free Measure app, which can measure distance, dimensions, and even whether an object is level. The measurements determined by the app are significantly more accurate with the new LiDAR scanner, capable of replacing physical rulers, tape measures, and spirit levels.
Microsoft’s Seeing AI application is designed to help the visually impaired navigate their environment; LiDAR takes it to the next level. In conjunction with artificial intelligence, LiDAR enables the application to read text, identify products and colors, and describe people, scenes, and objects that appear in the viewfinder.
BIG INVESTMENTS BY AUTOMOTIVE COMPANIES
LiDAR plays a major role in autonomous vehicles, which rely on terrestrial LiDAR systems to self-navigate. Reports suggest that in 2018 the automotive segment accounted for roughly 90 percent of the LiDAR market. With self-driving cars inching toward mass adoption, expect major investments in LiDAR by automotive companies in 2021 and beyond.
Beyond commercial applications and the automotive industry, LiDAR is gradually seeing increased adoption for geoscience applications. The environmental segment of the LiDAR market is anticipated to grow at a CAGR of 32% through 2025. LiDAR is vital to geoscience applications for creating accurate and high-quality 3D data to study ecosystems of various wildlife species.
One of the main environmental uses of LiDAR is collecting topographic information on landscapes. Topographic LiDAR is expected to grow at a rate of over 25% in the coming years. These systems can see through forest canopy to produce the accurate 3D models of landscapes needed to create contours, digital terrain models, digital surface models, and more.
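The growth figures above are compound annual growth rates (CAGR). As a quick illustration with a hypothetical starting value, a segment growing at 32% per year roughly quadruples over five years:

```python
# Sketch of how a CAGR (compound annual growth rate) projection compounds.
# The $1B starting market size below is hypothetical, chosen only to show
# the arithmetic.

def project(start_value: float, cagr: float, years: int) -> float:
    """Compound start_value forward at `cagr` per year for `years` years."""
    return start_value * (1 + cagr) ** years

# A hypothetical $1B segment at a 32% CAGR over five years:
print(round(project(1.0, 0.32, 5), 2))  # → 4.01 (in billions)
```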
Geospatial technology describes a broad range of modern tools which enable the geographic mapping and analysis of Earth and human societies. Since the 19th century, geospatial technology has evolved as aerial photography and eventually satellite imaging revolutionized cartography and mapmaking.
Contemporary society now employs geospatial technology in a vast array of applications, from commercial satellite imaging, to GPS, to Geographic Information Systems (GIS) and Internet Mapping Technologies like Google Earth. The geospatial analytics market is currently valued between $35 and $40 billion with the market projected to hit $86 billion by 2023.
GEOSPATIAL 1.0 VS. 2.0
Geospatial technology has been in phase 1.0 for centuries; however, the boom in artificial intelligence and the IoT has made Geospatial 2.0 a reality. Geospatial 1.0 offers valuable information for analysts to view, analyze, and download geospatial data streams. Geospatial 2.0 takes it to the next level—harnessing artificial intelligence not only to collect data, but to process, model, and analyze it, and to make decisions based on that analysis.
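To make the distinction concrete, here is a toy sketch (hypothetical data and threshold, not a real geospatial model) of the 1.0-versus-2.0 difference:

```python
# Toy contrast between the two phases: Geospatial 1.0 stops at serving raw
# readings for a human analyst, while Geospatial 2.0 runs them through a
# model and emits a decision automatically. The vegetation-index threshold
# here is a made-up stand-in for a real analytical model.

readings = [0.82, 0.75, 0.31, 0.28, 0.79]  # hypothetical vegetation indices

def geospatial_1_0(data):
    """Phase 1.0: expose the data stream for a human to analyze."""
    return data

def geospatial_2_0(data, stress_threshold=0.4):
    """Phase 2.0: analyze the stream and recommend an action per reading."""
    return ["inspect" if r < stress_threshold else "ok" for r in data]

print(geospatial_1_0(readings))   # raw data, decisions left to the analyst
print(geospatial_2_0(readings))   # → ['ok', 'ok', 'inspect', 'inspect', 'ok']
```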
When empowered by artificial intelligence, geospatial 2.0 technology has the potential to revolutionize a number of verticals. Savvy application developers and government agencies in particular have rushed to the forefront of creating cutting edge solutions with the technology.
PLATFORM AS A SERVICE (PaaS) SOLUTIONS
Effective geospatial 2.0 solutions require a deep vertical-specific knowledge of client needs, which has lagged behind the technical capabilities of the platform. The bulk of currently available geospatial 2.0 technologies are offered as “one-size-fits-all” Platform as a Service (PaaS) solutions. The challenge for PaaS providers is that they need to serve a wide collection of use cases, harmonizing data from multiple sensors together while enabling users to simply understand and address the many different insights which can be gleaned from the data.
In precision agriculture, FarmShots offers precise, frequent imagery to farmers along with meaningful analysis of field variability, damage extent, and the effects of applications through time.
In the disaster management field, Mayday offers a centralized artificial intelligence platform with real-time disaster information. Another geospatial 2.0 application Cloud to Street uses a mix of AI and satellites to track floods in near real-time, offering extremely valuable information to both insurance companies and municipalities.
The growing complexity of environmental concerns has led to a number of applications of geospatial 2.0 technology aimed at creating a safer, more sustainable world. For example, geospatial technology can measure carbon sequestration, tree density, green cover, carbon credits, and tree age. It can provide vulnerability assessment surveys in disaster-prone areas. It can also help urban planners and governments plan and implement community mapping and equitable housing. Geospatial 2.0 can analyze a confluence of factors and create actionable insight for analyzing and honing our environmental practices.
As geospatial 1.0 models are upgraded to geospatial 2.0, expect to see more robust solutions incorporating AI-powered analytics. A survey of working professionals conducted by Geospatial World found that geospatial technology will likely make the biggest impact in the climate and environment field.
Geospatial 2.0 platforms are expensive to employ and require quite a bit of development, but the technology offers great potential to increase revenue and efficiency across a number of verticals. In addition, it may be a key technology for cutting down our carbon footprint and creating a safer, more sustainable world.
Mobile app marketing is an elusive and constantly evolving field. For mobile app developers, getting new users to install games is relatively cheap at just $1.47 per user; converting them is much more difficult, costing an average of $43.88 to prompt a customer to make an in-app purchase, according to Liftoff. An effective advertising strategy will make or break your app—and your bank. In 2019, in-game ads made up 17% of all revenue; by 2024, that number is expected to triple.
2020 was a year that saw drastic changes in lifestyle—mobile app users were no exception. What trends are driving app developers to refine their advertising and development tactics in 2021? Check out our rundown below.
Real Time Bidding
In-app bidding is an advanced advertising method enabling mobile publishers to sell their ad inventory in an automated auction. The technology is not new—it has been around since 2015, when it was primarily used on desktop. Over the past few years, however, both publishers and advertisers have benefited from in-app bidding, eschewing the traditional waterfall method.
In-app bidding enables publishers to sell their ad space at auction, with advertisers bidding against one another simultaneously. The denser competition yields a higher price (CPM) for publishers, and for advertisers, bidding decreases fragmentation between demand sources since they can bid on many at once. In the traditional waterfall method, ad mediation platforms prioritize ad networks they’ve worked with in the past before passing inventory on to premium ad networks. In-app bidding changes the game by letting publishers offer their inventory in auctions that include a much wider swath of advertisers beyond the traditional waterfall.
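A minimal simulation, with made-up networks and CPMs, of why a unified auction tends to clear at a higher price than the waterfall's first acceptable bid:

```python
# The waterfall polls networks in a fixed priority order and takes the
# first price that clears the floor; a unified auction lets every demand
# source compete at once and takes the highest bid. All numbers are
# hypothetical.

bids = {"network_a": 2.10, "network_b": 3.40, "network_c": 2.75}  # CPMs

def waterfall(priority_order, floor):
    """Return the first network in priority order that clears the floor."""
    for network in priority_order:
        if bids[network] >= floor:
            return network, bids[network]
    return None, 0.0

def auction():
    """Every bidder competes simultaneously; the highest CPM wins."""
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

# Historical priority puts network_a first, so the waterfall settles early:
print(waterfall(["network_a", "network_b", "network_c"], floor=2.0))
# The auction surfaces the genuinely highest bid:
print(auction())
```

Here the waterfall clears at a 2.10 CPM because the first-priority network beat the floor, while the auction clears at 3.40.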
Bidding benefits all parties. App publishers see increased demand for ad inventory, advertisers access more inventory, and app users see more relevant ads. In 2021, many expect in-app bidding to gain more mainstream popularity. Check out this great rundown by AdExchanger for more information on this exciting new trend.
Rewarded Ads Still King
We have long championed rewarded ads on the Mystic Media blog. Rewarded ads offer in-game rewards to users who voluntarily choose to view an ad. Everyone wins—users get tangible rewards for their time, publishers get advertising revenue and advertisers get valuable impressions.
App usage data from 2021 only increases our enthusiasm for the format: 71% of mobile gamers want the ability to choose whether or not to view an ad, and 31% of gamers said rewarded video prompted them to browse for products within a month of seeing them. Leyi Games implemented rewarded video and improved player retention while bringing in an additional $1.5 million USD.
Facebook’s 2020 report showed that gamers find rewarded ads to be the least disruptive ad format, leading to longer gameplay sessions and more opportunities for content discovery.
Playable ads have emerged as one of the foremost employed advertising tactics for mobile games. Playable ads enable users to sample gameplay by interacting with the ad. After a snippet of gameplay, the ad transitions into a call to action to install the game.
The benefits are obvious. If the game is fun and absorbing to the viewer, it has a much better chance of getting installed. By putting the audience in the driver’s seat, playable ads drive increased retention rates and a larger number of high lifetime value (LTV) players.
As we are bombarded with more and more media on a daily basis, finding a way to deliver a concise message while cutting through the clutter can be exceptionally difficult. However, recent research from MAGNA, IPG Media Lab, and Snap Inc. shows it may be well worth it.
Studies show 6-second video ads drive nearly identical brand preference and purchase intent to 15-second ads. Whereas short-form ads were once employed predominantly to grow awareness, marketers now understand that longer ads are perceived by users as more intrusive, and that they can get just as much ROI out of shorter, less expensive content.
Check out the graph below, breaking down the efficacy of 6 second vs. 15 second ads via Business of Apps.
Mobile advertisers need to think big picture in terms of both their target customer and how they format their ads to best engage their audience. While the trends we outlined are currently in the zeitgeist, ultimately what matters most is engaging app users with effective content that delivers a valuable message without intruding on their experience on the app.
We have covered the evolution of the Internet of Things (IoT) and Artificial Intelligence (AI) over the years as they have gained prominence. IoT devices collect a massive amount of data; Cisco projects that by the end of 2021, IoT devices will collect over 800 zettabytes of data per year. Meanwhile, AI algorithms can parse through big data and teach themselves to analyze and identify patterns to make predictions. Both technologies enable a seemingly endless array of applications and have had a massive impact on many industry verticals.
What happens when you merge them? The result is aptly named the AIoT (Artificial Intelligence of Things) and it will take IoT devices to the next level.
WHAT IS AIOT?
AIoT is any system that integrates AI technologies with IoT infrastructure, enhancing efficiency, human-machine interactions, data management and analytics.
IoT enables devices to collect, store, and analyze big data. Device operators and field engineers typically control devices. AI enhances IoT’s existing systems, enabling them to take the next step to determine and take the appropriate action based on the analysis of the data.
By embedding AI into infrastructure components, including programs, chipsets, and edge computing, AIoT enables intelligent, connected systems to learn, self-correct and self-diagnose potential issues.
One common example comes from the surveillance field. A surveillance camera can be used as an image sensor, sending every frame to an IoT system that analyzes the feed for certain objects. With AI analyzing frames on the device, only frames containing a specific object need to be sent—significantly speeding up the process while reducing the amount of data generated, since irrelevant frames are excluded.
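That filtering pattern can be sketched in a few lines; the "detector" below is a stand-in threshold, not a real vision model:

```python
# Edge-filtering sketch: instead of uploading every frame, an on-device
# model scores each frame and only flagged frames are forwarded upstream.

def detect_person(frame_score: float, threshold: float = 0.8) -> bool:
    """Stand-in for an on-device AI detector returning True on a hit."""
    return frame_score >= threshold

def filter_frames(frames):
    """Forward only the frames the detector flags."""
    return [f for f in frames if detect_person(f["score"])]

frames = [
    {"id": 1, "score": 0.10},
    {"id": 2, "score": 0.95},  # object of interest detected
    {"id": 3, "score": 0.40},
    {"id": 4, "score": 0.85},  # object of interest detected
]
forwarded = filter_frames(frames)
print([f["id"] for f in forwarded])  # → [2, 4]
```

Only two of the four frames leave the device; the rest are discarded at the edge.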
While AIoT will no doubt find a variety of applications across industries, the three segments we expect to see the most impact on are wearables, smart cities, and retail.
The global wearable device market is estimated to hit more than $87 billion by 2022. AI applications on wearable devices such as smartwatches pose a number of potential applications, particularly in the healthtech sector.
We’ve previously explored the future of smart cities in our blog series A Smarter World. With cities eager to invest in improving public safety, transport, and energy efficiency, AIoT will drive innovation in the smart city space.
There are a number of potential applications for AIoT in smart cities. AIoT’s ability to analyze data and act opens up a number of possibilities for optimizing energy consumption for IoT systems. Smart streetlights and energy grids can analyze data to reduce wasted energy without inconveniencing citizens.
Some smart cities have already adopted AIoT applications in the transportation space. New Delhi, which suffers some of the worst traffic in the world, features an Intelligent Transport Management System (ITMS) that makes real-time, dynamic decisions on traffic flows to ease congestion.
One of the big innovations for AIoT involves smart shopping carts. Grocery stores in both Canada and the United States are experimenting with high-tech shopping carts, including one from Caper which uses image recognition and built-in sensors to determine what a person puts into the shopping cart.
The potential for smart shopping carts is vast—these carts will be able to inform customers of deals and promotions, recommend products based on their buying decisions, let them view an itemized list of their current purchases, and incorporate indoor navigation to lead them to desired items.
A smart shopping cart company called IMAGR recently raised $14 million in a pre-Series A funding round, pointing toward a bright future for smart shopping carts.
AIoT represents the intersection of AI, IoT, 5G, and big data. 5G enables the cloud processing power for IoT devices to employ AI algorithms to analyze big data to determine and enact action items. These technologies are all relatively young, and as they continue to grow, they will empower innovators to build a smarter future for our world.
We expect a continued increase in the utilization of AR in 2021. The iPhone 12 contains LiDAR technology, which enables the use of ARKit 4, greatly enhancing the possibilities for developers. When creating an AR application, developers must consider a variety of methods for triggering the experience and answer several questions before determining what approach will best facilitate the creation of a digital world for their users. For example, what content will be displayed? Where will this content be placed, and in what context will the user see it?
Markerless AR can best be used when the user needs to control the placement of the AR object. For example, the IKEA Place app allows the user to place furniture in their home to see how it fits.
Location-based AR roots an AR experience to a physical space in the world, as we explored previously in our blog Learn How Apple Tightened Their Hold on the AR Market with the Release of ARKit 4. ARKit 4 introduces Location Anchors, which enable developers to set virtual content at specific geographic coordinates (latitude, longitude, and altitude). To provide more accuracy than location alone, location anchors also use the device’s camera to capture landmarks and match them with a localization map downloaded from Apple Maps. Location anchors greatly enhance the potential for location-based AR; however, the possibilities are limited to the 50-plus cities where Apple has enabled them.
Marker-based AR remains the most popular method among app developers. When an application needs to know precisely what the user is looking at, accept no substitute. In marker-based AR, 3D AR models are generated using a specific marker, which triggers the display of virtual information. There are a variety of AR markers that can trigger this information, each with its own pros and cons. Below, please find our rundown of the most popular types of AR markers.
The most popular AR marker is a framemarker, or border marker. It’s usually a 2D image printed on a piece of paper with a prominent border. During the tracking phase, the device will search for the exterior border in order to determine the real marker within.
Framemarkers are similar to QR codes in that both are codes printed on images that a handheld device scans; however, framemarkers trigger AR experiences, whereas QR codes redirect the user to a web page. Framemarkers are a straightforward and effective solution.
Framemarkers are particularly popular in advertising applications. Absolut Vodka’s Absolute Truth application enabled users to scan a framemarker on the label of their bottle to generate a slew of additional information, including recipes and ads.
NFT, or Natural Feature Tracking, enables cameras to trigger an AR experience without borders. The camera takes an image, such as the one above, and distills down its visual properties, as below.
The result of processing the features can generate AR, as below.
The quality and stability of NFT markers can oscillate based on the framework employed. For this reason, they are used less frequently than border markers, but they function as a more visually subtle alternative. A scavenger hunt or an AR game might hide key information in NFT markers.
Advancements in technology have enabled mobile devices to solve the problem of SLAM (simultaneous localization and mapping). The device camera can extract information in real time and use it to place a virtual object in the environment. In some frameworks, physical objects can become 3D markers. Vuforia Object Scanner is one such framework, creating object data files that can be used as targets in applications. Virtual Reality Pop offers a great rundown of the best object recognition frameworks for AR.
Although RFID tags are primarily used for short-distance wireless communication and contact-free payment, they can also be used to trigger location-based virtual information.
While RFID Tags are not widely employed, several researchers have written articles about the potential usages for RFID and AR. Researchers at the ARATLab at the National University of Singapore have combined augmented reality and RFID for the assembly of objects with embedded RFID tags, showing people how to properly assemble the parts, as demonstrated in the video below.
Speech can also be used as a non-visual AR marker. The most common application for this would be for AR glasses or a smart windshield that displays information through the screen requested by the user via vocal commands.
Think like a user—it’s a staple credo for app developers, and no less relevant in crafting AR experiences. Each AR trigger offers unique pros and cons. We hope this rundown helps you decide which is best equipped for your application.
Since the explosive launch of Pokemon Go, AR technologies have vastly improved. Our review of the iPhone 12 concluded that as Apple continues to optimize its hardware, AR will become more prominent in both applications and marketing.
At the 2020 WWDC in June, Apple announced ARKit 4, their latest iteration of the famed augmented reality platform. ARKit 4 features some vast improvements that help Apple tighten their hold on the AR market.
ARKit 4 introduces location anchors, which allow developers to set virtual content at specific geographic coordinates (latitude, longitude, and altitude). When rebuilding the data backend for Apple Maps, Apple collected camera and 3D LiDAR data from city streets across the globe. ARKit downloads the virtual map surrounding your device from the cloud and matches it with the device’s camera feed to determine your location. The kicker: all processing happens via machine learning on the device, so your camera feed never leaves it.
Devices with an A12 chip or later can run geo-tracking; however, location anchors require Apple to have mapped the area previously. As of now, they are supported in over 50 cities in the U.S. As the availability of compatible devices increases and Apple continues to expand its mapping project, location anchors will see increased usage.
ARKit’s new Depth API harnesses the LiDAR scanner available on iPad Pro and iPhone 12 devices to introduce advanced scene understanding and enhanced pixel depth information in AR applications. When combined with 3D mesh data derived from Scene Geometry, which creates a 3D matrix of readings of the environment, the Depth API vastly improves virtual object occlusion features. The result is the instant placement of digital objects and seamless blending with their physical surroundings.
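The occlusion logic itself reduces to a per-pixel depth comparison; here is a toy illustration (our own sketch, not Apple's API):

```python
# Depth-based occlusion in miniature: a virtual pixel is drawn only where
# the virtual object is nearer to the camera than the real surface
# reported by the depth map. Depth values are hypothetical, in metres.

def composite(scene_depth, virtual_depth):
    """Per-pixel mask: True where the virtual object should be visible."""
    return [
        [v is not None and v < s for s, v in zip(srow, vrow)]
        for srow, vrow in zip(scene_depth, virtual_depth)
    ]

# Real-world depths; the 0.5 m column is a nearby object (e.g. a pillar).
scene = [[2.0, 2.0, 0.5],
         [2.0, 2.0, 0.5]]
# Virtual object placed 1 m away; None means no virtual content there.
virtual = [[1.0, None, 1.0],
           [1.0, 1.0, 1.0]]

# The virtual object is hidden wherever the pillar (0.5 m) is in front.
print(composite(scene, virtual))
```

Wherever the real surface is closer than the virtual object, the mask is False and the physical world shows through, which is exactly the blending effect the Depth API enables.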
Face tracking has found an exceptional application in Memojis, which enable fun AR experiences on devices with a TrueDepth camera. ARKit 4 expands face-tracking support to devices without a TrueDepth camera, provided they have at least an A12 chip. Devices with a TrueDepth camera can now leverage ARKit 4 to track up to three faces at once, opening up many fun potential applications for Memojis.
VIDEO MATERIALS WITH REALITYKIT
ARKit 4 also brings with it RealityKit, which adds support for applying video textures and materials to AR experiences. For example, developers will be able to place a virtual television on a wall, complete with realistic attributes, including light emission, texture roughness, and even audio. Consequently, AR developers can build even more immersive and realistic experiences for their users.
Apple and Google are competing for supremacy in AR development. While the two companies’ goals and research overlap, Apple has a major leg up on Google in its massive base of high-end devices and its ability to imbue them with the necessary structure sensors, like TrueDepth and LiDAR.
ARKit has been the biggest AR development platform since it hit the market in 2017. ARKit 4 provides the technical capabilities and tools for innovators and creative thinkers to build a new world of virtual integration.
In 2020, worldwide music streaming revenue hit 11.4 billion dollars, a 2800% growth over the course of a decade. Three hundred forty-one million paid online streaming subscribers get their music from top services like Apple Music, Spotify, and Tidal. The competition for listeners is fierce. Each company looks to leverage every advantage they can in pursuit of higher market share.
Like all major tech conglomerates, music streaming services collect an exceptional amount of user data through their platforms and are creating elaborate AI algorithms designed to improve user experience on a number of levels. Spotify has emerged as the largest on-demand music service active today and bolstered its success through the innovative use of AI.
Here are the top ways in which AI has changed music streaming:
AI has the ability to sift through a plenitude of implicit consumer data, including:
Geographic location of listeners
Most used devices
AI algorithms can analyze user trends and identify users with similar tastes. For example, if AI deduces that User 1 and User 2 have similar tastes, then it can infer that songs User 1 has liked will also be enjoyed by User 2. Spotify’s algorithms will leverage this information to provide recommendations for User 2 based on what User 1 likes, but User 2 has yet to hear.
The result is not only improved recommendations, but greater exposure for artists that otherwise may not have been organically found by User 2.
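A bare-bones sketch of that user-similarity idea, with hypothetical listening data:

```python
# Collaborative-filtering sketch: find the user most similar to the target
# by overlap in liked songs, then recommend the neighbour's likes the
# target hasn't heard yet. Users and songs below are made up.

likes = {
    "user_1": {"song_a", "song_b", "song_c", "song_d"},
    "user_2": {"song_a", "song_b", "song_c"},
    "user_3": {"song_x", "song_y"},
}

def jaccard(a, b):
    """Similarity as intersection over union of liked-song sets."""
    return len(a & b) / len(a | b)

def recommend(target):
    """Recommend songs liked by the most similar user but unheard by target."""
    others = [u for u in likes if u != target]
    neighbour = max(others, key=lambda u: jaccard(likes[target], likes[u]))
    return sorted(likes[neighbour] - likes[target])

print(recommend("user_2"))  # → ['song_d']
```

Because user_1 and user_2 share three of four likes, the algorithm infers user_2 will probably enjoy user_1's remaining like, song_d, even though user_2 never found it organically.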
NATURAL LANGUAGE PROCESSING
Natural Language Processing is a burgeoning field in AI. Previously in our blog, we covered GPT-3, the latest Natural Language Processing (NLP) technology developed by OpenAI. Music streaming services are well-versed in the technology and leverage it in a variety of ways to enhance UI.
Algorithms scan a track’s metadata, in addition to blog posts, discussions, and news articles about artists or songs on the internet to determine connections. When artists/songs are mentioned alongside artists/songs the user likes, algorithms make connections that fuel future recommendations.
GPT-3 is not perfect; its ability to track sentiments lacks nuance. As Sonos Radio general manager Ryan Taylor recently said to Fortune Magazine: “The truth is music is entirely subjective… There’s a reason why you listen to Anderson .Paak instead of a song that sounds exactly like Anderson .Paak.”
As NLP technology evolves and algorithms extend their grasp of the nuances of language, so will the recommendations provided to you by music streaming services.
AI can study audio models to categorize songs exclusively based on their waveforms. This scientific, binary approach to analyzing creative work enables streaming services to categorize songs and create recommendations regardless of the amount of coverage a song or artist has received.
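As a toy illustration of the waveform-driven approach (made-up feature vectors, not real audio analysis):

```python
# Waveform-similarity sketch: reduce each track to a small feature vector
# (here, fabricated per-band energy values) and compare tracks by cosine
# similarity. No metadata or press coverage is involved.
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical per-band energy features for three tracks.
features = {
    "track_a": [0.9, 0.1, 0.2],
    "track_b": [0.8, 0.2, 0.3],  # a similar energy profile to track_a
    "track_c": [0.1, 0.9, 0.8],  # a very different profile
}

def most_similar(track):
    """Return the other track whose feature vector is closest."""
    others = [t for t in features if t != track]
    return max(others, key=lambda t: cosine(features[track], features[t]))

print(most_similar("track_a"))  # → track_b
```

Because the comparison uses only the audio-derived vectors, an unknown track with zero coverage can still surface as a recommendation.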
Artist royalty payments on streaming services pose their own challenges and shortcomings. Royalties are computed from trillions of data points. Luckily, blockchain is helping to facilitate a smoother artist payment process: the technology can make it not only more transparent but also more efficient. Spotify recently acquired the blockchain company Mediachain Labs, which many pundits say could change royalty payments in streaming forever.
MORE TO COME
While AI has vastly improved streaming services’ ability to keep their subscribers compelled, a long road of evolution lies ahead before it can come to a deep understanding of what motivates our musical tastes and interests. Today’s NLP capabilities, as provided by GPT-3, will probably seem fairly archaic within three years as the technology is pushed further. One thing is clear: as streaming companies amass decades’ worth of user data, they won’t hesitate to leverage it in their pursuit of market dominance.
On October 23rd, four brand new iPhone 12 models were released to retailers. As the manufacturer of the most popular smartphone in the world, whenever Apple delivers a new device, it’s front-page news. Mobile app developers looking to capitalize on new devices must stay abreast of the latest technologies, how they empower applications, and what they signal about where the future of app development is headed.
With that in mind, here is everything app developers need to know about the latest iPhone models.
BIG DEVELOPMENTS FOR AUGMENTED REALITY
LiDAR is a method for measuring distances (ranging) by illuminating the target with laser light and measuring the reflection with a sensor.
On a camera level, the iPhone 12 includes significant advancements. It is the first phone to record and edit video in Dolby Vision HDR. What’s more, Apple has enhanced the iPhone’s camera system with a LiDAR sensor and a third, telephoto lens.
The opportunities for app developers are significant. For AR developers, this is a breakthrough—enhanced LiDAR on the iPhone 12 means a broad market will have access to enhanced depth perception, enabling smoother AR object placement. The LiDAR sensor also produces a 6x increase in autofocus speed in low-light settings.
The potential use cases are vast. An enterprise-level application could leverage the enhanced camera to show the inner workings of a complex machine and provide solutions. Dimly lit rooms can now house AR objects, such as Christmas decorations. With the iPhone 12, AR developers can count on a growing market of app users able to do much more with less light and to scan rooms in greater detail.
The iPhone 12’s enhanced LiDAR Scanner will enable iOS app developers to employ Apple’s ARKit 4 to attain enhanced depth information through a brand-new Depth API. ARKit 4 also introduces location anchors, which enable developers to place AR experiences at a specific point in the world in their iPhone and iPad apps.
With iPhone 12, Apple sends a clear message to app developers: AR is on the rise.
ALL IPHONE 12 MODELS SUPPORT 5G
The entire iPhone 12 family of devices supports 5G on both sub-6GHz and mmWave networks. When iPhone 12 devices leverage 5G alongside the Apple A14 Bionic chip, they can integrate with IoT devices and run ML algorithms at a much higher level.
5G opens up an endless array of possibilities for app developers—from enhanced UX to more accurate GPS to improved video apps and more. 5G will reduce dependency on hardware as app data is stored in the cloud with faster transfer speeds. In addition, it will enable even more potential innovation for AR applications.
Beyond the bells and whistles, the iPhone 12 sends a very clear message about what app developers can anticipate will have the biggest impact on the future of app development: AR and 5G. Applications employing these technologies will have massive potential to evolve as the iPhone 12 and its successors become the norm and older devices are phased out.