The Top In-App Purchase Tactics for 2022

According to Sensor Tower, consumers spent $111 billion on in-app purchases, subscriptions, and premium apps on the Apple App Store and Google Play Store in 2020. How can your app take advantage and maximize revenue? Every app is different, and each demands its own answer to the all-important question: what’s the best way to monetize?

App Figures recently published a study showing that only 5.9% of Apple App Store apps are paid, compared to just 3.7% on Google Play. The freemium model thus reigns supreme—according to app sales statistics, 48.2% of all mobile app revenue derives from in-app purchases.

When creating an in-app purchase ecosystem, many psychological and practical considerations must be evaluated. Below, please find the best practices for setting in-app purchase prices in 2022.

BEHAVIORAL ECONOMICS

Behavioral economics is a method of economic analysis that applies psychological insights into human behavior to explain economic decision-making. Creating an in-app purchase ecosystem begins with understanding and introducing the psychological factors which incentivize users to make purchases. For example, the $0.99 pricing model banks on users perceiving items that cost $1.99 to be closer to a $1 price point than $2. Reducing whole dollar prices by one cent is a psychological tactic proven to be effective for both in-app purchases and beyond.
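As a quick illustration of the tactic, here is a minimal Python sketch (the function name and price list are hypothetical) that converts whole-dollar price points into charm prices:

```python
def charm_price(whole_dollars: int) -> float:
    """Apply the $0.99 tactic: knock one cent off a whole-dollar price
    so it reads as belonging to the lower dollar bracket."""
    return round(whole_dollars - 0.01, 2)

# A $2 item becomes $1.99, which users perceive as closer to $1 than $2.
for price in (1, 2, 5, 10):
    print(f"${charm_price(price):.2f}")
```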

Another psychological pricing tactic is to remove the dollar sign or local currency symbol from the IAP storefront and employ a purchasable in-app currency required to purchase IAPs. By removing the association with real money, users see the value of each option on a lower stakes scale. Furthermore, in-app currencies can play a major role in your retention strategy.

ANCHORING

Anchoring is a cognitive bias in which people rely heavily on the first piece of information they receive when making purchasing decisions. Generally, this applies to prices—app developers set an initial price point as an anchoring reference, then slash it to give users a sense of value. For example, an in-app purchase might be advertised at $4.99, then discounted to $1.99 (60% off) for a daily deal. When users see the value in relation to the initial price point, they become more incentivized to buy.

Anchoring also relates to how pricing is presented. We have all seen bundles and subscriptions present their value in relation to higher pricing tiers—for example, an annual subscription that costs $20/year but is advertised as a $36 value relative to a monthly price of $2.99/month. For your users to understand the value of a purchase, you have to hammer the point home through UI design.
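The arithmetic behind both anchors is simple. Here is a short Python sketch of the two examples above (helper names are illustrative):

```python
def discount_pct(anchor: float, sale: float) -> int:
    """Percent saved relative to the anchor price, rounded for display."""
    return round((anchor - sale) / anchor * 100)

def annual_anchor(monthly: float) -> int:
    """Advertised value of an annual plan: 12 months at the monthly price."""
    return round(monthly * 12)

print(f"{discount_pct(4.99, 1.99)}% off")  # the $4.99 -> $1.99 daily deal
print(f"${annual_anchor(2.99)} value")     # the $2.99/month comparison
```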

OPTIMIZE YOUR UI

UI is critical to presenting your in-app purchases. A well-designed monetization strategy can be undermined by poor UI design. Users should always be 1-2 taps away from the IAP storefront where they can make purchases, and the prices and discounts of each pricing option should be clearly delineated there.

Furthermore, make sure you are putting your best foot forward with how you present your prices. Anchoring increases the appeal of in-app purchases, but in order for the user to understand the deal, you have to highlight the value in your UI design by advertising it front and center in your IAP UI.

OFFER A VARIETY OF CHOICES

A number of IAP formats are trending across apps. To target the widest variety of potential buyers, we recommend offering a variety of options. Here are a few commonly employed formats:

  • BUNDLES: Offer your IAPs either à la carte or as a bundle for a discount. Users are always more inclined to make a bigger purchase when they understand they are receiving an increased value.
  • AD FREE: Offer an ad-free experience to your users. This is one of the more common tactics and die-hard users will often be willing to pay to get rid of the ad experience.
  • SPECIAL OFFERS: Limited-time offers with major discounts are far more likely to attract user attention. Special offers create a feeling of scarcity as well as instill the feeling of urgency. Consider employing holiday specials and sending personalized push notifications to promote them.
  • MYSTERY BOX: Many apps offer mystery boxes—bundles often offered for cheap that contain a random assortment of IAPs. Users may elect to take a chance and purchase in hopes of receiving a major reward.

While offering users a variety of IAP choices is key, too many choices can cause analysis paralysis—users hesitate to make an in-app purchase because they’ve been given too many options. Restrict your IAPs to the most appealing options to make decisions easy for your users.

TESTING IS KEY

As with any component of app development, testing is the key to understanding your audience and refining your techniques. We recommend testing your app with a random user group, gathering their feedback, and having them fill out a questionnaire. A/B testing, or split-run testing, consists of giving two different user groups two different app experiences. It enables app developers to see how users react to each experience and to evaluate which tactics are most effective with users.
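One common way to implement the split is deterministic hashing, so a given user always lands in the same group across sessions. A minimal Python sketch (the function and experiment names are hypothetical):

```python
import hashlib

def ab_group(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically assign a user to group 'A' or 'B'.

    Hashing (experiment, user_id) keeps the assignment stable across
    sessions and independent across different experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash to [0, 1]
    return "A" if bucket < split else "B"

# The same user always sees the same experience for a given experiment.
assert ab_group("user-42", "iap-price-test") == ab_group("user-42", "iap-price-test")
```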

There are many tactics to help incentivize users to take the big step of spending money in an app. Savvy developers innovate every day—stay tuned to the latest trends to keep your in-app purchase strategy on the cutting edge.

How Bluetooth Became the Gold Standard of Wireless Audio Technology

Bluetooth technology has established itself over the years as the premiere wireless audio technology and a staple of every smartphone user’s daily mobile experience. From wireless headphones, to speakers, to keyboards, gaming controllers, IoT devices, and instant hotspots—Bluetooth is used for a growing variety of functions every year.

While Bluetooth is now a household name, its path to popularity was built over more than 20 years.

CONCEPTION

In 1994, Dr. Jaap Haartsen—an electrical engineer working for Ericsson’s Mobile Terminal Division in Lund—was tasked with creating an indoor wireless communication system for short-range radio connections. He ultimately created the Bluetooth protocol. Named after Harald “Bluetooth” Gormsson, the Viking king who united Denmark and Norway around 958 AD, the protocol was designed to replace RS-232 telecommunication cables using short-range UHF radio waves between 2.4 and 2.485 GHz.

In 1998, he helped create the Bluetooth Special Interest Group, driving the standardization of the Bluetooth radio interface and obtaining worldwide regulatory approval for Bluetooth technology. To this day, Bluetooth SIG publishes and promotes the Bluetooth standard as well as revisions.

BLUETOOTH REACHES CONSUMERS

In 1999, Ericsson introduced the first major Bluetooth product for consumers in the form of a hands-free mobile headset. The headset won the “Best of Show Technology” award at COMDEX and was equipped with Bluetooth 1.0.

Each iteration of Bluetooth has three main distinguishing factors:

  • Range
  • Data speed
  • Power consumption

The strength of these factors is determined by both the modulation scheme and the data packet structure employed. As you might imagine, Bluetooth 1.0 was far slower than the Bluetooth we’ve become accustomed to in 2021: the data rate capped at 1Mbps (roughly 0.7Mbps in practice), with a range of up to 10 meters. While we now use Bluetooth to listen to audio on a regular basis, version 1.0 was hardly equipped to handle music and was primarily designed for wireless voice calls.

THE BLUETOOTH EVOLUTION

The Bluetooth we currently enjoy in 2021 is version 5. Over the years, Bluetooth’s range and data speed have increased dramatically while its power consumption has dropped.

In 2004, Bluetooth 2.0 focused on enhancing the data rate, pushing from 0.7Mbps in version 1 to 1-3Mbps while increasing range from 10m to 30m. Bluetooth 3.0 increased speeds in 2009, allowing up to 24Mbps.

In 2011, Bluetooth 4.0 introduced a major innovation: BLE (Bluetooth Low Energy). BLE is an alternate Bluetooth segment designed for very low power operation, giving developers the flexibility to build products that meet the unique connectivity requirements of their markets. BLE is tailored toward burst-like communications, remaining in sleep mode before and after a connection initiates. The decreased power consumption takes IoT devices like industrial monitoring sensors, blood pressure monitors, and Fitbit-style fitness trackers to the next level: these devices can employ BLE to run at 1Mbps at very low power consumption rates. In addition to lowering power consumption, Bluetooth 4.0 doubled the typical maximum range from 30m in Bluetooth 3.0 to 60m.
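BLE’s savings come from duty cycling: the radio is awake for only a few milliseconds per connection interval. A back-of-envelope Python sketch of the average current draw (all figures below are illustrative, not taken from any specific chip’s datasheet):

```python
def avg_current_ma(active_ma: float, sleep_ua: float,
                   event_ms: float, interval_ms: float) -> float:
    """Average current for a radio that wakes briefly every interval.

    Weighted average of the active current (while the radio is on)
    and the sleep current (the rest of the time).
    """
    duty = event_ms / interval_ms
    return active_ma * duty + (sleep_ua / 1000) * (1 - duty)

# e.g. 8 mA while the radio is on for 3 ms out of every 1000 ms, 1 uA asleep:
# the average draw collapses to a small fraction of the active current.
print(round(avg_current_ma(8.0, 1.0, 3, 1000), 4), "mA")
```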

BLUETOOTH 5

Bluetooth 5 is the latest version of the technology. It doubles the bandwidth by doubling the transmission speed and quadruples the typical maximum range, bringing it up to 240m. Bluetooth 5 also introduces Bluetooth Low Energy audio (as of version 5.2), which enables one device to share audio with multiple other devices.

CONCLUSION

Bluetooth is a game-changing technology which stands to revolutionize more than just audio. IoT devices, health tech, and more stand to improve as the Bluetooth SIG continues to upgrade the protocol. After nearly three decades of improvement, the possibilities remain vast for savvy developers to take advantage of the latest Bluetooth protocols and build futuristic wireless technologies.

HL7 Protocol Enhances Medical Data Transmissions–But Is It Secure?

In our last blog, we examined how DICOM became the standard format for transmitting files in medical imaging technology. As software developers, we frequently find ourselves working in the medical technology field navigating new formats and devices which require specialized attention.

This week, we will jump into one of the standards all medical technology developers should understand: the HL7 protocol.

The HL7 protocol is a set of international standards for the transfer of clinical and administrative data between hospital information systems. It refers to a number of flexible standards, guidelines, and methodologies by which various healthcare systems communicate with each other. HL7 connects a family of technologies, providing a universal framework for the interoperability of healthcare data and software.

Founded in 1987, Health Level Seven International (HL7) is a non-profit, ANSI-accredited standards developing organization that manages updates of the HL7 protocol. With over 1,600 members from over 50 countries, HL7 International represents a brain trust incorporating the expertise of healthcare providers, government stakeholders, payers, pharmaceutical companies, vendors/suppliers, and consulting firms.

HL7 has primary and secondary standards. The primary standards are the most popular and integral for system integrations, interoperability, and compliance. Primary standards include the following:

  • Version 2.x Messaging Standard–the widely deployed, pipe-delimited interoperability specification for health and medical transactions
  • Version 3 Messaging Standard–a newer, XML-based interoperability specification built on HL7’s Reference Information Model (RIM)
  • Clinical Document Architecture (CDA)–an exchange model for clinical documents, based on HL7 Version 3
  • Continuity of Care Document (CCD)–a US specification for the exchange of medical summaries, based on CDA.
  • Structured Product Labeling (SPL)–the published information that accompanies a medicine based on HL7 Version 3
  • Clinical Context Object Workgroup (CCOW)–an interoperability specification for the visual integration of user applications
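Version 2.x messages, the most widely deployed of these standards, are plain text: segments separated by carriage returns and fields separated by pipes. A minimal Python parsing sketch (the sample message and field values are hypothetical, and a real parser must also handle components, repeats, and escape sequences):

```python
def parse_hl7_v2(message: str) -> dict:
    """Split an HL7 v2 message into {segment_id: [list of field lists]}.

    Minimal sketch: assumes the default delimiters ('|' between fields,
    carriage return between segments).
    """
    segments = {}
    for segment in filter(None, message.split("\r")):
        fields = segment.split("|")
        segments.setdefault(fields[0], []).append(fields[1:])
    return segments

# A hypothetical ADT message: message header (MSH) plus patient ID (PID).
msg = ("MSH|^~\\&|SENDER|FACILITY|RECEIVER|FACILITY|202201011200||ADT^A01|123|P|2.5"
       "\rPID|1||555-44-3333||DOE^JOHN")
parsed = parse_hl7_v2(msg)
print(parsed["PID"][0][4])  # PID-5, the patient name field
```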

While HL7 is employed worldwide, it’s also the subject of controversy due to underlying security issues. In 2019, researchers from the University of California conducted an experiment to simulate an HL7 cyber attack, which revealed a number of encryption and authentication vulnerabilities. By simulating a man-in-the-middle (MITM) attack, the experiment proved a bad actor could potentially modify medical lab results, which could lead to any number of catastrophic medical miscues—from misdiagnosis to prescription of ineffective medications and more.

As software developers, we advise employing advanced security technology to protect patient data. Medical professionals are urged to consider the following additional safety protocols:

  • A strictly enforced password policy with multi-factor authentication
  • Third-party applications which offer encrypted and authenticated messaging
  • Network segmentation, virtual LAN, and firewall controls

While HL7 provides unparalleled interoperability for health care data, it does not provide ample security given the level of sensitivity of medical data—transmissions are unauthenticated and unvalidated and subject to security vulnerabilities. Additional security measures can help medical providers retain that interoperability across systems while protecting themselves and their patients from having their data exploited.

HOW DICOM BECAME THE STANDARD IN MEDICAL IMAGING TECHNOLOGY

Building applications for medical technology projects often requires extra attention from software developers. From adhering to security and privacy standards to learning new technologies and working with specialized file formats—developers coming in fresh must do a fair amount of due diligence to get acclimated in the space. Passing sensitive information between systems requires adherence to extra security measures—standards like HIPAA (Health Insurance Portability and Accountability Act) are designed to protect the security of health information.

When dealing with medical images and data, one international standard rises above the rest: DICOM. There are hundreds of thousands of medical imaging devices in use—and DICOM has emerged as one of the most widely used healthcare messaging standards and file formats in the world. Billions of DICOM images are currently in use for clinical care.

What is DICOM?

DICOM stands for Digital Imaging and Communications in Medicine. It’s the international file format and communications standard for medical images and related information, implemented in nearly every radiology, cardiology imaging, and radiotherapy device—X-ray, CT, MRI, ultrasound, and more. It’s also finding increasing adoption in fields such as ophthalmology and dentistry.

DICOM groups information into data sets. Similar to how JPEGs often include embedded tags to identify or describe the image, DICOM files include a patient ID to ensure the image retains the necessary identification and is never separated from it. The bulk of images are single frames, but an attribute can also contain multiple frames, allowing for the storage of cine loops.
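Concretely, a DICOM Part 10 file begins with a 128-byte preamble followed by the magic bytes "DICM", and the data set maps (group, element) tags to values. A minimal Python sketch (the sample patient values are hypothetical):

```python
def looks_like_dicom(data: bytes) -> bool:
    """DICOM Part 10 files start with a 128-byte preamble followed by 'DICM'."""
    return len(data) >= 132 and data[128:132] == b"DICM"

# A data set maps (group, element) tags to values, much like a dictionary.
# (0010,0010) is Patient's Name and (0010,0020) is Patient ID in the standard.
data_set = {
    (0x0010, 0x0010): "DOE^JANE",   # hypothetical value
    (0x0010, 0x0020): "PID-0001",   # hypothetical value
}
print(looks_like_dicom(b"\x00" * 128 + b"DICM"))  # True
```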

The History of DICOM

DICOM was developed by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) in the 1980s. CT scans and other advanced imaging technologies made it evident that computing would play an increasingly major role in the future of clinical work. The ACR and NEMA sought a standard method for transferring images and associated information between devices from different vendors.

The first standard covering point-to-point image communication was created in 1985 and initially titled ACR-NEMA 300. A second version was subsequently released in 1988, finding increased adoption among vendors. The first large-scale deployment of ACR-NEMA 300 was in 1992 by the U.S. Army and Air Force. In 1993, the third iteration of the standard was released—and it was officially named DICOM. While the latest version of DICOM is still 3.0, it has received constant maintenance and updates since 1993.

Why Is DICOM Important?

DICOM enables the interoperability of systems used to manage workflows as well as produce, store, share, display, query, process, retrieve and print medical images. By conforming to a common standard, DICOM enables medical professionals to share data between thousands of different medical imaging devices across the world. Physicians use DICOM to access images and reports to diagnose and interpret information from any number of devices.

DICOM creates a universal format for physicians to access medical imaging files, enabling high-performance review whenever images are viewed. In addition, it ensures that patient and image-specific information is properly stored by employing an internal tag system.

DICOM has few disadvantages. Some pathologists perceive the header tags to be a major flaw: some tags are optional while others are mandatory, and the optional tags can lead to inconsistent or incorrect data. The tag system also makes DICOM files roughly 5% larger than their TIFF counterparts.

The Future

The future of DICOM remains bright. While no file format or communications standard is perfect, DICOM offers unparalleled cross-vendor interoperability. Any application developer working in the medical technology field would be wise to take the time to comprehensively understand it in order to optimize their projects.

Cloud-Powered Microdroid Expands Possibilities for Android App Developers

Android developers have a lot to look forward to in 2021, 2022, and beyond. Blockchain may decentralize how Android apps are developed, Flutter will see increased adoption for cross-platform development, and we expect big strides in AR and VR for the platform. Among the top trends in Android development, one potential innovation has caught the attention of savvy app developers: Microdroid.

Android developers and blogs were astir earlier this year when Google engineer Jiyong Park announced via the Android Open Source Project that they are working on a new, minimal Android-based Linux image called Microdroid.

Details about the project are scant, but it’s widely believed that Microdroid will essentially be a lighter version of the Android system image designed to function on virtual machines. Google is preparing for a world in which even smartphone OSes need a stripped-down version that can be run through the cloud.

Working from a truncated Linux image, Microdroid will pull the system image from the device (tablet or phone), creating a simulated environment accessible from any remote device. It could enable a world in which users access Google Play and any Android app from any device.

What does this mean for developers?

Microdroid will open up new possibilities for Android apps in embedded and IoT spaces which require potentially automated management and a contained virtual machine that can mitigate security risks. Cloud gaming, cloud computing—even smartphones with all features stored in the cloud—are possible. Although we will have to wait and see what big plans Google has for Microdroid and how Android developers capitalize on it, at this juncture it looks like the shift to the cloud may entail major changes in how we interact with our devices. App developers are keen to keep their eyes and heads in the cloud.

Although no timeline for release has been revealed yet, we expect more on Microdroid with the announcement of Android 12.

Learn How Google Bests ARKit with Android’s ARCore

Previously, we covered the strengths of ARKit 4 in our blog Learn How Apple Tightened Their Grip on the AR Market with the Release of ARKit 4. This week, we will explore all that Android’s ARCore has to offer.

All signs point toward continued growth in the Augmented Reality space. As the latest generations of devices are equipped with enhanced hardware and camera features, applications employing AR have seen increasing adoption. While ARCore represents a breakthrough for the Android platform, it is not Google’s first endeavor into building an AR platform.

HISTORY OF GOOGLE AR

In summer 2014, Google launched their first AR platform, Project Tango.

Project Tango received consistent updates but never achieved mass adoption. Tango’s functionality was limited to the three devices that could run it, including the Lenovo Phab 2 Pro, which ultimately suffered from numerous issues. While it was ahead of its time, it never received the level of hype ARKit did. In March 2018, Google announced that it would no longer support Project Tango and would continue its AR development with ARCore.

ARCORE

ARCore uses three main technologies to integrate virtual content with the world through the camera:

  • Motion tracking
  • Environmental understanding
  • Light estimation

It tracks the position of the device as it moves and gradually builds its own understanding of the real world. As of now, ARCore is available for development on a broad and growing list of supported devices.

ARCORE VS. ARKIT

ARCore and ARKit have quite a bit in common. Both are compatible with Unity, and both feature a similar level of capability for sensing changes in lighting and accessing motion sensors. When it comes to mapping, ARCore is ahead of ARKit: it has access to a larger dataset, which boosts both the speed and quality of mapping achieved through the collection of 3D environmental information, while ARKit cannot store as much local condition data and information. ARCore also supports cross-platform development—meaning you can build ARCore applications for iOS devices—while ARKit is exclusively compatible with iOS devices.

The main cons of ARCore relative to ARKit have to do with adoption. In 2019, ARKit was on 650 million devices, while there were only 400 million ARCore-enabled devices. ARKit yields 4,000+ results on GitHub, while ARCore yields only 1,400+. Ultimately, Apple’s tightly integrated hardware—particularly the TrueDepth camera—means AR applications will often run better on iOS devices regardless of which SDK they are built with.

OVERALL

It is safe to say that ARCore is the more robust platform for AR development; however, ARKit is the more popular and more widely adopted AR platform. We recommend spending time determining the exact level of usability you need, as well as the demographics of your target audience.

For supplementary reading, check out this great rundown of the best ARCore apps of 2021 from Tom’s Guide.

LiDAR: The Next Revolutionary Technology and What You Need to Know

In an era of rapid technological growth, certain technologies, such as artificial intelligence and the internet of things, have received mass adoption and become household names. One up-and-coming technology that has the potential to reach that level of adoption is LiDAR.

WHAT IS LIDAR?

LiDAR, or light detection and ranging, is a popular remote sensing method for measuring the exact distance of objects on the earth’s surface. First used in the 1960s, LiDAR gradually gained adoption, particularly after the creation of GPS in the 1980s, and became a common technology for deriving precise geospatial measurements.

LiDAR requires three components: a scanner, a laser, and a GPS receiver. The laser emits pulsed light which travels to the ground and reflects off buildings, tree branches, and other surfaces. The reflected light returns to the LiDAR sensor, which records the round-trip time, while the GPS receiver ties each measurement to a precise location. In combination with photodetectors and optics, this allows for ultra-precise distance detection and topographical data.
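The underlying ranging math is a simple time-of-flight calculation: the pulse travels out to the target and back, so the distance is half the round-trip time multiplied by the speed of light. A small Python sketch:

```python
C = 299_792_458  # speed of light in m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance from a pulse's round-trip time.

    The light covers the sensor-to-target path twice (out and back),
    so the one-way distance is half the total travel.
    """
    return C * round_trip_s / 2

# A pulse returning after ~667 nanoseconds hit something roughly 100 m away.
print(round(tof_distance_m(667e-9), 1), "m")
```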

WHY IS LIDAR IMPORTANT?

As we covered in our rundown of the iPhone 12, new iOS devices come equipped with a brand new LiDAR scanner. LiDAR now enters the hands of consumers who have Apple’s new generation of devices, enabling enhanced functionality and major opportunities for app developers. The proliferation of LiDAR signals toward the technology finding mass adoption and household name status.

There are two types of LiDAR systems: terrestrial and airborne. Airborne LiDAR systems are installed on drones or aircraft to derive exact distance measurements, while terrestrial LiDAR systems are installed on the ground or on moving vehicles to collect dense point data. Terrestrial systems are often used to monitor highways and have been employed by autonomous cars for years, while airborne systems are commonly used in environmental applications and for gathering topographical data.

With the future in mind, here are the top LiDAR trends to look out for moving forward:

SUPERCHARGING APPLE DEVICES

LiDAR enhances the camera on Apple devices significantly. Auto-focus is quicker and more effective on those devices. Moreover, it supercharges AR applications by greatly enhancing the speed and quality of a camera’s ability to track the location of people as well as place objects.

One of the major apps that received a functionality boost from LiDAR is Apple’s free Measure app, which can measure distance, dimensions, and even whether an object is level. The measurements determined by the app are significantly more accurate with the new LiDAR scanner, capable of replacing physical rulers, tape measures, and spirit levels.

Microsoft’s Seeing AI application is designed to help the visually impaired navigate their environment; however, LiDAR takes it to the next level. In conjunction with artificial intelligence, LiDAR enables the application to read text, identify products and colors, and describe people, scenes, and objects that appear in the viewfinder.

BIG INVESTMENTS BY AUTOMOTIVE COMPANIES

LiDAR plays a major role in autonomous vehicles, which rely on terrestrial LiDAR systems to self-navigate. In 2018, reports suggested that the automotive segment accounted for a 90 percent share of the LiDAR business. With self-driving cars inching toward mass adoption, expect to see major investments in LiDAR by automotive companies in 2021 and beyond.

As automotive companies look to make major investments in LiDAR, including Volkswagen’s recent investment in Aeva, many LiDAR companies are competing to create the go-to LiDAR system for automotive companies. Check out this great article by Wired detailing the potential for this bubble to burst.

LIDAR DRIVING ENVIRONMENTAL APPLICATIONS

Beyond commercial applications and the automotive industry, LiDAR is gradually seeing increased adoption for geoscience applications. The environmental segment of the LiDAR market is anticipated to grow at a CAGR of 32% through 2025. LiDAR is vital to geoscience applications for creating accurate and high-quality 3D data to study ecosystems of various wildlife species.

One of the main environmental uses of LiDAR is collecting topographic information on landscapes. Topographic LiDAR is expected to see a growth rate of over 25% over the coming years. These systems can see through forest canopy to produce accurate 3D models of landscapes, which are necessary to create contours, digital terrain models, digital surface models, and more.

CONCLUSION

In March 2020, after the first LiDAR scanner became available in the iPad Pro, The Verge put it perfectly when they said that the new LiDAR sensor is an AR hardware solution in search of software. While LiDAR has gradually found increasing usage, it is still a powerful new technology with burgeoning commercial usage. Enterprising app developers are looking for new ways to use it to empower consumers and businesses alike.

For supplementary viewing on the inner workings of the technology, check out this great introduction below, courtesy of Neon Science.

How AI Fuels a Game-Changing Technology in Geospatial 2.0

Geospatial technology describes a broad range of modern tools which enable the geographic mapping and analysis of Earth and human societies. Since the 19th century, geospatial technology has evolved as aerial photography and eventually satellite imaging revolutionized cartography and mapmaking.

Contemporary society now employs geospatial technology in a vast array of applications, from commercial satellite imaging, to GPS, to Geographic Information Systems (GIS) and Internet Mapping Technologies like Google Earth. The geospatial analytics market is currently valued between $35 and $40 billion with the market projected to hit $86 billion by 2023.

GEOSPATIAL 1.0 VS. 2.0


Geospatial technology has been in phase 1.0 for centuries; however, the boom in artificial intelligence and the IoT has made Geospatial 2.0 a reality. Geospatial 1.0 offers valuable information for analysts to view, analyze, and download geospatial data streams. Geospatial 2.0 takes it to the next level–harnessing artificial intelligence not only to collect data, but to process, model, analyze, and make decisions based on the analysis.

When empowered by artificial intelligence, geospatial 2.0 technology has the potential to revolutionize a number of verticals. Savvy application developers and government agencies in particular have rushed to the forefront of creating cutting edge solutions with the technology.

PLATFORM AS A SERVICE (PaaS) SOLUTIONS

Effective geospatial 2.0 solutions require deep, vertical-specific knowledge of client needs—knowledge that has lagged behind the technical capabilities of the platforms. The bulk of currently available geospatial 2.0 technologies are offered as “one-size-fits-all” Platform as a Service (PaaS) solutions. The challenge for PaaS providers is that they need to serve a wide collection of use cases, harmonizing data from multiple sensors while enabling users to easily understand and act on the many different insights which can be gleaned from the data.


In precision agriculture, FarmShots offers precise, frequent imagery to farmers along with meaningful analysis of field variability, damage extent, and the effects of applications through time.


In the disaster management field, Mayday offers a centralized artificial intelligence platform with real-time disaster information. Another geospatial 2.0 application Cloud to Street uses a mix of AI and satellites to track floods in near real-time, offering extremely valuable information to both insurance companies and municipalities.

SUSTAINABILITY

The growing complexity of environmental concerns has led to a number of applications of geospatial 2.0 technology that help create a safer, more sustainable world. For example, geospatial technology can measure carbon sequestration, tree density, green cover, carbon credits, and tree age. It can provide vulnerability assessment surveys in disaster-prone areas. It can also help urban planners and governments plan and implement community mapping and equitable housing. Geospatial 2.0 can analyze a confluence of factors and create actionable insights for analyzing and honing our environmental practices.

As geospatial 1.0 models are upgraded to geospatial 2.0, expect to see more robust solutions incorporating AI-powered analytics. A survey of working professionals conducted by Geospatial World found that geospatial technology will likely make the biggest impact in the climate and environment field.

CONCLUSION

Geospatial 2.0 platforms are expensive to employ and require quite a bit of development, but the technology offers great potential to increase revenue and efficiency across a number of verticals. In addition, it may be a key technology for cutting our carbon footprint and creating a safer, more sustainable world.

Top Mobile Marketing Trends Driving Success in 2021

Mobile app marketing is an elusive and constantly evolving field. For mobile app developers, getting new users to install games is relatively cheap at just $1.47 per user, while retaining them is much more difficult—according to Liftoff, it costs on average $43.88 to prompt a customer to make an in-app purchase. An effective advertising strategy will make or break your app—and your bank. In 2019, in-game ads made up 17% of all revenue. By 2024, that number is expected to triple.

2020 was a year that saw drastic changes in lifestyle—mobile app users were no exception. What trends are driving app developers to refine their advertising and development tactics in 2021? Check out our rundown below.

Real Time Bidding


In-app bidding is an advanced advertising method that enables mobile publishers to sell their ad inventory in an automated auction. The technology is not new: it has been around since 2015, when it was primarily used on desktop. Over the past few years, however, both publishers and advertisers have embraced in-app bidding, eschewing the traditional waterfall method.

In-app bidding enables publishers to sell their ad space at auction, with advertisers bidding against one another simultaneously. The dense competition yields a higher price (CPM) for publishers. For advertisers, bidding decreases fragmentation between demand sources, since they can bid on many at once. In the traditional waterfall method, ad mediation platforms prioritize ad networks they have worked with in the past before passing unsold inventory on to premium ad networks. In-app bidding changes the game by letting publishers offer their inventory in auctions that include a much wider swath of advertisers than the traditional waterfall.
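The difference between the two approaches can be illustrated with a toy Python sketch. The network names, price floors, and CPM values below are entirely hypothetical; real mediation platforms are far more involved.

```python
# Toy comparison of waterfall mediation vs. unified in-app bidding.
# Network names, floors, and CPM values are hypothetical.

def waterfall_fill(bids, priority):
    """Call networks in a fixed priority order; the first bid that
    clears its price floor wins, even if a later network bids higher."""
    for name, floor in priority:
        bid = bids.get(name, 0.0)
        if bid >= floor:
            return name, bid
    return None, 0.0

def unified_auction(bids):
    """All networks bid simultaneously; the highest bid wins."""
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

bids = {"NetworkA": 2.50, "NetworkB": 4.10, "NetworkC": 3.20}  # CPMs in USD
priority = [("NetworkA", 2.00), ("NetworkB", 3.00), ("NetworkC", 3.00)]

w_winner, w_cpm = waterfall_fill(bids, priority)  # NetworkA fills first at $2.50
a_winner, a_cpm = unified_auction(bids)           # NetworkB wins at $4.10
print(w_winner, w_cpm, a_winner, a_cpm)
```

In the waterfall, the first-priority network fills the impression at a lower CPM even though a better bid exists further down; the unified auction surfaces the highest bid, which is why bidding tends to lift publisher revenue.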

Bidding benefits all parties. App publishers see increased demand for ad inventory, advertisers access more inventory, and app users see more relevant ads. In 2021, many expect in-app bidding to gain more mainstream popularity. Check out this great rundown by AdExchanger for more information on this growing trend.

Rewarded Ads Still King


We have long championed rewarded ads on the Mystic Media blog. Rewarded ads offer in-game rewards to users who voluntarily choose to view an ad. Everyone wins—users get tangible rewards for their time, publishers get advertising revenue, and advertisers get valuable impressions.
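The opt-in flow described above can be sketched in a few lines of Python. The class, reward amount, and completion flag are illustrative, not any particular ad SDK's API; real SDKs deliver the reward through a completion callback.

```python
# Hypothetical rewarded-ad flow: the user opts in, and the reward is
# granted only when ad playback fully completes.

class RewardedAd:
    def __init__(self, reward_coins=50):
        self.reward_coins = reward_coins

    def show(self, user, watched_to_end):
        """Play the ad; grant the in-game reward only on full completion."""
        if watched_to_end:
            user["coins"] += self.reward_coins
            return True
        return False

player = {"coins": 100}
ad = RewardedAd()
ad.show(player, watched_to_end=True)
print(player["coins"])  # 150
```

The key design point is that the reward is conditional on completion, which is what makes the impression valuable to the advertiser while keeping the exchange voluntary for the user.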

App usage data from 2021 only increases our enthusiasm for the format: 71% of mobile gamers want the ability to choose whether or not to view an ad, and 31% of gamers said rewarded video prompted them to browse for products within a month of seeing them. Leyi Games implemented rewarded video and improved player retention while bringing in an additional $1.5 million USD.

Facebook’s 2020 report showed that gamers find rewarded ads to be the least disruptive ad format, leading to longer gameplay sessions and more opportunities for content discovery.

Playable Ads

Playable ads have emerged as one of the foremost employed advertising tactics for mobile games. Playable ads enable users to sample gameplay by interacting with the ad. After a snippet of gameplay, the ad transitions into a call to action to install the game.

The benefits are obvious. If the game is fun and absorbing to the viewer, it has a much better chance of being installed. By putting the audience in the driver's seat, playable ads drive increased retention rates and a larger number of high lifetime value (LTV) players.

Check out three examples of impactful playable ads compiled by Shuttlerock.

Short Ads, Big Appeal

As we are bombarded with more and more media on a daily basis, delivering a concise message that cuts through the clutter can be exceptionally difficult. However, recent research from MAGNA, IPG Media Lab, and Snap Inc. shows short-form ads may be well worth the effort.

Studies show short-form video ads drive nearly identical brand preference and purchase intent to 15-second ads. Whereas short-form ads were once predominantly employed to grow awareness, marketers now understand that longer ads are perceived by the user as more intrusive, and that they can get just as much ROI out of shorter, less expensive content.

Check out the graph below, breaking down the efficacy of 6-second vs. 15-second ads, via Business of Apps.


Conclusion

Mobile advertisers need to think big picture in terms of both their target customer and how they format their ads to best engage their audience. While the trends we outlined are currently in the zeitgeist, ultimately what matters most is engaging app users with effective content that delivers a valuable message without intruding on their experience on the app.

For supplementary reading on mobile marketing, check out our blog on the Top Mobile Ad Platforms You Need to Know for 2021.

AIoT: How the Intersection of AI and IoT Will Drive Innovation for Decades to Come

We have covered the evolution of the Internet of Things (IoT) and Artificial Intelligence (AI) over the years as they have gained prominence. IoT devices collect a massive amount of data; Cisco projects that by the end of 2021, IoT devices will generate over 800 zettabytes of data per year. Meanwhile, AI algorithms can parse big data and teach themselves to identify patterns and make predictions. Both technologies enable a seemingly endless array of applications and have had a massive impact on many industry verticals.

What happens when you merge them? The result is aptly named the AIoT (Artificial Intelligence of Things) and it will take IoT devices to the next level.

WHAT IS AIOT?

AIoT is any system that integrates AI technologies with IoT infrastructure, enhancing efficiency, human-machine interactions, data management and analytics.

IoT enables devices to collect, store, and analyze big data; device operators and field engineers typically control the devices. AI enhances these existing systems, enabling them to take the next step: determining and taking the appropriate action based on analysis of the data.

By embedding AI into infrastructure components, including programs, chipsets, and edge computing, AIoT enables intelligent, connected systems to learn, self-correct and self-diagnose potential issues.


One common example comes from the surveillance field. A surveillance camera can be used as an image sensor, sending every frame to an IoT system that analyzes the feed for certain objects. With AI on the device, the camera can analyze each frame locally and send only the frames in which it detects a specific object, significantly speeding up the process while reducing the amount of data transmitted, since irrelevant frames are excluded.
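A minimal Python sketch of that edge-side filtering logic is below. The detector is a stub standing in for a real on-device vision model, and the frame format is invented for illustration.

```python
# Minimal sketch of edge-side frame filtering for an AIoT camera:
# only frames in which the (stubbed) detector finds the target object
# are forwarded upstream, so irrelevant frames never leave the device.

def detect_objects(frame):
    """Stub detector; in practice this would run an on-device vision model."""
    return frame.get("objects", [])

def filter_frames(frames, target="person"):
    """Return only the frames containing the target object."""
    return [f for f in frames if target in detect_objects(f)]

feed = [
    {"id": 1, "objects": []},
    {"id": 2, "objects": ["car"]},
    {"id": 3, "objects": ["person", "car"]},
]

to_upload = filter_frames(feed)
print([f["id"] for f in to_upload])  # only frame 3 is sent upstream
```

In this toy feed, two of three frames are dropped at the edge, which is exactly where the bandwidth and processing savings come from.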


While AIoT will no doubt find a variety of applications across industries, the three segments we expect to see the most impact on are wearables, smart cities, and retail.

WEARABLES


The global wearable device market is estimated to exceed $87 billion by 2022. AI on wearable devices such as smartwatches opens up a number of potential applications, particularly in the healthtech sector.

Researchers in Taiwan have been studying the potential for an AIoT wearable system for electrocardiogram (ECG) analysis and cardiac disease detection. The system would integrate a wearable IoT-based device with an AI platform for cardiac disease detection. The wearable collects real-time health data and stores it in the cloud, where an AI algorithm detects disease with an average accuracy of 94%. Currently, Apple Watch Series 4 or later includes an ECG app which captures symptoms of irregular, rapid, or skipped heartbeats.

Although this device is still in development, we expect to see more coming out of the wearables segment as 5G enables more robust cloud-based processing power, taking the pressure off the devices themselves.

SMART CITIES

We’ve previously explored the future of smart cities in our blog series A Smarter World. With cities eager to invest in improving public safety, transport, and energy efficiency, AIoT will drive innovation in the smart city space.

There are a number of potential applications for AIoT in smart cities. AIoT’s ability to analyze data and act opens up a number of possibilities for optimizing energy consumption for IoT systems. Smart streetlights and energy grids can analyze data to reduce wasted energy without inconveniencing citizens.

Some smart cities have already adopted AIoT applications in the transportation space. New Delhi, which boasts some of the worst traffic in the world, features an Intelligent Transport Management System (ITMS) that makes real-time, dynamic decisions on traffic flows to keep traffic moving.

RETAIL

AIoT has the potential to enhance the retail shopping experience with digital augmentation. The same smart cameras we referenced earlier are being used to detect shoplifters. Walmart recently confirmed it has installed smart security cameras in over 1,000 stores.


One of the big innovations for AIoT involves smart shopping carts. Grocery stores in both Canada and the United States are experimenting with high-tech shopping carts, including one from Caper which uses image recognition and built-in sensors to determine what a person puts into the shopping cart.

The potential for smart shopping carts is vast: these carts will be able to inform customers of deals and promotions, recommend products based on their buying decisions, let them view an itemized list of their current purchases, and incorporate indoor navigation to lead them to their desired items.
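Those capabilities can be sketched as a simple Python model. The product catalog, prices, and promotion text are made up for illustration; a real cart like Caper's would populate them from its image-recognition and sensor pipeline.

```python
# Illustrative smart-cart model: tracks recognized items, keeps an
# itemized running total, and surfaces promotions. Data is made up.

PRICES = {"milk": 3.49, "bread": 2.99, "coffee": 8.99}
DEALS = {"coffee": "10% off a second bag"}

class SmartCart:
    def __init__(self):
        self.items = []

    def add(self, item):
        """Record an item recognized by the cart's sensors and
        return a promotion for it, if one exists."""
        self.items.append(item)
        return DEALS.get(item)

    def itemized(self):
        """Itemized list of the current purchases."""
        return [(i, PRICES[i]) for i in self.items]

    def total(self):
        return round(sum(PRICES[i] for i in self.items), 2)

cart = SmartCart()
deal = cart.add("coffee")  # surfaces the coffee promotion
cart.add("milk")
print(cart.itemized(), cart.total())
```

Even this toy version shows why the format is compelling: the itemized list and running total remove checkout friction, and promotions can be pushed at the exact moment an item enters the cart.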

A smart shopping cart company called IMAGR recently raised $14 million in a pre-Series A funding round, pointing toward a bright future for smart shopping carts.

CONCLUSION

AIoT represents the intersection of AI, IoT, 5G, and big data. 5G provides the cloud processing power for IoT devices to employ AI algorithms that analyze big data, determine the right action, and act on it. These technologies are all relatively young, and as they continue to grow, they will empower innovators to build a smarter future for our world.