Category Archives: iOS Development

HOW DICOM BECAME THE STANDARD IN MEDICAL IMAGING TECHNOLOGY

Building applications for medical technology projects often requires extra attention from software developers. From adhering to security and privacy standards to learning new technologies and working with specialized file formats—developers coming in fresh must do a fair amount of due diligence to get acclimated to the space. Passing sensitive information between systems requires adherence to extra security measures—standards like HIPAA (the Health Insurance Portability and Accountability Act) are designed to protect the security of health information.

When dealing with medical images and data, one international standard rises above the rest: DICOM. There are hundreds of thousands of medical imaging devices in use—and DICOM has emerged as one of the most widely used healthcare messaging standards and file formats in the world. Billions of DICOM images are currently employed in clinical care.

What is DICOM?

DICOM stands for Digital Imaging and Communications in Medicine. It’s the international file format and communications standard for medical images and related information, implemented in nearly every radiology, cardiology, imaging, and radiotherapy device, including X-ray, CT, MRI, and ultrasound machines. It’s also finding increasing adoption in fields such as ophthalmology and dentistry.

DICOM groups information into data sets. Similar to how JPEGs often include embedded tags to identify or describe the image, DICOM files include a patient ID so that the image retains the necessary identification and is never separated from it. Most images are single frames, but the pixel data attribute can also contain multiple frames, allowing for the storage of cine loops.
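To make that tag system concrete, here is a minimal sketch in Swift. The struct and the sample data set are our own illustration, though the tag numbers themselves come from the DICOM standard:

```swift
import Foundation

// DICOM identifies every attribute by a (group, element) tag pair.
struct DICOMTag: Hashable {
    let group: UInt16
    let element: UInt16

    // Tags are conventionally written as (gggg,eeee) in hexadecimal.
    var formatted: String {
        String(format: "(%04X,%04X)", Int(group), Int(element))
    }
}

// A few well-known tags: Patient ID, plus the multi-frame and
// pixel data attributes mentioned above.
let patientID = DICOMTag(group: 0x0010, element: 0x0020)
let numberOfFrames = DICOMTag(group: 0x0028, element: 0x0008)
let pixelData = DICOMTag(group: 0x7FE0, element: 0x0010)

// A data set is essentially a map from tags to values, so patient
// identification travels with the image instead of being stored separately.
let dataSet: [DICOMTag: String] = [
    patientID: "PAT-12345",
    numberOfFrames: "30",   // a 30-frame cine loop
]
```

Real DICOM libraries parse binary data elements with value representations and lengths; the point here is simply that identification lives inside the file alongside the pixel data.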

The History of DICOM

DICOM was developed by the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) in the 1980s. CT scans and other advanced imaging technologies made it evident that computing would play an increasingly major role in the future of clinical work. The ACR and NEMA sought a standard method for transferring images and associated information between devices from different vendors.

The first standard covering point-to-point image communication was created in 1985 and initially titled ACR-NEMA 300. A second version was subsequently released in 1988, finding increased adoption among vendors. The first large-scale deployment of ACR-NEMA 300 was in 1992 by the U.S. Army and Air Force. In 1993, the third iteration of the standard was released—and it was officially named DICOM. While the latest version of DICOM is still 3.0, it has received constant maintenance and updates since 1993.

Why Is DICOM Important?

DICOM enables the interoperability of systems used to manage workflows as well as produce, store, share, display, query, process, retrieve and print medical images. By conforming to a common standard, DICOM enables medical professionals to share data between thousands of different medical imaging devices across the world. Physicians use DICOM to access images and reports to diagnose and interpret information from any number of devices.

DICOM creates a universal format for physicians to access medical imaging files, enabling high-performance review whenever images are viewed. In addition, it ensures that patient and image-specific information is properly stored by employing an internal tag system.

DICOM is not without disadvantages. Some pathologists perceive the header tags to be a major flaw: some tags are optional, while others are mandatory, and the additional tags can lead to inconsistent or incorrect data. They also make DICOM files roughly 5% larger than their TIFF counterparts.

The Future

The future of DICOM remains bright. While no file format or communications standard is perfect, DICOM offers unparalleled cross-vendor interoperability. Any application developer working in the medical technology field would be wise to take the time to comprehensively understand it in order to optimize their projects.

LiDAR: The Next Revolutionary Technology and What You Need to Know

In an era of rapid technological growth, certain technologies, such as artificial intelligence and the internet of things, have received mass adoption and become household names. One up-and-coming technology that has the potential to reach that level of adoption is LiDAR.

WHAT IS LIDAR?

LiDAR, or light detection and ranging, is a popular remote sensing method for measuring the exact distance of an object on the earth’s surface. Initially used in the 1960s, LiDAR has gradually received increasing adoption, particularly after the creation of GPS in the 1980s. It became a common technology for deriving precise geospatial measurements.

LiDAR requires three components: a scanner, a laser, and a GPS receiver. The laser emits pulsed light that travels to the ground and reflects off objects like buildings and tree branches. The reflected light energy then returns to the LiDAR sensor, where the round-trip travel time is recorded, while the GPS receiver pins down the scanner's exact position. In combination with a photodetector and optics, this allows for ultra-precise distance detection and topographical data.
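The ranging math at the heart of this process is simple time-of-flight arithmetic. The sketch below is our own back-of-the-envelope illustration, not any vendor's API:

```swift
// Distance is half the round trip of the laser pulse at the speed of light.
let speedOfLight = 299_792_458.0 // meters per second

func lidarDistance(roundTripSeconds: Double) -> Double {
    // The pulse travels to the target and back, so divide the path by two.
    speedOfLight * roundTripSeconds / 2.0
}

// A pulse that returns after roughly 66.7 nanoseconds has reflected off
// something about 10 meters away.
let meters = lidarDistance(roundTripSeconds: 66.7e-9)
```

Real scanners fire hundreds of thousands of such pulses per second, which is how a point cloud of a landscape or a room is built up.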

WHY IS LIDAR IMPORTANT?

As we covered in our rundown of the iPhone 12, new iOS devices come equipped with a brand new LiDAR scanner. LiDAR is now in the hands of consumers who own Apple's new generation of devices, enabling enhanced functionality and major opportunities for app developers. The proliferation of LiDAR signals that the technology is headed toward mass adoption and household-name status.

There are two types of LiDAR systems: terrestrial and airborne. Airborne LiDAR systems are installed on drones or helicopters to derive exact distance measurements, while terrestrial LiDAR systems are installed on moving vehicles to collect precise data points. Terrestrial systems are often used to monitor highways and have been employed by autonomous cars for years, while airborne systems are commonly used in environmental applications and for gathering topographical data.

With the future in mind, here are the top LiDAR trends to look out for moving forward:

SUPERCHARGING APPLE DEVICES

LiDAR enhances the camera on Apple devices significantly. Auto-focus is quicker and more effective on those devices. Moreover, it supercharges AR applications by greatly enhancing the speed and quality of a camera’s ability to track the location of people as well as place objects.

One of the major apps that received a functionality boost from LiDAR is Apple’s free Measure app, which can measure distance, dimensions, and even whether an object is level. The measurements determined by the app are significantly more accurate with the new LiDAR scanner, capable of replacing physical rulers, tape measures, and spirit levels.

Microsoft’s Seeing AI application is designed to help the visually impaired navigate their environment; LiDAR takes it to the next level. In conjunction with artificial intelligence, LiDAR enables the application to read text, identify products and colors, and describe people, scenes, and objects that appear in the viewfinder.

BIG INVESTMENTS BY AUTOMOTIVE COMPANIES

LiDAR plays a major role in autonomous vehicles, which rely on terrestrial LiDAR systems to self-navigate. Reports suggest that in 2018 the automotive segment accounted for roughly 90 percent of the LiDAR market. With self-driving cars inching toward mass adoption, expect to see major investments in LiDAR by automotive companies in 2021 and beyond.

As automotive companies look to make major investments in LiDAR, including Volkswagen’s recent investment in Aeva, many LiDAR companies are competing to create the go-to LiDAR system for automotive companies. Check out this great article by Wired detailing the potential for this bubble to burst.

LIDAR DRIVING ENVIRONMENTAL APPLICATIONS

Beyond commercial applications and the automotive industry, LiDAR is gradually seeing increased adoption for geoscience applications. The environmental segment of the LiDAR market is anticipated to grow at a CAGR of 32% through 2025. LiDAR is vital to geoscience applications for creating accurate and high-quality 3D data to study ecosystems of various wildlife species.

One of the main environmental uses of LiDAR is collecting topographic information on landscapes. Topographic LiDAR is expected to see a growth rate of over 25% over the coming years. These systems can see through the forest canopy to produce accurate 3D models of landscapes, which are necessary to create contours, digital terrain models, digital surface models, and more.

CONCLUSION

In March 2020, after the first LiDAR scanner became available in the iPad Pro, The Verge put it perfectly when they said that the new LiDAR sensor is an AR hardware solution in search of software. While LiDAR has gradually found increasing usage, it is still a powerful new technology with burgeoning commercial usage. Enterprising app developers are looking for new ways to use it to empower consumers and businesses alike.

For supplementary viewing on the inner workings of the technology, check out this great introduction below, courtesy of Neon Science.

Top Mobile Marketing Trends Driving Success in 2021

Mobile app marketing is an elusive and constantly evolving field. For mobile app developers, getting new users to install games is relatively cheap at just $1.47 per user, while retaining them is much more difficult. It costs on average $43.88 to prompt a customer to make an in-app purchase, according to Liftoff. An effective advertising strategy will make or break your user acquisition—and your bank. In 2019, in-game ads made up 17% of all revenue. By 2024, that number is expected to triple.

2020 was a year that saw drastic changes in lifestyle—mobile app users were no exception. What trends are driving app developers to refine their advertising and development tactics in 2021? Check out our rundown below.

Real Time Bidding


In-app bidding is an advanced advertising method that enables mobile publishers to sell their ad inventory in an automated auction. The technology is not new—it has been around since 2015, when it was primarily used on desktop. Over the past few years, however, both publishers and advertisers have benefited from in-app bidding, eschewing the traditional waterfall method.

In-app bidding enables publishers to sell their ad space at auction, with advertisers bidding against one another simultaneously. The dense competition yields a higher price (CPM) for publishers. For advertisers, bidding decreases fragmentation between demand sources, since they can bid on many at once. In the traditional waterfall method, ad mediation platforms prioritize ad networks they’ve worked with in the past before passing the impression on to premium ad networks. In-app bidding changes the game by enabling publishers to offer their inventory to auctions that include a much wider swath of advertisers beyond the traditional waterfall.
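The difference between the two approaches can be sketched in a few lines. The network names and CPM figures below are made up for illustration:

```swift
struct Bid { let network: String; let cpm: Double }

// Waterfall: networks are tried in a fixed priority order and the first
// one willing to fill the impression wins, even if a later network
// would have paid more. A nil entry means that network passed.
func waterfallWinner(priorityOrder: [Bid?]) -> Bid? {
    priorityOrder.compactMap { $0 }.first
}

// In-app bidding: every demand source bids simultaneously and the
// highest CPM wins.
func auctionWinner(bids: [Bid]) -> Bid? {
    bids.max { $0.cpm < $1.cpm }
}

let demand = [
    Bid(network: "NetworkA", cpm: 4.50),
    Bid(network: "NetworkB", cpm: 6.25),
    Bid(network: "NetworkC", cpm: 5.10),
]
```

In a waterfall keyed to historical priority, NetworkA fills first at $4.50; at auction, NetworkB wins at $6.25, and the publisher captures the higher price.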

Bidding benefits all parties. App publishers see increased demand for ad inventory, advertisers access more inventory, and app users see more relevant ads. In 2021, many expect in-app bidding to gain more mainstream popularity. Check out this great rundown by AdExchanger for more information on this exciting new trend.

Rewarded Ads Still King


We have long championed rewarded ads on the Mystic Media blog. Rewarded ads offer in-game rewards to users who voluntarily choose to view an ad. Everyone wins—users get tangible rewards for their time, publishers get advertising revenue and advertisers get valuable impressions.

App usage data from 2021 only increases our enthusiasm for the format. 71% of mobile gamers want the ability to choose whether or not to view an ad, and 31% of gamers said rewarded video prompted them to browse for products within a month of seeing them. Leyi Games implemented rewarded video and improved player retention while bringing in an additional $1.5 million USD.

Facebook’s 2020 report showed that gamers find rewarded ads to be the least disruptive ad format, leading to longer gameplay sessions and more opportunities for content discovery.

Playable Ads

Playable ads have emerged as one of the foremost employed advertising tactics for mobile games. Playable ads enable users to sample gameplay by interacting with the ad. After a snippet of gameplay, the ad transitions into a call to action to install the game.

The benefits are obvious. If the game is fun and absorbing to the viewer, it has a much better chance of getting installed. By putting the audience in the driver’s seat, playable ads drive increased retention rates and a larger number of high lifetime value (LTV) players.

Check out three examples of impactful playable ads compiled by Shuttlerock.

Short Ads, Big Appeal

As we are bombarded with more and more media on a daily basis, finding a way to deliver a concise message while cutting through the clutter can be exceptionally difficult. However, recent research from MAGNA, IPG Media Lab, and Snap Inc. shows it may be well worth it.

Studies show 6-second video ads drive nearly identical brand preference and purchase intent to 15-second ads. Whereas short-form ads were predominantly employed to grow awareness, marketers now understand that longer ads are perceived by the user as more intrusive, and that they can get just as much ROI out of shorter and less expensive content.

Check out the graph below, breaking down the efficacy of 6 second vs. 15 second ads via Business of Apps.


Conclusion

Mobile advertisers need to think big picture in terms of both their target customer and how they format their ads to best engage their audience. While the trends we outlined are currently in the zeitgeist, ultimately what matters most is engaging app users with effective content that delivers a valuable message without intruding on their experience on the app.

For supplementary reading on mobile marketing, check out our blog on the Top Mobile Ad Platforms You Need to Know for 2021.

Learn More About Triggering Augmented Reality Experiences with AR Markers

We expect a continued increase in the utilization of AR in 2021. The iPhone 12 contains LiDAR technology, which enables the use of ARKit 4, greatly enhancing the possibilities for developers. When creating an AR application, developers must consider a variety of methods for triggering the experience and answer several questions before determining what approach will best facilitate the creation of a digital world for their users. For example, what content will be displayed? Where will this content be placed, and in what context will the user see it?

Markerless AR can best be used when the user needs to control the placement of the AR object. For example, the IKEA Place app allows the user to place furniture in their home to see how it fits.


Location-based AR roots an AR experience to a physical space in the world, as we explored previously in our blog Learn How Apple Tightened Their Hold on the AR Market with the Release of ARKit 4. ARKit 4 introduces Location Anchors, which enable developers to set virtual content at specific geographic coordinates (latitude, longitude, and altitude). To provide more accuracy than location alone, location anchors also use the device’s camera to capture landmarks and match them with a localization map downloaded from Apple Maps. Location anchors greatly enhance the potential for location-based AR; however, the possibilities are limited to the 50-plus cities in which Apple has enabled them.

Marker-based AR remains the most popular method among app developers. When an application needs to know precisely what the user is looking at, accept no substitute. In marker-based AR, 3D AR models are generated using a specific marker, which triggers the display of virtual information. There are a variety of AR markers that can trigger this information, each with its own pros and cons. Below, please find our rundown of the most popular types of AR markers.

FRAMEMARKERS


The most popular AR marker is a framemarker, or border marker. It’s usually a 2D image printed on a piece of paper with a prominent border. During the tracking phase, the device will search for the exterior border in order to determine the real marker within.

Framemarkers are similar to QR codes in that both are printed codes that handheld devices scan; however, framemarkers trigger AR experiences, whereas QR codes redirect the user to a web page. Framemarkers are a straightforward and effective solution.


Framemarkers are particularly popular in advertising applications. Absolut Vodka’s Absolut Truth application enabled users to scan a framemarker on the label of their bottle to generate a slew of additional information, including recipes and ads.

GameDevDad on YouTube offers a full tutorial on how to create framemarkers from scratch using the Vuforia Augmented Reality SDK below.

 

NFT MARKERS


NFT, or Natural Feature Tracking, markers enable cameras to trigger an AR experience without borders. The framework takes an image and distills it down to its distinctive visual features.


Processing those features yields a fingerprint of the image that the camera can recognize in the wild and use to generate the AR experience.


The quality and stability of NFT markers can vary based on the framework employed. For this reason, they are used less frequently than border markers, but they function as a more visually subtle alternative. A scavenger hunt or an AR game might hide key information in NFT markers.

Treasury Wine Estates’ Living Wine Labels app tracks the natural features of its wine bottle labels to create an AR experience that tells the story of its products.

OBJECT MARKERS


A toy car, for example, can be converted into an object data file using the Vuforia Object Scanner.


Advancements in technology have enabled mobile devices to perform SLAM (simultaneous localization and mapping). The device camera can extract information about a scene in real time and use it to place virtual objects within it. In some frameworks, physical objects can become 3D markers. Vuforia Object Scanner is one such framework, creating object data files that applications can use as targets. Virtual Reality Pop offers a great rundown of the best object recognition frameworks for AR.

RFID TAGS

Although RFID tags are primarily used for short-distance wireless communication and contact-free payment, they can also be used to trigger location-based virtual information.

While RFID tags are not widely employed as AR triggers, several researchers have written articles about the potential uses of RFID with AR. Researchers at the ARATLab at the National University of Singapore have combined augmented reality and RFID for the assembly of objects with embedded RFID tags, showing people how to properly assemble the parts, as demonstrated in the video below.

SPEECH MARKERS

Speech can also be used as a non-visual AR marker. The most common application would be AR glasses or a smart windshield that displays information requested by the user via voice commands.

CONCLUSION

Think like a user—it’s a staple credo for app developers, and it’s no less relevant in crafting AR experiences. Each AR trigger offers unique pros and cons. We hope this rundown has helped you decide which marker type is best suited to your application.

In our next article, we will explore the innovation at the heart of AIoT, the intersection of AI and the Internet of Things.

Learn How Apple Tightened Their Hold on the AR Market with the Release of ARKit 4

Since the explosive launch of Pokemon Go, AR technologies have vastly improved. Our review of the iPhone 12 concluded that as Apple continues to optimize its hardware, AR will become more prominent in both applications and marketing.

At the 2020 WWDC in June, Apple announced ARKit 4, their latest iteration of the famed augmented reality platform. ARKit 4 features some vast improvements that help Apple tighten their hold on the AR market.

LOCATION ANCHORS

ARKit 4 introduces location anchors, which allow developers to set virtual content at specific geographic coordinates (latitude, longitude, and altitude). When rebuilding the data backend for Apple Maps, Apple collected camera and 3D LiDAR data from city streets across the globe. ARKit downloads the virtual map surrounding your device from the cloud and matches it against the device’s camera feed to determine your location. The kicker: all processing happens on-device using machine learning, so your camera feed never leaves your phone.


Devices with an A12 chip or later can run geo-tracking; however, location anchors require Apple to have mapped the area previously. As of now, they are supported in over 50 cities in the U.S. As the availability of compatible devices increases and Apple continues to expand its mapping project, location anchors will find increased usage.

DEPTH API

ARKit’s new Depth API harnesses the LiDAR scanner available on iPad Pro and iPhone 12 devices to introduce advanced scene understanding and enhanced pixel depth information in AR applications. When combined with 3D mesh data derived from Scene Geometry, which creates a 3D matrix of readings of the environment, the Depth API vastly improves virtual object occlusion features. The result is the instant placement of digital objects and seamless blending with their physical surroundings.
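The occlusion logic this enables boils down to a per-pixel depth comparison. The function below is our own simplification, not the ARKit API itself:

```swift
// If the real world at a pixel is closer to the camera than the virtual
// object, the real world should hide the object at that pixel.
func virtualPixelVisible(realDepthMeters: Float, virtualDepthMeters: Float) -> Bool {
    // Draw the virtual object only where nothing physical sits in front of it.
    virtualDepthMeters <= realDepthMeters
}

// A virtual chair placed 2 m away is hidden behind a wall at 1.5 m,
// but visible in front of a wall at 3 m.
```

ARKit performs this comparison across the whole frame using the LiDAR-derived depth map, which is why virtual objects can slide convincingly behind real furniture and people.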

FACE TRACKING


Face tracking has found an exceptional application in Memoji, which enables fun AR experiences on devices with a TrueDepth camera. ARKit 4 expands face tracking support to devices without a TrueDepth camera, as long as they have at least an A12 chip. Devices with a TrueDepth camera can now leverage ARKit 4 to track up to three faces at once, opening up many fun potential applications for Memoji.

VIDEO MATERIALS WITH REALITYKIT


Alongside ARKit 4, RealityKit adds support for applying video textures and materials to AR experiences. For example, developers can place a virtual television on a wall, complete with realistic attributes including light emission, texture roughness, and even audio. Consequently, AR developers can build even more immersive and realistic experiences for their users.

CONCLUSION

Apple and Google are competing for supremacy when it comes to AR development. While the two companies’ goals and research overlap, Apple has a major leg up on Google in its massive base of high-end devices and its ability to imbue them with the necessary sensors, like TrueDepth cameras and LiDAR.

ARKit has been the biggest AR development platform since it hit the market in 2017. ARKit 4 provides the technical capabilities and tools for innovators and creative thinkers to build a new world of virtual integration.

How App Developers Can Leverage the iPhone 12 to Maximize Their Apps

On October 23rd, four brand new iPhone 12 models were released to retailers. As the manufacturer of the most popular smartphone model in the world, whenever Apple delivers a new device, it’s front-page news. Mobile app developers looking to capitalize on new devices must stay abreast of the latest technologies, how they empower applications, and what they signal about where the future of app development is headed.

With that in mind, here is everything app developers need to know about the latest iPhone models.

BIG DEVELOPMENTS FOR AUGMENTED REALITY

LiDAR is a method for measuring distances (ranging) by illuminating the target with laser light and measuring the reflection with a sensor


On a camera level, the iPhone 12 includes significant advancements. It is the first phone able to record and edit HDR video in Dolby Vision. What’s more, Apple has enhanced the iPhone’s LiDAR sensor capabilities and added a third telephoto lens.

The opportunities for app developers are significant. For AR developers, this is a breakthrough—enhanced LiDAR on the iPhone 12 means a broad market will have access to enhanced depth perception, enabling smoother AR object placement. The LiDAR sensor also produces a 6x increase in autofocus speed in low-light settings.

The potential use cases are vast. An enterprise-level application could leverage the enhanced camera to show the inner workings of a complex machine and provide solutions. Dimly lit rooms can now house AR objects, such as Christmas decorations. With the iPhone 12, AR developers can count on a growing market of app users able to do much more with less light and scan rooms in greater detail.

The iPhone 12’s enhanced LiDAR Scanner will enable iOS app developers to employ Apple’s ARKit 4 to attain enhanced depth information through a brand-new Depth API. ARKit 4 also introduces location anchors, which enable developers to place AR experiences at a specific point in the world in their iPhone and iPad apps.

With iPhone 12, Apple sends a clear message to app developers: AR is on the rise.

ALL IPHONE 12 MODELS SUPPORT 5G


The entire iPhone 12 family of devices supports 5G on both sub-6GHz and mmWave networks. When iPhone 12 devices leverage 5G alongside the Apple A14 Bionic chip, they can integrate with IoT devices and run ML algorithms at a much higher level.

5G opens an endless array of possibilities for app developers—enhanced UX, more accurate GPS, improved video apps, and more. 5G will reduce dependency on hardware as app data is stored in the cloud with faster transfer speeds. In addition, it will enable even more potential innovation for AR applications.

5G represents a new frontier for app developers, IoT, and much more. Major carriers have been rolling out 5G networks over the past few years, but access points remain primarily in major cities. Regardless, 5G will gradually become the norm over the course of the next few years and this will expand the playing field for app developers.

WHAT DOES IT MEAN?

Beyond the bells and whistles, the iPhone 12 sends a very clear message about what app developers can anticipate will have the biggest impact on the future of app development: AR and 5G. Applications employing these technologies will have massive potential to evolve as the iPhone 12 and its successors become the norm and older devices are phased out.

How to Leverage AR to Boost Sales and Enhance the Retail Experience

The global market for VR and AR in retail will reach $1.6 billion by 2025, according to research conducted by Goldman Sachs. Even after years of growing popularity, effectively employed augmented reality experiences still feel about as explicitly futuristic to the end user as anything popular technology has created.

We have covered the many applications of AR as an indoor positioning mechanism on the Mystic Media blog, but when it comes to retail, applications of AR are providing real revenue boosts and increased conversion rates.

Augmented Reality (AR) History


While working as an associate professor at Harvard University in 1968, computer scientist Ivan Sutherland, aka the “Father of Computer Graphics”, created an AR head-mounted display system that constituted the first AR technology. In the succeeding decades, AR visual displays gained traction in universities, companies, and national agencies as a way to superimpose vital information on physical environments, showing great promise for aviation, military, and industrial applications.

Fast forward to 2016, the sensational launch of Pokemon GO changed the game for AR. Within one month, Pokemon GO reached 45 million users, showing there is mainstream demand for original and compelling AR experiences.

Cross-Promotions

Several big brands took advantage of Pokemon GO’s success through cross-promotions. McDonald’s paid Niantic to turn 3,000 Japan locations into gyms and PokeStops, a partnership that has recently ended. Starbucks took advantage of Pokemon GO’s success as well, enabling certain locations to function as PokeStops and gyms and offering a special Pokemon GO Frappuccino.

One of the ways retailers can enter into the AR game without investing heavily in technology is to cross-promote with an existing application.

In 2018, Walmart launched a partnership with Jurassic World’s AR game: Jurassic World Alive. The game is similar to Pokemon GO, using a newly accessible Google Maps API to let players search for virtual dinosaurs and items on a map, as well as battle other players. Players can enter select Walmart locations to access exclusive items.

Digital-Physical Hybrid Experiences

The visual augmentation produced by AR transforms physical spaces by leveraging the power of computer-generated graphics, an aesthetic punch-up proven to increase foot traffic. While some retailers are capitalizing on these hybrid experiences through cross-promotions, others are creating their own hybrid experiential marketing events.

Foot Locker developed an AR app that used geolocation to create a scavenger hunt in Los Angeles, leading customers to the location where they could purchase a pair of LeBron 16 King Court Purple shoes. Within two hours of launching the app, the shoes sold out.

AR also has proven potential to help stores create hybrid experiences through indoor navigation. Users can access an augmented view of the store through their phones, which makes in-store navigation easy. Users scan visual markers, recognized by Apple’s ARKit, Google’s ARCore, and other AR SDKs, to establish their position, and AR indoor navigation applications can then offer specific directions to their desired product.

Help Consumers Make Informed Choices


AR is commonly employed to enrich consumers’ understanding of potential purchases and prompt them to buy. For example, the “IKEA Place” app allows shoppers to see IKEA products in a superimposed graphics environment. IKEA boasts the app gives shoppers 98% accuracy in buying decisions.

Converse employs a similar application, the “Converse Sampler App”, which enables users to view what a shoe will look like on their feet through their device’s camera. The application increases customer confidence, helping them make the decision to purchase.

Treasury Wine Estates enhances the consumer experience with “Living Wine Labels”: AR labels that bring the history of the vineyard to life and provide users with supplementary information, including the vineyard the wine came from and tasting notes.

Conclusion

AR enables striking visuals that captivate customers. As a burgeoning tool, AR enables companies to get creative and build innovative experiences that capture their customers’ imagination. Retailers who leverage AR will seize an advantage both in the short term and in the long term as the technology continues to grow and evolve.

How Wearables Help Fight Covid-19

The Covid-19 pandemic forced lifestyle changes on the global population unlike any other event in recent history. As companies like Amazon and Zoom reap major profits from the increased demand for online ordering and teleconferencing, wearable app developers are taking a particular interest in how they can do their part to help quell the pandemic.

It’s easy to take a wearable device that tracks key health metrics and market it as helping to detect Covid-19. It’s much harder to create a device with a proven value in helping prevent the spread of the disease. Here’s our rundown of what you need to know about how wearables can help fight the Covid-19 pandemic.

WEARABLES CANNOT DIAGNOSE COVID-19

In an ideal world, your smartwatch could analyze your body on a molecular level to detect whether you have Covid-19. Technology has not yet evolved to the point where this is possible. The only way to diagnose Covid-19 is through a test administered by a health-care professional.

Fortunately, there are several ways in which wearables can help fight the spread of Covid-19 that do not involve direct diagnosis.

WEARABLES CAN DETECT EARLY SYMPTOMS

Wearables make it easy for users to monitor their general health and spot deviations from their norms. Although wearables cannot tell the difference between the flu and Covid-19, they can collect data that indicates the early symptoms of an illness and warn their users.

Fitbit CEO James Park hopes the company’s devices will eventually sense these changes in health data and instruct users to quarantine one to three days before symptoms start, following up with a coronavirus test for confirmation.
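The early-warning idea described above can be sketched in a few lines. The class below is a hypothetical illustration (in TypeScript for readability), not any vendor’s actual algorithm; the 14-day baseline window and 5 bpm threshold are made-up values, not clinical guidance.

```typescript
// Hypothetical sketch: flag deviations from a user's resting-heart-rate baseline.
// Window size and threshold are illustrative values, not clinical guidance.
class RestingHeartRateMonitor {
  private history: number[] = [];

  constructor(
    private windowSize = 14, // days used to establish the baseline
    private thresholdBpm = 5 // bpm above baseline considered notable
  ) {}

  // Record today's resting heart rate; return true if it deviates notably
  // from the average of the preceding baseline window.
  record(bpm: number): boolean {
    let flagged = false;
    if (this.history.length >= this.windowSize) {
      const window = this.history.slice(-this.windowSize);
      const baseline = window.reduce((a, b) => a + b, 0) / window.length;
      flagged = bpm - baseline > this.thresholdBpm;
    }
    this.history.push(bpm);
    return flagged;
  }
}
```

A real product would combine several signals (temperature, respiration, sleep) rather than heart rate alone, but the principle is the same: establish a personal baseline, then alert on deviation.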

Oura Ring

Another big player in the Covid-19 wearables space is the Oura ring. The Oura ring is a smart ring that tracks activity, sleep, temperature, pulse, and heart rate. Since the outbreak, it has emerged as a major tool for detecting early symptoms like increased resting heart rate. Most notably, NBA players in Orlando, Florida use the device to monitor their health and detect early symptoms.

WEARABLES HELP KEEP FRONTLINE HEALTH WORKERS SAFE

John A. Rogers, a biomedical engineer at Northwestern University, has been developing a wearable patch that attaches to the user’s throat and helps monitor coughing and respiratory symptoms like shortness of breath.

Wearable patch developed by John A. Rogers of Northwestern University

One of the planned uses of this wearable is to protect frontline health-care workers by detecting if they contract the virus and become sick.

In addition, wearables can help monitor symptoms in hospitalized patients, reducing the chance of spreading the infection by limiting workers’ exposure to infected patients.

ASYMPTOMATIC CARRIERS ARE ANOTHER STORY

Although wearables can collect and identify health data that points toward potential infections, recognizing asymptomatic carriers of the Coronavirus is another story. When carriers show no symptoms, the only way to determine if they have been infected is through a test.

TAKEAWAY

Unless there are significant technological leaps in Covid-19 testing, wearables will not be able to detect infections directly. However, they can help catch symptoms early to prevent spread. Their ability to assist in fighting the pandemic represents a major growth sector. We look forward to seeing how wearable developers will innovate to protect the health of users and our future.

The Future of Indoor GPS Part 5: Inside AR’s Potential to Dominate the Indoor Positioning Space

In the previous installment of our blog series on indoor positioning, we explored how RFID Tags are finding traction in the indoor positioning space. This week, we will examine the potential for AR Indoor Positioning to receive mass adoption.

When Pokemon Go accrued 550 million installs and made $470 million in revenue in 2016, AR became a household-name technology. The release of ARKit and ARCore significantly enhanced the ability of mobile app developers to create popular AR apps. However, since Pokemon Go’s explosive release, no application has brought AR technology back to the forefront of the public conversation.

When it comes to indoor positioning technology, AR has major growth potential. GPS is the most prevalent technology in the navigation space, but it cannot provide accurate positioning within buildings. GPS can get a rough fix in large buildings such as airports, but it cannot pinpoint the floor number or more specific locations. Where GPS fails, AR-based indoor positioning systems can flourish.

HOW DOES IT WORK?

AR indoor navigation consists of three modules: Mapping, Positioning, and Rendering.

via Mobi Dev

Mapping: creates a map of an indoor space to make a route.

Rendering: manages the design of the AR content as displayed to the user.

Positioning: the most complex module. There is no accurate way, using only the technology built into the device, to determine the precise location of users indoors, including the exact floor.

AR-based indoor positioning solves that problem by using visual markers, or AR markers, to establish the user’s position. Visual markers are recognized by Apple’s ARKit, Google’s ARCore, and other AR SDKs. When the user scans a marker, the application can identify exactly where the user is and provide a navigation interface. The further the user is from the last visual marker, the less accurate their location information becomes; to maintain accuracy, developers recommend placing visual markers every 50 meters.
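To make the marker-scanning flow concrete, here is a minimal TypeScript sketch of the positioning module. The marker IDs and surveyed coordinates are hypothetical; a real application would recognize markers with ARKit or ARCore and track movement relative to the last scan.

```typescript
// Minimal sketch of marker-based indoor positioning. Marker IDs and
// coordinates below are hypothetical examples.
interface IndoorPosition {
  x: number; // meters from a building origin
  y: number;
  floor: number;
}

class MarkerPositioningSystem {
  // Surveyed positions of visual markers placed roughly every 50 meters.
  constructor(private markers: Map<string, IndoorPosition>) {}

  // Scanning a marker anchors the user at that marker's surveyed position;
  // unknown markers return undefined.
  positionForScannedMarker(id: string): IndoorPosition | undefined {
    return this.markers.get(id);
  }
}
```

From the anchored position, the rendering module can then draw directions relative to the camera view, with accuracy degrading as the user moves away from the last scanned marker.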

Whereas beacon-based indoor positioning can become expensive quickly, running $10-20 per beacon with a working range of around 10-100 meters, AR visual markers are the more precise and cost-effective solution, with accuracy down to within millimeters.

Via View AR

CHALLENGES

Performance can decline as more markers are added to an AR-based VPS (visual positioning system), because every marker must be checked to find a match. If the application is set up for a small building requiring 10-20 markers, that is not an issue. If it’s a chain of supermarkets requiring thousands of visual markers across a city, it becomes more challenging.

Luckily, a coarse GPS fix can determine which building the user is in, limiting the number of visual markers the application must check. Innovators in the AR-based indoor positioning space are using hybrid approaches like this to maximize both the precision and the scale of AR positioning technologies.
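A rough sketch of that hybrid filtering step: a coarse GPS fix selects the nearest building, and only that building’s markers are considered for matching. The building names, coordinates, and marker IDs below are illustrative.

```typescript
// Hedged sketch of the hybrid approach: coarse GPS picks the building,
// shrinking the marker set that must be matched. All data is illustrative.
interface Building {
  name: string;
  lat: number;
  lon: number;
  markerIDs: string[];
}

function candidateMarkers(lat: number, lon: number, buildings: Building[]): string[] {
  if (buildings.length === 0) return [];
  const sq = (v: number) => v * v;
  // Nearest building by squared coordinate distance -- adequate at city scale
  // for a sketch, where GPS error is far smaller than inter-building distance.
  const dist2 = (b: Building) => sq(b.lat - lat) + sq(b.lon - lon);
  const nearest = buildings.reduce((best, b) => (dist2(b) < dist2(best) ? b : best));
  return nearest.markerIDs;
}
```

With thousands of markers city-wide, this reduces each scan’s search space to the few dozen markers inside one store.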

CONCLUSION

AR-based indoor navigation has seen few real-world deployments and requires further technical development before it can roll out at scale, but the technological evidence indicates that it will be one of the major indoor positioning technologies of the future.

This entry concludes our blog series on indoor positioning. We hope you enjoyed it and learned from it! In case you missed them, check out our past entries:

The Future of Indoor GPS Part 1: Top Indoor Positioning Technologies

The Future of Indoor GPS Part 2: Bluetooth 5.1’s Angle of Arrival Ups the Ante for BLE Beacons

The Future of Indoor GPS Part 3: The Broadening Appeal of Ultra Wideband

The Future of Indoor GPS Part 4: Read the Room with RFID Tags

iOS 14 Revamps the OS While Android 11 Offers Minor Improvements

Every time Apple announces a new device or OS, it is a cultural event for both consumers and app developers. When Apple announced iOS 14 during the WWDC 2020 keynote in June, few anticipated it would be one of the biggest iOS updates to date. With a host of new features and UI enhancements, the release of iOS 14 has become one of the most hotly anticipated moments in technology this year.

On the other side of the OS war, Google has released four developer previews of its latest offering, Android 11, in 2020. Android 11 is currently available as a beta release ahead of its target launch in August/September.

The two biggest OS titans have effectively upped the ante on their rivalry. Here is a summary of everything you need to know about how they stack up against one another:

iOS 14

iOS 14 is a larger step forward for iOS than Android 11 is for Android. Relative to iOS 13, it revamps the OS to be smarter and more user-friendly while streamlining group conversations.

While iMessage remains the most popular messaging platform on the market, competitors like WhatsApp, Discord and Signal include a variety of features previously unavailable on iOS devices. iOS 14 closes the gap with its competitors, offering a host of UI enhancements specifically targeting group conversations—one of the most popular features on iMessage:

  • Pinned Conversations: Pin the most important conversations to the top of your profile to make them easier to access.
  • Group Photos: iOS 14 enhances group conversations by allowing users to give group conversations a visual identity using a photo, Memoji, or emoji.
  • Mentions: Users can now directly tag users in their messages within group conversations. When a user is mentioned, their name will be highlighted in the text and users can customize notifications so that they only receive notifications when they are mentioned.
  • Inline Replies: Within group conversations, users can select a specific message and reply directly to it.

One of the major upgrades in iOS 14 is the inclusion of Widgets on the home screen. Widgets on the home screen have been redesigned to offer more information at a glance. They are also customizable to give the user more flexibility in how they arrange their home screen.

iOS 14 introduces the App Library, a view that automatically organizes applications into categories, offering a simple, easy-to-navigate layout. The App Library makes all of a user’s applications visible at once and allows users to customize how their applications are categorized.

In addition to incorporating a variety of UI enhancements, iOS 14 is significantly smarter. Siri is equipped with 20x more facts than it had three years ago. iOS 14 also improves language translation, offering 11 languages; users can download the languages they need, keeping translations private and available without an internet connection.

Apple has also introduced a number of UI enhancements to help make the most of screen real estate:

Compact Calls condense the amount of screen real estate occupied by phone calls from iPhone, FaceTime, and third-party apps, allowing users to continue viewing information on their screen both when a call comes in and when they are on a call.

Picture in Picture mode similarly allows users to condense their video display so that it doesn’t take up their entire screen, allowing the user to navigate their device without pausing their video call or missing part of a video that they are watching.

ANDROID 11

In comparison to iOS 14, Android 11 is not a major visual overhaul of the platform. However, it does offer an array of new features that enhance the user experience.

  • Android 11 introduces native screen recording, a useful feature already included in iOS that is particularly helpful when demonstrating how applications work.
  • While recording videos, Android allows users to mute notifications which would otherwise cause the recording to stop.
  • Users can now modify the touch sensitivity of their screen, increasing or decreasing sensitivity to their liking.
  • Android 11 makes viewing a history of past notifications as easy as it has ever been using the Notification History button.
  • In the current OS, when users grant an Android app a permission, the decision is set in stone for all future usage: the application either has permanent access, access only during usage, or is blocked. Android 11 introduces one-time permissions, which let users grant an application access just once; the app must ask again the next time it is opened.
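As a rough illustration (not the actual Android API), the difference between the two permission models can be captured in a tiny state machine, where a one-time grant survives only until the app is closed:

```typescript
// Illustrative model of one-time permissions vs. the older grant model.
// Not the Android API -- state names and methods here are hypothetical.
type Grant = "allowed" | "allowed-once" | "denied" | "unset";

class PermissionState {
  private grant: Grant = "unset";

  // The user answers the permission prompt.
  respond(choice: Grant): void {
    this.grant = choice;
  }

  // Does the app currently hold the permission?
  isGranted(): boolean {
    return this.grant === "allowed" || this.grant === "allowed-once";
  }

  // Must the app prompt the user again?
  needsPrompt(): boolean {
    return this.grant === "unset";
  }

  // Closing the app revokes one-time grants, forcing a fresh prompt next launch.
  onAppClosed(): void {
    if (this.grant === "allowed-once") this.grant = "unset";
  }
}
```

Under the old model, `respond("allowed")` is permanent; under one-time permissions, the same capability expires with the session.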

IOS 14 VS. ANDROID 11

While Android 11 offers a variety of small improvements, iOS 14 provides the iOS platform with a major visual overhaul. This year, it is safe to say that iOS 14 wins the battle for the superior upgrade. With both slated for a fall release, how users respond to the new OSes remains to be seen.