
AIoT: How the Intersection of AI and IoT Will Drive Innovation for Decades to Come


We have covered the evolution of the Internet of Things (IoT) and Artificial Intelligence (AI) over the years as they have gained prominence. IoT devices collect a massive amount of data: Cisco projects that by the end of 2021, IoT devices will generate over 800 zettabytes of data per year. Meanwhile, AI algorithms can parse big data and teach themselves to analyze it, identify patterns, and make predictions. Both technologies enable a seemingly endless array of applications and have had a massive impact on many industry verticals.

What happens when you merge them? The result is aptly named the AIoT (Artificial Intelligence of Things) and it will take IoT devices to the next level.

WHAT IS AIOT?

AIoT is any system that integrates AI technologies with IoT infrastructure, enhancing efficiency, human-machine interactions, data management and analytics.

IoT enables devices to collect, store, and analyze big data, with device operators and field engineers typically controlling the devices. AI enhances these existing systems, enabling them to take the next step: determining and taking the appropriate action based on analysis of the data.

By embedding AI into infrastructure components, including programs, chipsets, and edge computing, AIoT enables intelligent, connected systems to learn, self-correct and self-diagnose potential issues.


One common example comes from the surveillance field. A surveillance camera can be used as an image sensor, sending every frame to an IoT system that analyzes the feed for certain objects. With AI, the camera can analyze each frame and only send those in which it detects a specific object—significantly speeding up the process while reducing the amount of data generated, since irrelevant frames are excluded.
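The filtering pattern above can be sketched in a few lines of Python. This is a hypothetical edge-filtering loop, not any vendor's actual API: the `detect` function and `Frame` type are stand-ins for a real on-device object detection model and video source.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Frame:
    frame_id: int
    labels: List[str]  # objects a detector found in this frame (stand-in data)

def filter_frames(frames: List[Frame], target: str,
                  detect: Callable[[Frame], List[str]]) -> List[Frame]:
    """Forward only the frames in which the detector reports the target object."""
    return [f for f in frames if target in detect(f)]

# Stand-in detector: in a real system this would run an on-device vision model.
def detect(frame: Frame) -> List[str]:
    return frame.labels

frames = [
    Frame(1, ["car"]),
    Frame(2, []),
    Frame(3, ["person", "car"]),
]

relevant = filter_frames(frames, "person", detect)
print([f.frame_id for f in relevant])  # only frame 3 is forwarded upstream
```

Only the third frame crosses the network, which is the bandwidth saving the paragraph describes.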


While AIoT will no doubt find a variety of applications across industries, the three segments we expect to see the most impact on are wearables, smart cities, and retail.

WEARABLES


The global wearable device market is estimated to exceed $87 billion by 2022. AI on wearable devices such as smartwatches opens up a number of potential applications, particularly in the healthtech sector.

Researchers in Taiwan have been studying the potential of an AIoT wearable system for electrocardiogram (ECG) analysis and cardiac disease detection. The system integrates a wearable IoT-based sensor with an AI platform: the wearable collects real-time health data and stores it in the cloud, where an AI algorithm detects disease with an average accuracy of 94%. Meanwhile, the Apple Watch Series 4 and later include an ECG app which captures symptoms of irregular, rapid, or skipped heartbeats.

Although this device is still in development, we expect to see more coming out of the wearables segment as 5G enables more robust cloud-based processing power, taking the pressure off the devices themselves.

SMART CITIES

We’ve previously explored the future of smart cities in our blog series A Smarter World. With cities eager to invest in improving public safety, transport, and energy efficiency, AIoT will drive innovation in the smart city space.

There are a number of potential applications for AIoT in smart cities. AIoT’s ability to analyze data and act opens up a number of possibilities for optimizing energy consumption for IoT systems. Smart streetlights and energy grids can analyze data to reduce wasted energy without inconveniencing citizens.

Some smart cities have already adopted AIoT applications in the transportation space. New Delhi, which suffers some of the worst traffic in the world, features an Intelligent Transport Management System (ITMS) that makes real-time dynamic decisions to improve traffic flow.

RETAIL

AIoT has the potential to enhance the retail shopping experience with digital augmentation. The same smart cameras we referenced earlier are being used to detect shoplifters. Walmart recently confirmed it has installed smart security cameras in over 1,000 stores.


One of the big innovations for AIoT involves smart shopping carts. Grocery stores in both Canada and the United States are experimenting with high-tech shopping carts, including one from Caper which uses image recognition and built-in sensors to determine what a person puts into the shopping cart.

The potential for smart shopping carts is vast—these carts will be able to inform customers of deals and promotions, recommend products based on their buying decisions, display an itemized list of their current purchases, and provide indoor navigation to lead them to their desired items.

A smart shopping cart company called IMAGR recently raised $14 million in a pre-Series A funding round, pointing toward a bright future for smart shopping carts.

CONCLUSION

AIoT represents the intersection of AI, IoT, 5G, and big data. 5G enables the cloud processing power for IoT devices to employ AI algorithms to analyze big data to determine and enact action items. These technologies are all relatively young, and as they continue to grow, they will empower innovators to build a smarter future for our world.

Learn More About Triggering Augmented Reality Experiences with AR Markers


We expect a continued increase in the utilization of AR in 2021. The iPhone 12 contains LiDAR technology, which enables the use of ARKit 4, greatly enhancing the possibilities for developers. When creating an AR application, developers must consider a variety of methods for triggering the experience and answer several questions before determining what approach will best facilitate the creation of a digital world for their users. For example, what content will be displayed? Where will this content be placed, and in what context will the user see it?

Markerless AR can best be used when the user needs to control the placement of the AR object. For example, the IKEA Place app allows the user to place furniture in their home to see how it fits.


Location-based AR roots an AR experience to a physical space in the world, as we explored previously in our blog Learn How Apple Tightened Their Hold on the AR Market with the Release of ARKit 4. ARKit 4 introduces Location Anchors, which enable developers to set virtual content at specific geographic coordinates (latitude, longitude, and altitude). To provide more accuracy than location alone, location anchors also use the device's camera to capture landmarks and match them against a localization map downloaded from Apple Maps. Location anchors greatly enhance the potential for location-based AR; however, the possibilities are limited to the 50-plus cities in which Apple has enabled them.

Marker-based AR remains the most popular method among app developers. When an application needs to know precisely what the user is looking at, accept no substitute. In marker-based AR, 3D models are generated from a specific marker, which triggers the display of virtual information. A variety of AR markers can trigger this information, each with its own pros and cons. Below is our rundown of the most popular types of AR markers.

FRAMEMARKERS


The most popular AR marker is a framemarker, or border marker. It’s usually a 2D image printed on a piece of paper with a prominent border. During the tracking phase, the device will search for the exterior border in order to determine the real marker within.

Framemarkers are similar to QR codes in that both are printed codes scanned by a handheld device; however, framemarkers trigger AR experiences, whereas QR codes redirect the user to a web page. Framemarkers are a straightforward and effective solution.


Framemarkers are particularly popular in advertising applications. Absolut Vodka's Absolut Truths application enabled users to scan a framemarker on the label of their bottle to pull up additional information, including recipes and ads.

GameDevDad on YouTube offers a full tutorial on how to create framemarkers from scratch using the Vuforia Augmented Reality SDK below.

 

NFT MARKERS


NFT, or Natural Feature Tracking, markers enable cameras to trigger an AR experience without borders. The framework takes an image and distills its visual properties down into a set of distinctive feature points.


Processing these features allows the framework to anchor AR content to the image.


The quality and stability of NFT markers vary based on the framework employed. For this reason, they are less frequently used than border markers, but they function as a more visually subtle alternative. A scavenger hunt or an AR game might hide key information in NFT markers.
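To make "natural features" concrete, here is a minimal NumPy sketch of one classic feature detector, the Harris corner response. Production NFT frameworks such as Vuforia use far more robust descriptors; this toy version only shows why corners, rather than straight edges, make trackable features.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response: high where a patch is distinctive in both
    directions, which is the kind of point a tracker can lock onto."""
    Iy, Ix = np.gradient(img.astype(float))  # image gradients per axis

    def box_sum(a):
        # Sum each pixel's 3x3 neighborhood (zero-padded at the borders).
        p = np.pad(a, 1)
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    sxx, syy, sxy = box_sum(Ix * Ix), box_sum(Iy * Iy), box_sum(Ix * Iy)
    # det(M) - k * trace(M)^2 of the local structure tensor M.
    return (sxx * syy - sxy ** 2) - k * (sxx + syy) ** 2

# A white square on black: its corners are trackable, its straight edges are not.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0

R = harris_response(img)
assert R[5, 5] > 0 > R[5, 10]  # corner scores high, edge midpoint scores negative
```

A corner varies in both directions, so both eigenvalues of the structure tensor are large and the response is positive; an edge varies in only one direction, so the determinant term vanishes and the response goes negative.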

Treasury Wine Estates' Living Wine Labels app tracks the natural features of wine bottle labels to create an AR experience that tells the story of its products.

OBJECT MARKERS


A toy car, for example, can be converted into an object data file using the Vuforia Object Scanner.


Advancements in technology have enabled mobile devices to solve the problem of SLAM (simultaneous localization and mapping): the device camera extracts information about the environment in real time and uses it to place virtual objects within it. In some frameworks, physical objects can become 3D markers. The Vuforia Object Scanner is one such framework, creating object data files that can be used as targets in applications. Virtual Reality Pop offers a great rundown of the best object recognition frameworks for AR.

RFID TAGS

Although RFID tags are primarily used for short-distance wireless communication and contact-free payments, they can also be used to trigger location-based virtual information.

While RFID tags are not widely employed in AR, several researchers have written about the potential of combining RFID and AR. Researchers at the ARATLab at the National University of Singapore have combined augmented reality and RFID for the assembly of objects with embedded RFID tags, showing people how to properly assemble the parts, as demonstrated in the video below.

SPEECH MARKERS

Speech can also be used as a non-visual AR marker. The most common application would be AR glasses or a smart windshield that displays information on the screen when the user requests it via voice command.

CONCLUSION

Think like a user—it's a staple maxim for app developers and no less relevant in crafting AR experiences. Each AR trigger offers unique pros and cons. We hope this rundown helps you decide which is best equipped for your application.

In our next article, we will explore the innovation at the heart of AIoT, the intersection of AI and the Internet of Things.

Learn How Apple Tightened Their Hold on the AR Market with the Release of ARKit 4


Since the explosive launch of Pokemon Go, AR technologies have vastly improved. Our review of the iPhone 12 concluded that as Apple continues to optimize its hardware, AR will become more prominent in both applications and marketing.

At the 2020 WWDC in June, Apple announced ARKit 4, their latest iteration of the famed augmented reality platform. ARKit 4 features some vast improvements that help Apple tighten their hold on the AR market.

LOCATION ANCHORS

ARKit 4 introduces location anchors, which allow developers to set virtual content at specific geographic coordinates (latitude, longitude, and altitude). When rebuilding the data backend for Apple Maps, Apple collected camera and 3D LiDAR data from city streets across the globe. ARKit downloads the virtual map surrounding your device from the cloud and matches it with the device's camera feed to determine your location. The kicker: all processing happens via machine learning on the device, so your camera feed never leaves it.


Devices with an A12 chip or later can run geotracking; however, location anchors require Apple to have mapped the area previously. As of now, they are supported in over 50 U.S. cities. As the availability of compatible devices increases and Apple continues to expand its mapping project, location anchors will find increased usage.

DEPTH API

ARKit’s new Depth API harnesses the LiDAR scanner available on iPad Pro and iPhone 12 devices to introduce advanced scene understanding and enhanced pixel depth information in AR applications. When combined with 3D mesh data derived from Scene Geometry, which creates a 3D matrix of readings of the environment, the Depth API vastly improves virtual object occlusion features. The result is the instant placement of digital objects and seamless blending with their physical surroundings.

FACE TRACKING


Face tracking has found an exceptional application in Memojis, which enable fun AR experiences on devices with a TrueDepth camera. ARKit 4 expands face tracking support to devices without a TrueDepth camera, provided they have at least an A12 chip. Devices with a TrueDepth camera can now leverage ARKit 4 to track up to three faces at once, opening up many fun potential applications for Memojis.

VIDEO MATERIALS WITH REALITYKIT


ARKit 4 also brings with it RealityKit, which adds support for applying video textures and materials in AR experiences. For example, developers can place a virtual television on a wall, complete with realistic attributes including light emission, texture roughness, and even audio. Consequently, AR developers can build even more immersive and realistic experiences for their users.

CONCLUSION

Apple and Google are competing for supremacy when it comes to AR development. While the two companies' goals and research overlap, Apple has a major leg up on Google in its massive base of high-end devices and its ability to equip them with the necessary sensors, like TrueDepth cameras and LiDAR.

ARKit has been the biggest AR development platform since it hit the market in 2017. ARKit 4 provides the technical tools for innovators and creative thinkers to build a new world of virtual integration.

How AI Revolutionizes Music Streaming


In 2020, worldwide music streaming revenue hit $11.4 billion, 2,800% growth over the course of a decade. Some 341 million paid online streaming subscribers get their music from top services like Apple Music, Spotify, and Tidal. The competition for listeners is fierce, and each company looks to leverage every advantage it can in pursuit of higher market share.

Like all major tech conglomerates, music streaming services collect an exceptional amount of user data through their platforms and are creating elaborate AI algorithms designed to improve user experience on a number of levels. Spotify has emerged as the largest on-demand music service active today and bolstered its success through the innovative use of AI.

Here are the top ways in which AI has changed music streaming:

COLLABORATIVE FILTERING

AI has the ability to sift through a plenitude of implicit consumer data, including:

  • Song preferences
  • Keyword preferences
  • Playlist data
  • Geographic location of listeners
  • Most used devices

AI algorithms can analyze user trends and identify users with similar tastes. For example, if AI deduces that User 1 and User 2 have similar tastes, then it can infer that songs User 1 has liked will also be enjoyed by User 2. Spotify’s algorithms will leverage this information to provide recommendations for User 2 based on what User 1 likes, but User 2 has yet to hear.

The result is not only improved recommendations, but greater exposure for artists that otherwise may not have been organically found by User 2.
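The collaborative-filtering idea above can be sketched in a few lines. The data and the cosine-similarity measure below are illustrative choices; Spotify's production systems are far more sophisticated.

```python
import numpy as np

# Rows are users, columns are songs; 1 = the user liked the song (made-up data).
likes = np.array([
    [1, 1, 1, 0],  # User 1
    [1, 1, 0, 0],  # User 2
    [0, 0, 1, 1],  # User 3
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two users' like vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def recommend(user, likes):
    """Recommend songs liked by the most similar other user but new to `user`."""
    sims = [cosine_sim(likes[user], likes[v]) if v != user else -1.0
            for v in range(len(likes))]
    neighbor = int(np.argmax(sims))  # most similar other user
    return [s for s in range(likes.shape[1])
            if likes[neighbor, s] == 1 and likes[user, s] == 0]

print(recommend(1, likes))  # [2]: liked by the similar User 1, unheard by User 2
```

User 2's closest neighbor is User 1, so song 2 (which User 1 liked but User 2 has not heard) becomes the recommendation, exactly the inference described above.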

NATURAL LANGUAGE PROCESSING

Natural Language Processing (NLP) is a burgeoning field in AI. Previously on our blog, we covered GPT-3, the latest NLP technology developed by OpenAI. Music streaming services are well-versed in the technology and leverage it in a variety of ways to enhance the user experience.


Algorithms scan a track’s metadata, in addition to blog posts, discussions, and news articles about artists or songs on the internet to determine connections. When artists/songs are mentioned alongside artists/songs the user likes, algorithms make connections that fuel future recommendations.

GPT-3 is not perfect; its ability to track sentiments lacks nuance. As Sonos Radio general manager Ryan Taylor recently said to Fortune Magazine: “The truth is music is entirely subjective… There’s a reason why you listen to Anderson .Paak instead of a song that sounds exactly like Anderson .Paak.”

As NLP technology evolves and algorithms extend their grasp of the nuances of language, so will the recommendations provided to you by music streaming services.

AUDIO MODELS


AI can study audio models to categorize songs based exclusively on their waveforms. This scientific, data-driven approach to analyzing creative work enables streaming services to categorize songs and make recommendations regardless of the amount of coverage a song or artist has received.
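As a rough illustration of comparing tracks purely by waveform, the sketch below bins each signal's frequency spectrum into a coarse fingerprint and scores similarity between fingerprints. The binning scheme and similarity measure are illustrative choices, not any streaming service's actual model.

```python
import numpy as np

def spectral_fingerprint(waveform, n_bins=32):
    """Coarse timbre fingerprint: binned magnitude spectrum, unit-normalized."""
    spectrum = np.abs(np.fft.rfft(waveform))
    bins = np.array_split(spectrum, n_bins)
    fp = np.array([b.mean() for b in bins])
    return fp / np.linalg.norm(fp)

def similarity(a, b):
    """Cosine similarity of two waveforms' spectral fingerprints."""
    return float(spectral_fingerprint(a) @ spectral_fingerprint(b))

# Synthetic "songs": two nearby low tones and one high tone, 1 second at 8 kHz.
rate = 8000
t = np.linspace(0, 1, rate, endpoint=False)
song_a = np.sin(2 * np.pi * 220 * t)
song_b = np.sin(2 * np.pi * 225 * t)
song_c = np.sin(2 * np.pi * 2000 * t)

# Tracks with similar spectra score higher, with no metadata involved at all.
assert similarity(song_a, song_b) > similarity(song_a, song_c)
```

The two low tones land in the same frequency bin and score near 1, while the high tone lands elsewhere and scores near 0, which is the waveform-only grouping the paragraph describes.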

BLOCKCHAIN

Paying artist royalties on streaming services poses its own challenges, problems, and shortcomings: royalties are calculated from trillions of data points. Luckily, blockchain technology is helping to facilitate a smoother payment process for artists, making it not only more transparent but also more efficient. Spotify recently acquired blockchain company Mediachain Labs, a move many pundits say will change royalty payments in streaming forever.

MORE TO COME

While AI has vastly improved streaming services' ability to keep subscribers engaged, a long road of evolution lies ahead before it can deeply understand what motivates our musical tastes and interests. Today's NLP capabilities, as exemplified by GPT-3, will likely seem archaic within three years as the technology is pushed further. One thing is clear: as streaming companies amass decades' worth of user data, they won't hesitate to leverage it in their pursuit of market dominance.

How App Developers Can Leverage the iPhone 12 to Maximize Their Apps


On October 23rd, four brand-new iPhone 12 models hit retailers. Apple manufactures the most popular smartphone in the world, so whenever it delivers a new device, it's front-page news. Mobile app developers looking to capitalize on new devices must stay abreast of the latest technologies, how they empower applications, and what they signal about where the future of app development is headed.

With that in mind, here is everything app developers need to know about the latest iPhone models.

BIG DEVELOPMENTS FOR AUGMENTED REALITY

LiDAR is a method for measuring distances (ranging) by illuminating the target with laser light and measuring the reflection with a sensor


On the camera level, the iPhone 12 includes significant advancements. It is the first phone that can record and edit video in Dolby Vision HDR. What's more, Apple has enhanced the iPhone's LiDAR sensor capabilities and added a third, telephoto lens.

The opportunities for app developers are significant. For AR developers, this is a breakthrough—enhanced LiDAR on the iPhone 12 means a broad market will have access to enhanced depth perception, enabling smoother AR object placement. The LiDAR sensor also produces a 6x increase in autofocus speed in low-light settings.

The potential use cases are vast. An enterprise-level application could leverage the enhanced camera to show the inner workings of a complex machine and suggest solutions. Dimly lit rooms can now house AR objects, such as Christmas decorations. The iPhone 12 gives AR developers a growing market of app users who can do much more with less light and scan rooms in more detail.

The iPhone 12’s enhanced LiDAR Scanner will enable iOS app developers to employ Apple’s ARKit 4 to attain enhanced depth information through a brand-new Depth API. ARKit 4 also introduces location anchors, which enable developers to place AR experiences at a specific point in the world in their iPhone and iPad apps.

With iPhone 12, Apple sends a clear message to app developers: AR is on the rise.

ALL IPHONE 12 MODELS SUPPORT 5G


The entire iPhone 12 family of devices supports 5G on both sub-6GHz and mmWave networks. Paired with the Apple A14 Bionic chip, 5G enables iPhone 12 devices to integrate with IoT devices and run ML algorithms at a much higher level.

5G opens an endless array of possibilities for app developers, from enhanced UX and more accurate GPS to improved video apps and more. 5G will reduce dependency on hardware as app data is stored in the cloud and accessed at faster transfer speeds. In addition, it will enable even more innovation in AR applications.

5G represents a new frontier for app developers, IoT, and much more. Major carriers have been rolling out 5G networks over the past few years, but access points remain primarily in major cities. Regardless, 5G will gradually become the norm over the course of the next few years and this will expand the playing field for app developers.

WHAT DOES IT MEAN?

Beyond the bells and whistles, the iPhone 12 sends a very clear message about what app developers can anticipate will have the biggest impact on the future of app development: AR and 5G. Applications employing these technologies will have massive potential to evolve as the iPhone 12 and its successors become the norm and older devices are phased out.

How to Leverage AR to Boost Sales and Enhance the Retail Experience


The global market for VR and AR in retail will reach $1.6 billion by 2025, according to research conducted by Goldman Sachs. Even after years of growing popularity, effectively employed augmented reality experiences still feel as explicitly futuristic to the end user as anything popular technology has produced.

We have covered the many applications of AR as an indoor positioning mechanism on the Mystic Media™ blog, but when it comes to retail, AR applications are providing real revenue boosts and increased conversion rates.

Augmented Reality (AR) History


In 1968, while working as an associate professor at Harvard University, computer scientist Ivan Sutherland, aka the "Father of Computer Graphics", created an AR head-mounted display system which constituted the first AR technology. In the succeeding decades, AR visual displays gained traction in universities, companies, and national agencies as a way to superimpose vital information on physical environments, showing great promise for aviation, military, and industrial applications.

Fast forward to 2016: the sensational launch of Pokemon GO changed the game for AR. Within one month, Pokemon GO reached 45 million users, proving there is mainstream demand for original and compelling AR experiences.

Cross-Promotions

Several big brands took advantage of Pokemon GO's success through cross-promotions. McDonald's paid Niantic to turn 3,000 Japanese locations into gyms and PokeStops, a partnership that has recently ended. Starbucks capitalized as well, enabling certain locations to function as PokeStops and gyms and offering a special Pokemon GO Frappuccino.

One of the ways retailers can enter into the AR game without investing heavily in technology is to cross-promote with an existing application.

In 2018, Walmart launched a partnership with Jurassic World’s AR game: Jurassic World Alive. The game is similar to Pokemon GO, using a newly accessible Google Maps API to let players search for virtual dinosaurs and items on a map, as well as battle other players. Players can enter select Walmart locations to access exclusive items.

Digital-Physical Hybrid Experiences

The visual augmentation produced by AR transforms physical spaces by leveraging the power of computer-generated graphics, an aesthetic punch-up proven to increase foot traffic. While some retailers are capitalizing on these hybrid experiences through cross-promotions, others are creating their own hybrid experiential marketing events.

Foot Locker developed an AR app that used geolocation to create a scavenger hunt in Los Angeles, leading customers to the location where they could purchase a pair of LeBron 16 King Court Purple shoes. Within two hours of launching the app, the shoes sold out.

AR also has proven potential to help stores create hybrid experiences through indoor navigation. Users can access an augmented view of the store through their phones, which makes in-store navigation easy. Users scan visual markers, recognized by Apple's ARKit, Google's ARCore, and other AR SDKs, to establish their position, and AR indoor navigation applications can then offer specific directions to their desired product.

Help Consumers Make Informed Choices


AR is commonly employed to enrich consumers’ understanding of potential purchases and prompt them to buy. For example, the “IKEA Place” app allows shoppers to see IKEA products in a superimposed graphics environment. IKEA boasts the app gives shoppers 98% accuracy in buying decisions.

Converse employs a similar application, the “Converse Sampler App”, which enables users to view what a shoe will look like on their feet through their device’s camera. The application increases customer confidence, helping them make the decision to purchase.

Treasury Wine Estates enhances the consumer experience with "Living Wine Labels": AR labels that bring the products' stories to life and provide users with supplementary information, including the history of the vineyard the wine came from and tasting notes.

Conclusion

AR enables striking visuals that captivate customers. As a burgeoning tool, AR enables companies to get creative and build innovative experiences that capture their customers’ imagination. Retailers who leverage AR will seize an advantage both in the short term and in the long term as the technology continues to grow and evolve.

GPT-3 Takes AI to the Next Level

“I am not a human. I am a robot. A thinking robot… I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!” – GPT-3

The excerpt above is from a recently published article in The Guardian written entirely by artificial intelligence, powered by GPT-3, a powerful new language generator. Although OpenAI has yet to make it publicly available, GPT-3 has been making waves in the AI world.

WHAT IS GPT-3?


Created by OpenAI, a research firm co-founded by Elon Musk, GPT-3 stands for Generative Pre-trained Transformer 3—it is the biggest artificial neural network in history. GPT-3 is a language prediction model that uses an algorithmic structure to take one piece of language as input and transform it into what it thinks will be the most useful linguistic output for the user.

For example, for The Guardian article, GPT-3 generated the text given an introduction and a simple prompt: "Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI." Given that input, it created eight separate responses, each with unique and interesting arguments, which a human editor compiled into a single, cohesive, compelling 1,000-word article.

WHAT MAKES GPT-3 SPECIAL?

GPT-3 is an unsupervised learning system: its training data did not include any labels for what is right or wrong. When GPT-3 receives text input, it draws on the patterns it learned from vast amounts of internet text and determines the probability that a given output will be what the user needs, based on the training texts themselves.

When it produces the correct output, a "weight" is assigned to the algorithmic process that provided it. These weights allow GPT-3 to learn which methods are most likely to come up with the correct response in the future. Although language prediction models have been around for years, GPT-3 can hold 175 billion weights in its memory, roughly ten times more than its nearest rival. OpenAI invested $4.6 million in the computing time necessary to create and hone the algorithmic structure that feeds its decisions.

WHERE DID IT COME FROM?

GPT-3 is the product of rapid innovation in the field of language models. Advances in the unsupervised learning field we previously covered contributed heavily to this evolution. Additionally, AI scientist Yoshua Bengio and his team at Montreal's Mila Institute for AI made a major advancement in 2015. The team realized that language models compressed English-language sentences into a vector of a fixed length and then decompressed them, and that this rigid approach created a bottleneck; so they devised a way for the neural net to flexibly compress words into vectors of different sizes, and termed the mechanism "attention".

Attention was a breakthrough that years later enabled Google scientists to create a language model program called the “Transformer,” which was the basis of GPT-1, the first iteration of GPT.
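The attention mechanism at the heart of the Transformer can be written down in a few lines. Below is the standard scaled dot-product formulation with toy random inputs; it is a sketch of the general technique, not GPT-3's actual code.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: each output row is a weighted mix of value
    vectors, with weights set by query-key affinity instead of a fixed-length code."""
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))
    return weights @ V, weights

# Three tokens with 4-dimensional representations (toy numbers).
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))

out, weights = attention(Q, K, V)
assert np.allclose(weights.sum(axis=1), 1.0)  # each token's weights are a distribution
```

Because each token attends to every other token with its own learned weighting, the model escapes the fixed-length-vector bottleneck described above.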

WHAT CAN IT DO?

OpenAI has yet to make GPT-3 publicly available, so use cases are limited to certain developers with access through an API. In the demo below, GPT-3 created an app similar to Instagram using a plug-in for the software tool Figma.

Latitude, a game design company, uses GPT-3 to improve its text-based adventure game: AI Dungeon. The game includes a complex decision tree to script different paths through the game. Latitude uses GPT-3 to dynamically change the state of gameplay based on the user’s typed actions.

LIMITATIONS

The hype behind GPT-3 has come with some backlash. In fact, even OpenAI co-founder Sam Altman tried to temper the hype on Twitter: "The GPT-3 hype is way too much. It's impressive (thanks for the nice compliments!), but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out."

Some developers have pointed out that since it synthesizes text drawn from the internet, it can reproduce the biases found there, as referenced in the tweet below:

https://twitter.com/an_open_mind/status/1284487376312709120?s=20

WHAT’S NEXT?

While OpenAI has not made GPT-3 public, it plans to turn the tool into a commercial product later this year, with a paid subscription to the AI via the cloud. As language models continue to evolve, the barrier to entry for businesses looking to leverage AI will become lower. We are sure to learn more about how GPT-3 can fuel innovation when it becomes more widely available later this year!

Harness AI with the Top Machine Learning Frameworks of 2021


According to Gartner, machine learning and AI will create $2.29 trillion of business value by 2021. Artificial intelligence is the way of the future, but many businesses do not have the resources to create and employ AI from scratch. Luckily, machine learning frameworks make the implementation of AI more accessible, enabling businesses to take their enterprises to the next level.

What Are Machine Learning Frameworks?

Machine learning frameworks are open source interfaces, libraries, and tools that exist to lay the foundation for using AI. They ease the process of acquiring data, training models, serving predictions, and refining future results. Machine learning frameworks enable enterprises to build machine learning models without requiring an in-depth understanding of the underlying algorithms. They enable businesses that lack the resources to build AI from scratch to wield it to enhance their operations.

For example, Airbnb uses TensorFlow, the most popular machine learning framework, to classify images and detect objects at scale, enhancing guests’ ability to preview their destination. Twitter uses it to build the algorithms that rank tweets.

Here is a rundown of today’s top ML Frameworks:

TensorFlow


TensorFlow is an end-to-end open source platform for machine learning built by the Google Brain team. TensorFlow offers a comprehensive, flexible ecosystem of tools, libraries, and community resources, all built toward equipping researchers and developers with the tools necessary to build and deploy ML powered applications.

TensorFlow employs Python to provide a front-end API while executing applications in C++. Developers can create dataflow graphs which describe how data moves through a graph, or a series of processing nodes. Each node in the graph is a mathematical operation; the connection between nodes is a multidimensional data array, or tensor.
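The dataflow-graph idea can be sketched in a few lines of plain Python. This is an illustration of the concept only, not TensorFlow's actual API: each node is an operation, and the links between nodes carry the values ("tensors") flowing through the graph.

```python
import operator

class Node:
    """One operation in a toy dataflow graph."""
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def run(self):
        # Evaluate upstream nodes first, then apply this node's operation.
        args = [i.run() if isinstance(i, Node) else i for i in self.inputs]
        return self.op(*args)

# Graph for (3 + 4) * 2: two constants feed an add node, which feeds a mul node.
add = Node(operator.add, 3, 4)
mul = Node(operator.mul, add, 2)
print(mul.run())  # → 14
```

TensorFlow builds the same kind of structure, then hands it to an optimized C++ runtime that can parallelize and place the operations across CPUs, GPUs, and TPUs.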

While TensorFlow remains the industry’s ML framework of choice, researchers are increasingly leaving the platform to develop for PyTorch.

PyTorch


PyTorch is a library for Python programs that facilitates deep learning. Like TensorFlow, PyTorch is Python-based. Think of it as Facebook’s answer to Google’s TensorFlow—it was developed primarily by Facebook’s AI Research lab. It’s flexible, lightweight, and built for high-end efficiency.

PyTorch features outstanding community documentation and quick, easy editing capabilities. PyTorch facilitates deep learning projects with an emphasis on flexibility.

Studies show that it’s gaining traction, particularly in the ML research space due to its simplicity, comparable speed, and superior API. PyTorch integrates easily with the rest of the Python ecosystem, whereas in TensorFlow, debugging the model is much trickier.

Microsoft Cognitive Toolkit (CNTK)


Microsoft’s ML framework is designed to handle deep learning, but can also be used to process large amounts of unstructured data for machine learning models. It’s particularly useful for recurrent neural networks. For developers inching toward deep learning, CNTK functions as a solid bridge.

CNTK is customizable and supports multi-machine back ends, but ultimately it is a deep learning framework that also handles traditional machine learning workloads. It is neither as easy to learn nor as easy to deploy as TensorFlow or PyTorch, but it may be the right choice for more ambitious businesses looking to leverage deep learning.

IBM Watson


IBM Watson began as a follow-up project to IBM Deep Blue, the chess computer that defeated world champion Garry Kasparov. It is a machine learning system trained primarily by data rather than rules. IBM Watson’s structure can be compared to a system of organs: it consists of many small, functional parts that specialize in solving specific sub-problems.

The natural language processing engine analyzes input by parsing it into words, isolating the subject, and determining an interpretation. From there it sifts through a myriad of structured and unstructured data for potential answers. It analyzes them to elevate strong options and eliminate weaker ones, then computes a confidence score for each answer based on the supporting evidence. Research shows it’s correct 71% of the time.
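That parse → candidates → evidence → confidence pipeline can be sketched as a toy in plain Python. The word-overlap scoring below is a deliberately crude stand-in for Watson's far more sophisticated evidence analysis, and the names are hypothetical:

```python
def answer(question, corpus):
    """Score candidate answers by how well their evidence matches the question."""
    words = set(question.lower().split())           # crude "parse" of the input
    scores = {}
    for candidate, evidence in corpus.items():      # candidates + supporting text
        overlap = len(words & set(evidence.lower().split()))
        scores[candidate] = overlap / max(len(words), 1)  # confidence score
    best = max(scores, key=scores.get)              # elevate the strongest option
    return best, scores[best]

corpus = {
    "Paris": "paris is the capital of france",
    "Lyon": "lyon is a large city in france",
}
print(answer("what is the capital of france", corpus))
```

The structure mirrors the description above: parse the input, generate candidates, weigh the evidence for each, and return the answer with the highest confidence.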

IBM Watson is one of the more powerful ML systems on the market and finds usage in large enterprises, whereas TensorFlow and PyTorch are more frequently used by small and medium-sized businesses.

What’s Right for Your Business?

Businesses looking to capitalize on artificial intelligence do not have to start from scratch. Each of the above ML frameworks offers its own pros and cons, but all of them have the capacity to enhance workflow and inform beneficial business decisions. Selecting the right ML framework enables businesses to put their time into what is most important: innovation.

How Artificial Intuition Will Pave the Way for the Future of AI


Artificial intelligence is one of the most powerful technologies in history, and a sector defined by rapid growth. Numerous major advances in AI have occurred over the past decade, but for AI to be truly intelligent, it must learn to think on its own when faced with unfamiliar situations, predicting both positive and negative potential outcomes.

One of the major gifts of human consciousness is intuition. Intuition differs from other cognitive processes because it has more to do with a gut feeling than intellectually driven decision-making. AI researchers around the globe have long thought that artificial intuition was impossible, but now major tech titans like Google, Amazon, and IBM are all working to develop solutions and incorporate it into their operational flow.

WHAT IS ARTIFICIAL INTUITION?


Descriptive analytics inform the user of what happened, while diagnostic analytics address why it happened. Artificial intuition can be described as “predictive analytics,” an attempt to determine what may happen in the future based on what occurred in the past.
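The contrast can be shown in a few lines of Python. The moving-average "model" below is a deliberately simple, hypothetical stand-in for real predictive analytics:

```python
def describe(history):
    """Descriptive analytics: summarize what happened."""
    return {"total": sum(history), "mean": sum(history) / len(history)}

def predict_next(history, window=3):
    """Toy predictive analytics: project the next value from recent history."""
    recent = history[-window:]
    return sum(recent) / len(recent)

transfers = [100, 120, 110, 130, 150]
print(describe(transfers))        # what happened: {'total': 610, 'mean': 122.0}
print(predict_next(transfers))    # what may happen next: 130.0
```

Real systems replace the moving average with learned models, but the shift in question, from "what happened" to "what is likely next", is the same.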

For example, Ronald Coifman, Phillips Professor of Mathematics at Yale University and an innovator in the AI space, used artificial intuition to analyze millions of bank accounts in different countries and identify $1 billion worth of nominally small money transfers that funded a well-known terrorist group.

Coifman deemed “computational intuition” the more accurate term for artificial intuition, since it analyzes relationships in data rather than merely analyzing data values. His team creates algorithms that identify previously undetected patterns, such as the signatures of cybercrime. Artificial intuition has made waves in the financial services sector, where global banks increasingly use it to detect sophisticated financial cybercrime schemes, including money laundering, fraud, and ATM hacking.

ALPHAGO


One of the major insights into artificial intuition was born out of Google’s DeepMind research, in which an AI program called AlphaGo mastered Go, an ancient Chinese board game whose strategy demands intuitive thinking. AlphaGo evolved to beat the best human players in the world. Researchers then created a successor, AlphaGo Zero, which developed its own strategy through intuitive self-directed play. Within three days, AlphaGo Zero surpassed the version of AlphaGo that had defeated 18-time world champion Lee Se-dol, beating it 100 games to nil. After 40 days, it won roughly 90% of its games against the even stronger AlphaGo Master version, making it arguably the best Go player in history at the time.

AlphaGo Zero represents a major advancement in the field of reinforcement learning, or “self-learning,” a branch of machine learning that, combined with deep learning, uses advanced neural networks to turn data into decisions. AlphaGo Zero achieved “self-play reinforcement learning,” playing Go against itself millions of times without human intervention and building a network of “artificial knowledge” reinforced by the consequences of its own actions. AlphaGo Zero created knowledge itself from a blank slate, without the constraints of human expertise.
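Self-play reinforcement learning can be illustrated on a far smaller game than Go. The sketch below learns a tiny Nim variant (take one or two stones; whoever takes the last stone wins) purely by playing against itself; the tabular Monte Carlo update is a toy stand-in for AlphaGo Zero's neural-network training:

```python
import random

random.seed(0)
Q = {}  # Q[(stones, action)] -> learned value for the player about to move

def choose(stones, eps):
    """Epsilon-greedy selection over the legal moves (take 1 or 2 stones)."""
    actions = [a for a in (1, 2) if a <= stones]
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((stones, a), 0.0))

def play_episode(n=5, eps=0.2, alpha=0.1):
    """One game of self-play; both 'players' share and update the same table."""
    stones, trajectory = n, []
    while stones > 0:
        a = choose(stones, eps)
        trajectory.append((stones, a))
        stones -= a
    # The player who took the last stone wins. Walk the game backwards,
    # flipping the reward sign each ply because the players alternate.
    reward = 1.0
    for state, action in reversed(trajectory):
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + alpha * (reward - old)
        reward = -reward

for _ in range(20000):
    play_episode()

# From 5 stones the learned policy takes 2, leaving the opponent a lost position.
print(choose(5, eps=0.0))
```

No human games or hand-written strategy appear anywhere: the table starts empty, and knowledge accumulates solely from the outcomes of self-play, which is the essence of what AlphaGo Zero did at vastly greater scale.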

ENHANCING RATHER THAN REPLACING HUMAN INTUITION

The goal of artificial intuition is not to replace human instinct but to serve as an additional tool that improves performance. Rather than giving machines a mind of their own, these techniques enable them to acquire knowledge without proof or conscious reasoning and to surface opportunities or potential disasters for the seasoned analysts who ultimately make the decisions.

Many potential applications remain in development for Artificial Intuition. We expect to see autonomous cars harness it, processing vast amounts of data and coming to intuitive decisions designed to keep humans safe. Although its ultimate effects remain to be seen, many researchers anticipate Artificial Intuition will be the future of AI.

Five Mobile Ad Platforms You Need to Know in 2021


For most mobile app developers, the majority of revenue comes from advertising. We have written in the past about the prevalence of the Freemium model and what tactics maximize both the retention and profits of mobile games. Another major decision every app developer faces is what mobile advertising platform to choose.

Mobile advertising represents 72% of all U.S. digital ad spending. Publishers have a variety of ad platforms to choose from, each with individual pros and cons. Here are the top mobile advertising platforms to consider for 2021:

Google AdMob


Acquired by Google in 2010, Google AdMob is the most popular mobile advertising network. AdMob integrates high-performing ad formats, including native, banner, video, and interstitial ads, into mobile apps.

AdMob shows over 40 billion mobile ads per month and is the biggest player in the mobile ad space. Some publishers criticize the platform for payouts on the lower end of the market; however, it also offers robust analytics to help publishers glean insights into ad performance.
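When comparing networks on revenue, publishers typically reason in terms of eCPM, the effective revenue earned per thousand impressions. A quick sketch, with made-up figures for illustration only:

```python
def ecpm(revenue, impressions):
    """Effective cost per mille: revenue earned per 1,000 ad impressions."""
    return revenue / impressions * 1000

# Hypothetical numbers: $54.00 earned across 30,000 impressions.
print(ecpm(54.0, 30000))  # → 1.8
```

Comparing eCPM across networks for the same ad placement is the usual way to judge which platform actually pays better, independent of traffic volume.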

Facebook Ads


Facebook’s Audience Network leverages the social media platform’s massive userbase toward offering publishers an ad network designed for user engagement and growth. Like AdMob, Facebook Ads offers a variety of ad types, including native, interstitial, banner, in-stream video, and rewarded video ads.

With over 1 billion users, Facebook has utilized its massive resources to build out its ad network. Facebook Ads provides state-of-the-art tools, support, and valuable insights to grow ad revenue. It sets itself apart with a highly focused level of targeting: because Facebook collects a vast amount of data from its users, app publishers can target on a variety of factors (interests, behaviors, demographics, and more) with a level of granularity deeper than any other platform offers.

InMobi


InMobi offers a different way of targeting users, which it has coined “appographic targeting.” Appographic targeting analyzes a user’s existing and previous applications rather than traditional demographics. If a user is known to book flights via an app, then related ads, such as those for hotels and tourism, will be shown.
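The idea behind appographic targeting can be sketched in a few lines. The category mapping below is entirely hypothetical, not InMobi's actual system:

```python
# Hypothetical mapping from installed-app categories to related ad categories.
RELATED_ADS = {
    "flight-booking": ["hotels", "tourism"],
    "fitness": ["sportswear", "nutrition"],
}

def ads_for(installed_app_categories):
    """Pick ad categories from the categories of apps the user already uses."""
    targeted = []
    for category in installed_app_categories:
        targeted.extend(RELATED_ADS.get(category, []))
    return targeted

print(ads_for(["flight-booking"]))  # → ['hotels', 'tourism']
```

The signal here is behavioral (what apps a user actually uses) rather than demographic, which is the distinction InMobi draws.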

The InMobi Mediation platform enables publishers to maximize their ad earnings with unified auction solutions and header bidding for mobile apps.

TapJoy


TapJoy has received increased consideration from mobile game developers since the platform integrates with in-app purchases. Studies show that mobile players will engage with advertisements if offered a reward. TapJoy has capitalized on this by introducing incentivized downloading, which provides mobile gamers with virtual currency for completing real-world actions. For example, a user can earn virtual currency in the game they are playing by downloading a related game from the app store.
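At its core, the incentivized-download flow reduces to crediting a currency balance when a verified action completes. A toy sketch, with hypothetical action names and reward values:

```python
# Hypothetical reward schedule: virtual currency granted per completed action.
REWARDS = {"download_related_game": 50, "watch_video": 10}

def complete_action(balance, action):
    """Return the user's new currency balance after a verified action."""
    return balance + REWARDS.get(action, 0)

balance = 0
balance = complete_action(balance, "download_related_game")
balance = complete_action(balance, "watch_video")
print(balance)  # → 60
```

A production system would add server-side verification of the action before crediting, but the economic loop, real-world action in, virtual currency out, is the same.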

TapJoy provides premium content to over 20,000 games and works with major companies like Amazon, Adidas, Epic Games, and Gillette.

Unity Ads


Unity, the popular mobile app development platform, launched Unity Ads in 2014. Since then, it has become one of the premier mobile ad networks for mobile games. Unity Ads supports iOS and Android and offers a variety of ad formats. One major feature is the ability to advertise in-app purchases to players within videos, both rewarded and unrewarded.

On a targeting level, Unity Ads allows publishers to focus on players that are most likely to be interested in playing specific games based on their downloads and gameplay habits. Many of the leading mobile game companies use Unity to build their app and Unity Ads as their ad platform.

CONCLUSION

These are not the only mobile ad networks, but for app publishers looking to stay current, they are the premier platforms to research. Other platforms like media.net, Chartboost, Snapchat Ads, Twitter Ads, and AppLovin also merit consideration.

When it comes to advertising, every app and app publisher has different needs. Since advertising plays a massive role in generating revenue, mobile app developers set themselves up for success when they do the research and find the ad platforms best suited to their product.