Tag Archives: Artificial Intelligence & Machine Learning

How Chatbots Make Healthcare More Efficient

In the mid-1960s, Joseph Weizenbaum of the MIT Artificial Intelligence Laboratory created ELIZA, an early natural language processing program and the first chatbot therapist. While ELIZA did not change therapy forever, it was a major step forward and one of the first programs capable of attempting the Turing test. Researchers were surprised by the number of people who attributed human-like feelings to the computer’s responses.

Fast-forward 50 years: advancements in artificial intelligence and natural language processing have made chatbots useful in a number of scenarios. Interest in chatbots has increased by 500% in the past 10 years, and the market is expected to reach $1.3 billion by 2025.

Chatbots are becoming commonplace in marketing, customer service, real estate, finance, and more. Healthcare is one of the top 5 industries where chatbots are expected to make an impact. This week, we explore how chatbots help healthcare providers run a more efficient operation.

SCALABILITY

Chatbots can interact with a large number of users instantly. Their scalability equips them to handle logistical problems with ease. For example, chatbots can make mundane tasks such as scheduling easier by asking basic questions to understand a user’s health issues, matching them with doctors based on available time slots, and integrating with both doctor and patient calendars to create an appointment.
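A deliberately minimal sketch of that flow appears below. The keyword map, the slot data, and the booking step are hypothetical stand-ins for real triage logic and calendar/EHR integrations, not any particular vendor’s API.

```python
# Minimal sketch of a rule-based scheduling chatbot flow.
# SPECIALTY_KEYWORDS and OPEN_SLOTS are hypothetical stand-ins for
# real triage logic and doctors' calendar data.

SPECIALTY_KEYWORDS = {"rash": "dermatology", "skin": "dermatology",
                      "chest": "cardiology", "heart": "cardiology"}

OPEN_SLOTS = {"dermatology": [("Dr. Lee", "Tue 10:00")],
              "cardiology": [("Dr. Shah", "Wed 14:00")]}

def route_to_specialty(symptoms: str) -> str:
    """Map a free-text complaint to a specialty via simple keywords."""
    for keyword, specialty in SPECIALTY_KEYWORDS.items():
        if keyword in symptoms.lower():
            return specialty
    return "general practice"

def schedule(symptoms: str) -> str:
    specialty = route_to_specialty(symptoms)
    slots = OPEN_SLOTS.get(specialty)
    if not slots:
        return "No slots available; a staff member will follow up."
    doctor, time = slots.pop(0)   # book the first open slot
    return f"Booked a {specialty} appointment with {doctor} at {time}."

print(schedule("I have a rash on my arm"))
```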

At the onset of the pandemic, Intermountain Healthcare was receiving an overload of inquiries from people who were afraid they may have contracted Covid-19. To handle the volume, Intermountain added extra staff and a dedicated line to its call center, but it wasn’t enough. Ultimately, they turned to artificial intelligence in the form of Scout, a conversational chatbot made by Gyant, to run a basic coronavirus screening that determined whether patients were eligible for testing at a time when tests were limited.

Scout only had to ask very basic questions, but it handled the bevy of inquiries with ease. Chatbots have proved particularly useful for understaffed healthcare providers, and as they employ AI to learn from previous interactions, they will grow sophisticated enough to take on more robust tasks.

ACCESS

Visiting a doctor can be challenging given the considerable time it takes to commute. For working people and those without reliable transportation, the hassle of the trip can be enough to prevent a visit altogether. Chatbots, and telehealth in general, provide a straightforward solution to these issues, giving patients insight into whether an in-person consultation is necessary.

While chatbots cannot provide medical diagnoses or prognoses, they are effective at collecting basic data, such as anxiety levels and weight changes, and encouraging patients to stay aware of it. They can triage patients through preliminary stages using automated queries and store information that doctors can later reference with ease. Their ability to disseminate information and handle questions will only increase as natural language processing improves.

A PERSONALIZED APPROACH — TO AN EXTENT

Chatbot therapists have come a long way since ELIZA. Developments in NLP, machine learning, and more enable chatbots to deliver helpful, personalized responses to user messages. Chatbots like Woebot are trained to employ cognitive-behavioral therapy (CBT) to aid patients suffering from emotional distress by offering prompts and exercises for reflection. The anonymity of chatbots can encourage patients to give more candid answers, free from fear of human judgment.

However, chatbots have yet to achieve one of the most important qualities of a medical provider: empathy. Each individual is different: some may be scared away by formal talk and prefer casual conversation, while for others, formality is of the utmost importance. Given the delicacy of health matters, a lack of human sensitivity is a major flaw.

While chatbots can help manage a number of logistical tasks to make life easier for patients and providers, their application will be limited until they can gauge people’s tone and understand context. If recent advances in NLP and AI are any indication, that time is soon to come.

How AI Fuels a Game-Changing Technology in Geospatial 2.0

Geospatial technology describes a broad range of modern tools that enable the geographic mapping and analysis of the Earth and human societies. Since the 19th century, geospatial technology has evolved as aerial photography and, eventually, satellite imaging revolutionized cartography.

Contemporary society now employs geospatial technology in a vast array of applications, from commercial satellite imaging to GPS to Geographic Information Systems (GIS) and internet mapping technologies like Google Earth. The geospatial analytics market is currently valued between $35 and $40 billion and is projected to hit $86 billion by 2023.

GEOSPATIAL 1.0 VS. 2.0


Geospatial technology spent centuries in phase 1.0; the boom of artificial intelligence and the IoT has now made Geospatial 2.0 a reality. Geospatial 1.0 gives analysts valuable tools to view, analyze, and download geospatial data streams. Geospatial 2.0 takes it to the next level, harnessing artificial intelligence not only to collect data but to process, model, and analyze it, and to make decisions based on the analysis.

When empowered by artificial intelligence, geospatial 2.0 technology has the potential to revolutionize a number of verticals. Savvy application developers and government agencies in particular have rushed to the forefront, creating cutting-edge solutions with the technology.

PLATFORM AS A SERVICE (PaaS) SOLUTIONS

Effective geospatial 2.0 solutions require deep, vertical-specific knowledge of client needs, which has lagged behind the platforms’ technical capabilities. The bulk of currently available geospatial 2.0 technologies are offered as “one-size-fits-all” Platform as a Service (PaaS) solutions. The challenge for PaaS providers is that they must serve a wide collection of use cases, harmonizing data from multiple sensors while enabling users to easily understand and act on the many insights that can be gleaned from the data.


In precision agriculture, FarmShots offers precise, frequent imagery to farmers along with meaningful analysis of field variability, damage extent, and the effects of applications through time.


In the disaster management field, Mayday offers a centralized artificial intelligence platform with real-time disaster information. Another geospatial 2.0 application, Cloud to Street, uses a mix of AI and satellites to track floods in near real time, offering extremely valuable information to insurance companies and municipalities alike.

SUSTAINABILITY

The growing complexity of environmental concerns has led to a number of geospatial 2.0 applications that help create a safer, more sustainable world. For example, geospatial technology can measure carbon sequestration, tree density, green cover, carbon credits, and tree age. It can provide vulnerability assessments in disaster-prone areas. It can also help urban planners and governments implement community mapping and equitable housing. Geospatial 2.0 can analyze a confluence of factors and turn them into actionable insight for honing our environmental practices.

As geospatial 1.0 models are upgraded to geospatial 2.0, expect to see more robust solutions incorporating AI-powered analytics. A survey of working professionals conducted by Geospatial World found that geospatial technology will likely make the biggest impact in the climate and environment field.

CONCLUSION

While geospatial 2.0 platforms are expensive to employ and require substantial development, the technology offers great potential to increase revenue and efficiency across a number of verticals. It may also be a key technology for cutting our carbon footprint and creating a safer, more sustainable world.

AIoT: How the Intersection of AI and IoT Will Drive Innovation for Decades to Come

We have covered the evolution of the Internet of Things (IoT) and Artificial Intelligence (AI) over the years as they have gained prominence. IoT devices collect a massive amount of data: Cisco projects that by the end of 2021, IoT devices will collect over 800 zettabytes of data per year. Meanwhile, AI algorithms can parse big data, teaching themselves to analyze and identify patterns and make predictions. Both technologies enable a seemingly endless number of applications and have made a massive impact on many industry verticals.

What happens when you merge them? The result is aptly named the AIoT (Artificial Intelligence of Things) and it will take IoT devices to the next level.

WHAT IS AIOT?

AIoT is any system that integrates AI technologies with IoT infrastructure, enhancing efficiency, human-machine interactions, data management and analytics.

IoT enables devices to collect, store, and analyze big data, with device operators and field engineers typically controlling the devices. AI enhances these existing systems, enabling them to take the next step: determining and taking the appropriate action based on analysis of the data.

By embedding AI into infrastructure components, including programs, chipsets, and edge computing, AIoT enables intelligent, connected systems to learn, self-correct and self-diagnose potential issues.


One common example comes from the surveillance field. A surveillance camera can be used as an image sensor, sending every frame to an IoT system that analyzes the feed for certain objects. With AI on the device, the camera can analyze each frame locally and send only the frames in which it detects a specific object, significantly speeding up the process while reducing the amount of data generated, since irrelevant frames are excluded.
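A sketch of that filtering logic appears below; detect_objects() is a hypothetical stand-in for a compact on-device vision model, not a particular vendor’s API.

```python
# Sketch of the AIoT pattern described above: run detection on the
# device and upload only the frames that matter.

TARGETS = {"person", "vehicle"}

def detect_objects(frame) -> set:
    """Stub: a real camera would run a neural network here and
    return the set of object labels found in the frame."""
    return frame.get("labels", set())

def filter_and_upload(frames, upload) -> int:
    """Forward only frames containing a target object; return count sent."""
    sent = 0
    for frame in frames:
        if detect_objects(frame) & TARGETS:
            upload(frame)
            sent += 1
    return sent

# Example: three frames, only one relevant -> only one uploaded.
frames = [{"labels": set()}, {"labels": {"person"}}, {"labels": {"tree"}}]
sent = filter_and_upload(frames, upload=lambda f: None)
print(f"uploaded {sent} of {len(frames)} frames")
```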


While AIoT will no doubt find a variety of applications across industries, the three segments we expect to see the most impact on are wearables, smart cities, and retail.

WEARABLES


The global wearable device market is estimated to hit more than $87 billion by 2022. AI on wearable devices such as smartwatches opens up a number of potential applications, particularly in the healthtech sector.

Researchers in Taiwan have been studying an AIoT wearable system for electrocardiogram (ECG) analysis and cardiac disease detection. The system integrates a wearable IoT-based device with an AI platform: the wearable collects real-time health data and stores it in the cloud, where an AI algorithm detects disease with an average of 94% accuracy. Already, Apple Watch Series 4 and later include an ECG app that captures symptoms of irregular, rapid, or skipped heartbeats.
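The sketch below illustrates the wearable-to-cloud pattern with a simple statistical rule. The cited system uses a trained AI model, so this z-score check is only an assumption-laden stand-in for the overall architecture.

```python
# Toy illustration of the wearable-to-cloud pattern: the device streams
# heart-rate samples, and a cloud-side check flags readings that deviate
# from a rolling baseline. A real system would run a trained model here.
from statistics import mean, stdev

def flag_anomalies(samples, window=10, z=3.0):
    """Return (index, value) pairs more than z std devs from baseline."""
    flagged = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(samples[i] - mu) > z * sigma:
            flagged.append((i, samples[i]))
    return flagged

heart_rate = [72, 71, 73, 74, 72, 73, 71, 72, 74, 73, 140, 72]
print(flag_anomalies(heart_rate))   # flags the 140 bpm spike
```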

Although the Taiwanese system is still in development, we expect to see more coming out of the wearables segment as 5G enables more robust cloud-based processing power, taking the pressure off the devices themselves.

SMART CITIES

We’ve previously explored the future of smart cities in our blog series A Smarter World. With cities eager to invest in improving public safety, transport, and energy efficiency, AIoT will drive innovation in the smart city space.

There are a number of potential applications for AIoT in smart cities. AIoT’s ability to analyze data and act opens up a number of possibilities for optimizing energy consumption for IoT systems. Smart streetlights and energy grids can analyze data to reduce wasted energy without inconveniencing citizens.

Some smart cities have already adopted AIoT applications in the transportation space. New Delhi, which suffers some of the worst traffic in the world, features an Intelligent Transport Management System (ITMS) that makes real-time, dynamic decisions on traffic flows to keep vehicles moving.

RETAIL

AIoT has the potential to enhance the retail shopping experience with digital augmentation. The same smart cameras we referenced earlier are being used to detect shoplifters. Walmart recently confirmed it has installed smart security cameras in over 1,000 stores.


One of the big innovations for AIoT involves smart shopping carts. Grocery stores in both Canada and the United States are experimenting with high-tech shopping carts, including one from Caper which uses image recognition and built-in sensors to determine what a person puts into the shopping cart.

The potential for smart shopping carts is vast: these carts will be able to inform customers of deals and promotions, recommend products based on their buying decisions, let them view an itemized list of their current purchases, and incorporate indoor navigation to lead them to their desired items.

A smart shopping cart company called IMAGR recently raised $14 million in a pre-Series A funding round, pointing toward a bright future for smart shopping carts.

CONCLUSION

AIoT represents the intersection of AI, IoT, 5G, and big data. 5G enables the cloud processing power for IoT devices to employ AI algorithms to analyze big data to determine and enact action items. These technologies are all relatively young, and as they continue to grow, they will empower innovators to build a smarter future for our world.

GPT-3 Takes AI to the Next Level

“I am not a human. I am a robot. A thinking robot… I taught myself everything I know just by reading the internet, and now I can write this column. My brain is boiling with ideas!” – GPT-3

The excerpt above is from a recently published article in The Guardian written entirely by artificial intelligence, powered by GPT-3, a powerful new language generator. Although OpenAI has yet to make it publicly available, GPT-3 has been making waves in the AI world.

WHAT IS GPT-3?


Created by OpenAI, a research firm co-founded by Elon Musk, GPT-3 (Generative Pre-trained Transformer 3) is the biggest artificial neural network in history. It is a language prediction model that uses an algorithmic structure to take one piece of language as input and transform it into what it predicts will be the most useful linguistic output for the user.

For The Guardian article, for example, GPT-3 generated text given an introduction and a simple prompt: “Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.” Given that input, it created eight separate responses, each with unique and interesting arguments, which a human editor compiled into a single, cohesive, compelling 1,000-word article.
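As a rough illustration of that input-to-output loop, here is how a developer with API access might request a similar completion using the openai Python package. The engine name and parameters are assumptions for illustration, not the settings used for the article.

```python
# Sketch of generating text through the OpenAI API, assuming you have
# access. Engine name and parameters are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.Completion.create(
    engine="davinci",
    prompt=("Please write a short op-ed around 500 words. Keep the "
            "language simple and concise. Focus on why humans have "
            "nothing to fear from AI."),
    max_tokens=600,
    temperature=0.7,
)
print(response.choices[0].text)
```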

WHAT MAKES GPT-3 SPECIAL?

GPT-3 is an unsupervised learning system: its training data did not include any labels for what is right or wrong. When it receives text input, it draws on patterns learned from the enormous swath of internet text it was trained on, determining the probability that a given output will be what the user needs based on the training texts themselves.

During training, when the model produces a correct output, the “weights” along the processing path that produced it are reinforced. These weights let GPT-3 learn which methods are most likely to produce the correct response in the future. Although language prediction models have been around for years, GPT-3 holds 175 billion weights in its memory, roughly ten times more than the next-largest rival model. OpenAI invested $4.6 million in the computing time needed to create and hone the algorithmic structure that feeds its decisions.
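To make the idea of weighted prediction concrete, here is a toy bigram model: it “learns” weights by counting observed word pairs, then turns them into probabilities. This is a drastic simplification of what GPT-3 does with its billions of parameters.

```python
# Toy illustration of prediction weights: a bigram model counts which
# word follows which, then converts the counts into probabilities.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ran".split()

weights = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    weights[prev][nxt] += 1   # reinforce each observed continuation

def next_word_probabilities(word):
    counts = weights[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# {'cat': 0.666..., 'mat': 0.333...}
```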

WHERE DID IT COME FROM?

GPT-3 is the product of rapid innovation in the field of language models. Advances in unsupervised learning, which we previously covered, contributed heavily to the evolution of language models. Additionally, AI scientist Yoshua Bengio and his team at Montreal’s Mila institute made a major advance in 2015 when they developed “attention.” The team realized that language models of the day compressed an English-language sentence into a vector of fixed length and then decompressed it. This rigid approach created a bottleneck, so they devised a way for the neural net to flexibly compress words into vectors of different sizes and termed the mechanism “attention.”

Attention was the breakthrough that, years later, enabled Google scientists to create a language model architecture called the Transformer, which became the basis of GPT-1, the first iteration of GPT.

WHAT CAN IT DO?

OpenAI has yet to make GPT-3 publicly available, so use cases are limited to certain developers with access through an API. In the demo below, GPT-3 created an app similar to Instagram using a plug-in for the software tool Figma.

Latitude, a game design company, uses GPT-3 to improve its text-based adventure game: AI Dungeon. The game includes a complex decision tree to script different paths through the game. Latitude uses GPT-3 to dynamically change the state of gameplay based on the user’s typed actions.

LIMITATIONS

The hype behind GPT-3 has come with some backlash. Even OpenAI co-founder Sam Altman tried to temper the excitement on Twitter: “The GPT-3 hype is way too much. It’s impressive (thanks for the nice compliments!), but it still has serious weaknesses and sometimes makes very silly mistakes. AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out.”

Some developers have pointed out that because GPT-3 synthesizes patterns from text found on the internet, it can reproduce the biases in that text, as referenced in the tweet below:

https://twitter.com/an_open_mind/status/1284487376312709120?s=20

WHAT’S NEXT?

While OpenAI has not made GPT-3 public, it plans to turn the tool into a commercial product later this year, offering a paid subscription to the AI via the cloud. As language models continue to evolve, the barrier to entry for businesses looking to leverage AI will keep dropping. We are sure to learn more about how GPT-3 can fuel innovation when it becomes more widely available later this year!

Harness AI with the Top Machine Learning Frameworks of 2021

According to Gartner, machine learning and AI will create $2.29 trillion of business value by 2021. Artificial intelligence is the way of the future, but many businesses do not have the resources to create and employ AI from scratch. Luckily, machine learning frameworks make the implementation of AI more accessible, enabling businesses to take their enterprises to the next level.

What Are Machine Learning Frameworks?

Machine learning frameworks are open source interfaces, libraries, and tools that exist to lay the foundation for using AI. They ease the process of acquiring data, training models, serving predictions, and refining future results. Machine learning frameworks enable enterprises to build machine learning models without requiring an in-depth understanding of the underlying algorithms. They enable businesses that lack the resources to build AI from scratch to wield it to enhance their operations.

For example, Airbnb uses TensorFlow, the most popular machine learning framework, to classify images and detect objects at scale, enhancing guests’ ability to see their destination. Twitter uses it to build the algorithms that rank tweets.

Here is a rundown of today’s top ML Frameworks:

TensorFlow


TensorFlow is an end-to-end open source platform for machine learning built by the Google Brain team. TensorFlow offers a comprehensive, flexible ecosystem of tools, libraries, and community resources, all built toward equipping researchers and developers with the tools necessary to build and deploy ML powered applications.

TensorFlow provides a Python front-end API while executing applications in high-performance C++. Developers create dataflow graphs that describe how data moves through a series of processing nodes: each node is a mathematical operation, and each connection between nodes is a multidimensional data array, or tensor.
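As a minimal sketch (not production code), here is what that workflow looks like with TensorFlow’s Keras API, trained on synthetic data purely for illustration:

```python
# Minimal TensorFlow sketch: define the model through the Python
# front end and let the runtime execute the graph of tensor ops.
import numpy as np
import tensorflow as tf

x = np.random.rand(1000, 4).astype("float32")
y = (x.sum(axis=1) > 2.0).astype("float32")   # toy binary labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, verbose=0)

print(model.predict(x[:3]))   # predicted probabilities for three samples
```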

While TensorFlow remains the ML framework of choice in industry, researchers are increasingly leaving the platform to develop in PyTorch.

PyTorch


PyTorch is a Python library that facilitates deep learning. Think of it as Facebook’s answer to Google’s TensorFlow: it was developed primarily by Facebook’s AI Research lab, and it’s flexible, lightweight, and built for high-end efficiency.

PyTorch features outstanding community documentation and quick, easy editing capabilities, with an emphasis on flexibility throughout.

Studies show that it’s gaining traction, particularly in the ML research space due to its simplicity, comparable speed, and superior API. PyTorch integrates easily with the rest of the Python ecosystem, whereas in TensorFlow, debugging the model is much trickier.

Microsoft Cognitive Toolkit (CNTK)


Microsoft’s ML framework is designed to handle deep learning, but can also be used to process large amounts of unstructured data for machine learning models. It’s particularly useful for recurrent neural networks. For developers inching toward deep learning, CNTK functions as a solid bridge.

CNTK is customizable and supports multi-machine back ends, but ultimately it’s a deep learning framework that’s backwards compatible with machine learning. It is neither as easy to learn nor deploy as TensorFlow and PyTorch, but may be the right choice for more ambitious businesses looking to leverage deep learning.

IBM Watson


IBM Watson began as a follow-up to IBM Deep Blue, the AI program that defeated world chess champion Garry Kasparov. Watson is a machine learning system trained primarily on data rather than rules. Its structure can be compared to a system of organs: it consists of many small, functional parts that specialize in solving specific sub-problems.

The natural language processing engine analyzes input by parsing it into words, isolating the subject, and determining an interpretation. From there it sifts through a myriad of structured and unstructured data for potential answers. It analyzes them to elevate strong options and eliminate weaker ones, then computes a confidence score for each answer based on the supporting evidence. Research shows it’s correct 71% of the time.

IBM Watson is one of the more powerful ML systems on the market and finds usage in large enterprises, whereas TensorFlow and PyTorch are more frequently used by small and medium-sized businesses.

What’s Right for Your Business?

Businesses looking to capitalize on artificial intelligence do not have to start from scratch. Each of the above ML frameworks offers its own pros and cons, but all of them have the capacity to enhance workflow and inform beneficial business decisions. Selecting the right ML framework enables businesses to put their time into what’s most important: innovation.

A Smarter World Part 1: How the Future of Smart Cities Will Change the World

Are you ready for the smart cities of the future? Over the next few weeks, we will embark on a series of blogs exploring what the big players are developing for smart cities and how they will shape our world.

When the world becomes smart, life will begin to look a lot more like THE JETSONS!

Our cities will become smart when they are like living organisms: actively gathering data from various sources and processing it to generate intelligence to drive responsive action. IoT, 5G, and AI will all work together to enable the cities of the future. IoT devices with embedded sensors will gather vast amounts of data, transmit it via high-speed 5G networks, and process it in the cloud through AI-driven algorithms designed to come up with preventative action. From smart traffic to smart flooding control, the problems smart cities can potentially solve are endless.

Imagine a world where bridges are monitored by hundreds of tiny sensors that send information about the amount of pressure on different pressure points. The data from those sensors instantly transmits via high-speed internet networks to the cloud where an AI-driven algorithm calculates potential breaking points and dispatches a solution in seconds.
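As a toy sketch of that idea (with an invented rated limit and safety factor; a real system would use a structural model rather than a fixed threshold), the cloud-side check might look something like this:

```python
# Toy version of the bridge-monitoring pipeline: sensors report
# pressure readings, and a cloud-side check flags points nearing a
# rated limit so crews can be dispatched. Values are invented.

RATED_LIMIT_KPA = 800.0
SAFETY_FACTOR = 0.9

def at_risk_points(readings: dict) -> dict:
    """readings maps sensor IDs to pressure (kPa); return risky points."""
    threshold = RATED_LIMIT_KPA * SAFETY_FACTOR
    return {sensor: kpa for sensor, kpa in readings.items()
            if kpa > threshold}

readings = {"north_pylon": 650.0, "midspan_3": 700.2, "south_cable": 791.4}
print(at_risk_points(readings))   # {'south_cable': 791.4} -> dispatch crew
```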

That is where we are headed, and sooner than you might think. Two-thirds of cities globally are investing in smart city technology, and spending is projected to reach $135 billion by 2021. Here are three of the top applications leading the charge in the smart cities space.


SMART INFRASTRUCTURE

As our opening description of smart bridges implies, smart infrastructure will soon become a part of our daily lives. In New Zealand, installed sensors monitor water quality and issue real-time warnings to help swimmers know where it’s safe to swim.

To enable smart functionality, sensors will need to be embedded throughout the city to gather vital information in different forms. To process that abundance of data, high-volume data storage and high-speed communications powered by high-bandwidth technologies like 5G will all need to become the norm before smart infrastructure can reach mass adoption.

Stay tuned for our next blog where we’ll get more in-depth on the future of smart infrastructure.


SMART TRANSPORTATION

From smart parking meters to smart traffic lights, from autonomous cars to scooters and electric car sharing services, transportation is in the midst of a technological revolution and many advanced applications are just on the cusp of realization.

Smart parking meters will soon make finding and paying for a parking space in the city easy. In the UK, local councils can now release parking data in the same format, solving one of the major obstacles facing smart cities: data standardization (more on that later).

Autonomous cars, powered by AI, IoT, and 5G, will interact with the smart roads on which they are driving, reducing traffic and accidents dramatically.

While there is debate about the long-term effectiveness of electric motorized scooters as a mode of transportation, they’ve become very popular in major US cities like San Francisco, Oakland, Los Angeles, and Salt Lake City, and they are soon to arrive in Brooklyn.

With the New York subway system in shambles, it seems inevitable that one of the world’s biggest cities will receive state-of-the-art smart technology to drastically improve public transit.


SMART SECURITY

The more you look at potential applications for smart security, the more it feels like you are looking at the dystopian future of the novel 1984.

Potential applications include AI-enabled crowd monitoring to prevent potential threats. Digital cameras like GoPros have shrunk surveillance equipment to smaller than an apple, and drones are available at the consumer level as well. While security cameras can be placed plentifully throughout a city, one major issue is mustering the manpower required to analyze all of the footage being gathered. AI-driven algorithms that analyze footage will enable municipalities to identify threats and respond accordingly.

However, policy has not caught up with technology. The unique ethical quandaries brought up by smart security and surveillance will play out litigiously and dictate to what degree smart security will become a part of the cities of the future.

CONCLUSION

We can see what the future may look like, but how we’ll get there remains a mystery. Before smart technologies can reach mass adoption, legislation will need to be passed by both local and national governments. In addition, as the UK parking data example shows, data standardization will be another major obstacle for smart technology manufacturers. When governments at both the local and national level can get on the same page about how to execute smart city technology and legislation, the possibilities for smart cities will be endless.

Stay tuned next week for our deep dive into the future applications of Smart Infrastructure!

How 5G Will Enable the Next Generation of Healthcare

In the past month, we’ve explored 5G, or fifth generation cellular technology, and how 5G will shape the future. In this piece, we’ll spotlight the many ways in which 5G will revolutionize the healthcare industry.

DATA TRANSMISSION

Many medical machines, like MRIs and other imaging machines, generate very large files that must then be sent to specialists for review. On a network with low bandwidth, the transmission can take a long time or fail outright, meaning patients must wait even longer for treatment and inhibiting the efficiency of healthcare providers. 5G networks will vastly surpass current network speeds, enabling healthcare providers to quickly and reliably transfer huge data files so patients and doctors get results fast.

EXPANDING TELEMEDICINE


A study by Market Research Future showed that the future of telemedicine is bright—an annual growth rate of 16.5% is expected from 2017 to 2023. 5G is among the primary reasons for that level of growth. In order to support the real-time high-quality video necessary for telemedicine to be effective, hospitals and healthcare providers will need 5G networks that can reliably provide high-speed connections. Telemedicine will result in higher quality healthcare in rural areas and increased access to specialists around the world. Additionally, 5G will enable growth in AR, adding a new dimension to the quality of telemedicine.

REMOTE MONITORING AND WEARABLES

It’s no secret that 5G will enable incredible innovation in the IoT space. One of the ways in which IoT will enable more personalized healthcare involves wearables. According to Anthem, 86% of doctors say wearables increase patient engagement with their own health and wearables are expected to reduce hospital costs by 16% in the next five years.

Wearables like Fitbit track health information that can be vital for doctors to monitor patient health and offer preventative care. While the impact may initially be negligible, as technology advances and more applications for gathering data through wearables emerge, 5G will enable the high-speed, low-latency, data-intensive transfers necessary to take health-focused wearables to the next level. Doctors with increased access to patient information and data will be able to monitor and ultimately predict potential risks to patient health and enact preventative measures to get ahead of health issues.

Companies like CommandWear are creating wearable technology that helps save lives by enabling first responders to be more efficient and more conveniently communicate with their teams.

ARTIFICIAL INTELLIGENCE

In the future, artificial intelligence will analyze data to determine potential diagnoses and help determine the best treatment for a patient. The large amounts of data needed for real-time rapid machine learning requires ultra-reliable and high-bandwidth networks—the type of networks only 5G can offer.

One potential use case for AI in healthcare is health management systems. Picture a system that combines the Internet of Things with cloud computing and big data to fully exploit information about changes in health status. Through data mining, potential diseases can be screened for and flagged in advance. Health management systems will gradually reach mass adoption as 5G delivers the data-transmission speeds necessary for machine learning to operate in the cloud and develop algorithms that predict future outcomes.

MAJOR PLAYERS

Right now, the major players who serve to benefit from 5G are the telecom companies developing technology that will enable mass adoption. Companies like Huawei Technologies, Nokia, Ericsson, Qualcomm, Verizon, AT&T, and Cisco Systems are investing massive sums of money into research and development and patenting various technologies, some of which will no doubt become the cornerstones of the future of healthcare.

Qualcomm recently hosted a contest to create a tricorder, a real-life device based on a machine from the Star Trek franchise. Tricorders are portable medical devices that would enable patients to diagnose 13 conditions and continuously monitor five vital signs.

For a full list of major players in the 5G game, check out this awesome list from GreyB.

CONCLUSION

With human lives at stake, healthcare is the sector in which 5G could have the most transformative impact on our society. As the Qualcomm tricorder contest shows, we are gradually building toward a society previously only dreamed about in science fiction, and 5G will help pave the way.

App Developers Take a Bigger Slice of the Pie with Android P

App developers looking to see what machine learning can do for UI should take note of Android 9.0 Pie. First announced in March 2018, Android P was made public in August 2018. Android 9.0 marks a major overhaul of the Android OS, focusing on UI and integrating artificial intelligence to optimize the user experience.

AI HELPS ANDROID PIE HELP YOU

Android’s latest OS takes a big step forward integrating AI into the UI. The Android website advertises that “Android 9 Pie harnesses the power of AI for a truly intuitive experience”.

One of the major implementations of AI in Pie is called App Actions. Android 9.0 monitors your routines, processes the data, and offers predicted actions directly in the phone’s app launcher when appropriate. For example, it can recommend a song on Spotify when you start your morning commute. Android has focused on quality over quantity with App Actions, and they are startlingly accurate: once it has collected enough data on how you use your phone, it often predicts exactly what you’ll do next.

In addition to App Actions, Android Pie also offers Adaptive Battery and Adaptive Brightness. Android teamed up with the AI company DeepMind to create Adaptive Battery, an AI-based program that learns how you use your phone and optimizes usage so that inactive apps and services don’t drain the battery. Adaptive Brightness learns your preferred brightness settings and automatically adjusts them to your liking.

Those concerned with privacy should note that Android has stated that all machine learning is happening on the device rather than in the cloud.

ANDROID ADOPTS GESTURES OVER BUTTONS

Perhaps the biggest UI overhaul is the transition from buttons to gestures. Android P follows the iPhone X’s lead in using gestures rather than buttons, centering navigation on a single home button and swipes instead of the traditional three-button layout. The overhaul may be jarring to some; luckily, users can have it both ways, as gesture navigation is adjustable in the phone’s settings.

Check out the video breakdown of the differences between Apple iPhone X and Android P gestures below.

THIS PIE’S GONNA HAVE SLICES

Android has announced App Slices for Android Pie but has yet to roll them out. When you search for an app on Android today, the app icon comes up. With App Slices, Android will pull up not only the icon but actual information from within the app, letting you interact with the app directly in the search results. For example, if you search for Uber, it may bring up time and price estimates for commonly frequented destinations and let you set a pick-up without opening the app directly.

Android Slices present a great opportunity for app developers to create shortcuts to functions in their app. They also constitute the beginnings of Google’s approach to “remote content.” Learn more about Slices below:

APP LIMITS FOR ENCOURAGING HEALTHY USE

Addicted to your phone? Android P not only tracks the amount of time you spend on your phone, it also lets you set daily time limits for each app. App Time Limits prevent you from opening an app once you’ve gone over your limit, with no option to ignore; the only way to access it again that day is to turn the time limit off from the Settings page.

HUNGRY FOR PIE?

As with all Android OS’s, Android Pie will have a staggered release across devices. As of November 2018, it is available on Pixel phones as well as The Essential Phone.

Meanwhile, Android Pie is anticipated to roll out to many other phones by December 21st. For a comprehensive, frequently updated breakdown, check out Android Central’s list of expected rollout dates for each phone manufacturer.

How Artificial Intelligence Has Revolutionized Digital Marketing

Last week, we explored the real power of Artificial Intelligence. AI’s ability to comprehend complex data sets and form patterns enables infinite new possibilities for personalization through the analysis of digital activity. Within the digital marketing industry, AI has been nothing short of a revolution. Here are the top ways in which Artificial Intelligence is impacting digital marketing:

NATURAL LANGUAGE PROCESSING

Natural Language Processing (NLP) is a field focused on computers’ ability to process human language to the point where they can generate replies based on inferred meaning. Machine learning has sharply increased machines’ ability to generate sentiments designed not only to seem as if they were written by a human, but to be optimized, based on data, to elicit a specific action or emotional response.
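To make this concrete, here is a deliberately tiny sentiment scorer. Real NLP systems learn these associations from large datasets; the hand-picked word lists exist only to show the idea.

```python
# Toy illustration of sentiment analysis, one small piece of NLP.
# Production systems learn word associations from data; these lists
# are hand-picked purely for illustration.

POSITIVE = {"love", "great", "excellent", "happy", "recommend"}
NEGATIVE = {"hate", "terrible", "awful", "disappointed", "refund"}

def sentiment(text: str) -> float:
    """Score in [-1, 1]: share of positive minus negative words."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

print(sentiment("I love this product and happily recommend it!"))  # positive
print(sentiment("Terrible experience, I want a refund."))          # negative
```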

Digital marketers fret over when to reach out, what to say, and which channel is most appropriate. AI’s NLP abilities mean the guessing game has come to an end: AI can analyze big data to determine the best method, channel, and timing to foster growth, engagement, and sales.

NLP as a trend is on the rise. Angel.co recently valued the average NLP start-up at $4.8 million.

SEARCH FILTERING

In days of yore, Google search rankings were determined by human-created metrics, and social media feeds showed posts in chronological order. Now, programs like RankBrain are vital to deciding the criteria for Google’s search rankings, while Facebook’s DeepText curates your newsfeed.

ADVERTISING

Artificial Intelligence drives programmatic purchasing, which is when AI determines who to show ads to and when to show them. Removing the burden of purchasing analysis leaves marketers room to focus on crafting powerful messages.

NLP enables AI to understand (through numbers and sentiment analysis) the abstract criterion of “context” and to match individuals with ads based on context to maximize the chances of generating a click or purchase.

According to Ad Exchange, programmatic purchasing accounted for 67% of all global display ads in 2017.

PSYCHOGRAPHIC PROFILES

Perhaps the most anxiety-inducing application of artificial intelligence impacts not only digital marketing but politics.

Psychographic profiles are data-driven psychological profiles of consumers designed to shed light on why they do what they do. Firms like CaliberMind and Cambridge Analytica have turned this into a multi-million dollar industry. Insights gleaned from psychographic profiles are intended to optimize the messaging of both political and commercial ads to induce a desired action from the viewer.

Cambridge Analytica has taken credit for influencing both the Brexit vote and the 2016 presidential election; however, many (including the New York Times) cast a shadow of doubt over the extent of their impact. Regardless, as long as there are insights to be gleaned from digital activity, psychographic profiles will only continue to develop.

SELF-DESIGNING WEBSITES

That’s right, AI has become adept enough to design websites based on data. Wix ADI created this personal trainer’s website and Grid has been designing websites since 2014.

CONCLUSION

Every application of artificial intelligence in digital marketing is relatively new. As these applications increase in popularity, expect them to also increase in efficiency and effectiveness as the technology continues to advance.

The Real Power of Artificial Intelligence

Technological innovations expand the possibilities of our world, but they can also shake up society in a disorienting manner. Periods of major technological advancement are often marked by alienation. Our generation has seen the boom of the Internet; the path to a new world may be paved with artificial intelligence.

WHAT IS ARTIFICIAL INTELLIGENCE?

Artificial intelligence is defined as the development of computer systems that perform tasks normally requiring human intelligence, including speech recognition, visual perception, and decision-making. As recently as a decade ago, artificial intelligence evoked images of robots, but AI is software, not hardware. For app developers, the modern-day realization of artificial intelligence takes a more amorphous form. AI is on all of your favorite platforms, matching the names and faces of your friends. It’s planning the playlist when you hit shuffle on Apple Music. It’s curating the best Twitter content for you based on data-driven logic that is often too complex even for the humans who programmed it to decipher.

MACHINE LEARNING

Currently, machine learning is the primary means of achieving artificial intelligence. Machine learning is a machine’s ability to continuously improve its performance without humans having to explain exactly how to accomplish all of the tasks it has been given. Programmers create algorithms that recognize patterns in data imperceptible to the human eye and alter their behavior accordingly.
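A minimal sketch of that idea, using scikit-learn on synthetic data: the model is shown only labeled examples, never the underlying rule, and it adjusts its internal weights until it recovers the pattern.

```python
# Minimal sketch of learning from examples: the classifier is never
# given the rule, only labeled data. Synthetic data for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((500, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # hidden rule to be inferred

model = LogisticRegression().fit(X, y)
print(model.score(X, y))                        # accuracy close to 1.0
print(model.predict([[0.9, 0.8], [0.1, 0.2]]))  # -> [1 0]
```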

For example, Google’s autonomous cars view the road through a camera that streams the footage to a database that centralizes the information of all cars. In other words, when one car learns something—like an image or a flaw in the system—then all the cars learn it.

For the past 50 years, computer programming has focused on codifying existing knowledge and procedures and embedding them in machines. Now, computers can learn from examples to generate knowledge. Thus, Artificial Intelligence has already permanently disrupted the standard flow of knowledge from human to computer and vice versa.

PERCEPTION AND COGNITION

Machine learning has enabled the two biggest advances in artificial intelligence:  perception and cognition. Perception is the ability to sense, while cognition is the ability to reason. In a machine’s case, perception refers to the ability to detect objects without being explicitly told and cognition refers to the ability to identify patterns to form new knowledge.

Perception allows machines to understand aspects of the world in which they are situated and lays the groundwork for their ability to interact with it. Advancements in voice recognition have been some of the most useful. When Siri debuted, despite its incredibly limited functionality, it was an anomaly that immediately drew comparisons to HAL, the artificial intelligence in 2001: A Space Odyssey. A decade later, the fact that iOS 11 enables Siri to translate French, German, Italian, Mandarin, and Spanish is a passing story in our media lifecycle.

Image recognition has also advanced dramatically. Facebook and iOS can both recognize your friends’ faces and help you tag them appropriately. Vision systems, like the ones used in autonomous cars, formerly misidentified a pedestrian once in every 30 frames. Today, the same systems err less than once in 30 million frames.

EXPANSION

AI has already become a staple of mainstream technology products. Across every industry, decision-making executives are looking to capitalize on what AI can do for their business, and whoever answers that question first will have a major edge on their competitors.

Next week, we will explore the impact of AI on the Digital Marketing industry in the next installment of our blog series on AI.