Open your Facebook feed, pick up a newspaper or turn on the news and you’ll likely see something about the dangers of machine learning, the spread of fake news or the threat AI poses to our privacy. Yet these technologies continue to develop, and advances in areas such as automation and machine deception will shape how AI is used over the coming year.
1. New technologies will enable partial automation of tasks
Automation occurs in stages. While full automation might still be a way off, there are many workflows and tasks that lend themselves to partial automation. In fact, McKinsey estimates that “fewer than 5 per cent of occupations can be entirely automated using current technology. However, about 60 per cent of occupations could have 30 per cent or more of their constituent activities automated.”
We have already seen some interesting products and services that rely on computer vision and speech technologies, and we expect to see even more in 2019. Look for additional improvements in language models and robotics that will result in solutions that target text and physical tasks. Rather than waiting for a complete automation model, competition will drive organisations to implement partial automation solutions and the success of those partial automation projects will spur further development.
2. Artificial Intelligence in the enterprise will build upon existing analytic applications
Companies have spent the last few years building processes and infrastructure to unlock disparate data sources and improve their most mission-critical analytics, whether that is business analytics, recommenders and personalisation, forecasting, or anomaly detection and monitoring.
Aside from new systems that use vision and speech technologies, we expect early forays into deep learning and reinforcement learning will be in areas where companies already have data and machine learning in place. For example, companies are infusing their systems for temporal and geospatial data with deep learning, resulting in scalable and more accurate hybrid systems (i.e., systems that combine deep learning with other machine learning methods).
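One common pattern for such hybrid systems can be sketched in a few lines. The example below is illustrative only: a seasonal-naive baseline captures the periodic structure of a hypothetical temporal series, and a learned correction (an ordinary least-squares fit here, standing in for a deep network) absorbs the residuals the baseline misses.

```python
# Minimal sketch of a hybrid model for temporal data. The series, period and
# least-squares correction are illustrative assumptions, not a production recipe.

def seasonal_naive(history, period):
    """Predict the value observed one full season earlier."""
    return history[-period]

def fit_residual_slope(history, period):
    """Fit residual ~ slope * baseline by least squares through the origin."""
    pairs = [(history[t - period], history[t] - history[t - period])
             for t in range(period, len(history))]
    denom = sum(x * x for x, _ in pairs)
    return sum(x * y for x, y in pairs) / denom if denom else 0.0

def hybrid_forecast(history, period):
    """Combine the baseline's structure with the learned correction."""
    baseline = seasonal_naive(history, period)
    return baseline + fit_residual_slope(history, period) * baseline

# A toy series with period 2 and roughly 10 per cent seasonal growth.
series = [10, 20, 11, 22, 12.1, 24.2]
print(hybrid_forecast(series, 2))  # ≈ 13.31
```

The design point is the one in the text: neither component alone suffices, but the combination is both scalable (the baseline is trivial to compute) and more accurate (the learned part corrects systematic error).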
3. UX/UI design will become critical
Many current AI solutions work hand in hand with consumers, human workers, and domain experts. These systems improve the productivity of users and in many cases enable them to perform tasks at incredible scale and accuracy. Proper UX/UI design not only streamlines those tasks but also goes a long way toward getting users to trust and use AI solutions.
4. Hardware will become more specialised for sensing, model training, and model inference
The resurgence in deep learning began around 2011 with record-setting models in speech and computer vision. Today, there is certainly enough scale to justify specialised hardware: Facebook alone makes trillions of predictions per day. Google has also had enough scale to justify producing its own specialised hardware, and has been using tensor processing units (TPUs) in its cloud since last year. Therefore, 2019 should see a broader selection of specialised hardware begin to appear. Numerous companies and startups in China and the US have been working on hardware that targets model building and inference, both in the data centre and on edge devices.
5. Hybrid models will remain important
While deep learning continues to drive a lot of interesting research, most end-to-end solutions are hybrid systems. In 2019, we’ll begin to hear more about the essential role of other components and methods, including model-based methods like Bayesian inference, tree search, evolution, knowledge graphs, simulation platforms, and many more. And we just might begin to see exciting developments in machine learning methods that aren’t based on neural networks.
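To make one of those non-neural methods concrete, Bayesian inference in its simplest conjugate form fits in a few lines. The click-through-rate framing below is an illustrative assumption: with a Beta(a, b) prior on an unknown success rate, observing s successes and f failures gives a Beta(a + s, b + f) posterior.

```python
# Minimal sketch of Bayesian inference via a conjugate Beta-Binomial update.
# The click-through-rate scenario is a hypothetical example.

def beta_binomial_update(a, b, successes, failures):
    """Posterior Beta parameters after observing binomial data."""
    return a + successes, b + failures

def posterior_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Start from a uniform Beta(1, 1) prior; observe 30 clicks in 100 impressions.
a, b = beta_binomial_update(1, 1, 30, 70)
print(posterior_mean(a, b))  # 31/102 ≈ 0.304
```

Unlike a purely point-estimate approach, the posterior carries uncertainty that shrinks as data accumulates, which is one reason such methods keep appearing inside hybrid systems.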
6. Investments will be made into new tools and processes
We are in a highly empirical era for machine learning. Tools for ML development will need to account for the importance of data, experimentation and model search, and model deployment and monitoring. Take just one step of the process: model building. Companies are beginning to look into tools for data lineage, metadata management and analysis, efficient utilisation of compute resources, efficient model search and hyperparameter tuning. In 2019, we can expect many new tools to ease the development and actual deployment of AI and ML in products and services.
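The model-search step mentioned above can be as simple as random hyperparameter search. In the sketch below, `score()` is a stand-in for training and validating a real model on each sampled configuration; its peak at lr = 0.1, depth = 6 is purely illustrative.

```python
import random

# Minimal sketch of random hyperparameter search. In practice, score() would
# train a model and return a validation metric; here it is a toy objective.

def score(lr, depth):
    """Hypothetical validation score, best near lr=0.1 and depth=6."""
    return -((lr - 0.1) ** 2) - 0.01 * (depth - 6) ** 2

def random_search(trials, seed=0):
    """Sample configurations at random and keep the best-scoring one."""
    rng = random.Random(seed)
    best_score, best_cfg = float("-inf"), None
    for _ in range(trials):
        cfg = {"lr": rng.uniform(0.001, 1.0), "depth": rng.randint(2, 12)}
        s = score(cfg["lr"], cfg["depth"])
        if s > best_score:
            best_score, best_cfg = s, cfg
    return best_score, best_cfg

best_score, best_cfg = random_search(200)
```

Real tooling adds exactly the concerns the text lists on top of this loop: tracking each trial's metadata and lineage, and scheduling trials efficiently across compute resources.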
7. Challenges around machine deception will increase
In spite of a barrage of “fake” news, we’re still in the early days of machine-generated content (fake images, video, audio, and text). At least for now, detection and forensic technologies have been able to ferret out fake video and images. But the tools for generating fake content are improving quickly so we must ensure that detection technologies are able to keep pace.
Machine deception does not just refer to machines deceiving humans, however. It also refers to machines deceiving machines (bots) and people deceiving machines (troll armies and click farms). Information propagation methods and click farms will continue to be used to fool ranking systems on content and retail platforms, and methods to detect and combat this will have to be developed as fast as new forms of machine deception are launched.
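One of the simplest detection approaches alluded to above is statistical outlier detection on account behaviour. The sketch below flags accounts whose click-through rate deviates sharply from the population; the feature, threshold and data are illustrative assumptions, not a description of any platform's actual system.

```python
import statistics

# Minimal sketch of click-farm detection via a one-sided z-score on
# click-through rates. Real systems use many more behavioural signals.

def flag_outliers(ctrs, z_threshold=3.0):
    """Return indices of accounts whose CTR is far above the population mean."""
    mean = statistics.mean(ctrs)
    stdev = statistics.pstdev(ctrs)
    if stdev == 0:
        return []
    return [i for i, ctr in enumerate(ctrs) if (ctr - mean) / stdev > z_threshold]

# 99 organic accounts with ~2 per cent CTR, one account clicking almost everything.
rates = [0.02] * 99 + [0.95]
print(flag_outliers(rates))  # [99]
```

The arms-race dynamic in the text shows up immediately: once attackers learn the threshold, they spread clicks across more accounts, and detection has to move to richer features.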
8. Questions will be raised around reliability and safety
It’s been heartening to see researchers and practitioners become seriously interested and engaged in issues pertaining to privacy, fairness, and ethics. But as AI systems become deployed in mission-critical applications including life and death scenarios, improved efficiency from automation will need to come with safety and reliability measurements and guarantees. The rise of machine deception in online platforms, as well as recent accidents involving autonomous vehicles, has cracked this issue wide open. In 2019, we can expect to hear safety discussed more intensively.
9. Access to more data will help companies to take advantage of data they didn’t generate
Because many of the models we rely on, including deep learning and reinforcement learning, are data-hungry, the anticipated winners in the field of AI have been huge companies or countries with access to massive amounts of data. But services for generating labelled datasets are beginning to use machine learning tools to help their human workers scale and improve their accuracy. And in certain domains, new tools like generative adversarial networks (GANs) and simulation platforms are able to provide realistic synthetic data, which can be used to train machine learning models. Thanks to new and secure privacy-preserving technologies, organisations can take advantage of data they didn’t create themselves. Consequently, smaller organisations will gain the ability to compete by using machine learning and AI.
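The train-on-synthetic-data idea can be illustrated end to end in miniature. In the sketch below, a class-conditional Gaussian sampler stands in for a GAN or simulation platform, and a nearest-centroid classifier stands in for the downstream model; all names and numbers are illustrative.

```python
import random

# Minimal sketch of training a model purely on synthetic labelled data.
# The Gaussian generator is a stand-in for a GAN or simulation platform.

def generate(rng, mean, n):
    """Sample n two-dimensional points around a class mean."""
    return [(rng.gauss(mean[0], 0.5), rng.gauss(mean[1], 0.5)) for _ in range(n)]

def centroid(points):
    return (sum(p[0] for p in points) / len(points),
            sum(p[1] for p in points) / len(points))

def classify(point, centroids):
    """Assign a point to the nearest class centroid (squared Euclidean distance)."""
    def dist2(c):
        return (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
    return min(centroids, key=lambda label: dist2(centroids[label]))

rng = random.Random(42)
synthetic = {"a": generate(rng, (0.0, 0.0), 200),
             "b": generate(rng, (5.0, 5.0), 200)}
centroids = {label: centroid(pts) for label, pts in synthetic.items()}
print(classify((4.8, 5.1), centroids))  # "b"
```

The point is the one in the text: no real, proprietary data was needed to produce a working classifier, which is precisely what levels the field for smaller organisations.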
Ben Lorica, Chief Data Scientist, O'Reilly Media