A self-driving car can generate over 10 GB of data for every mile it travels. How will the next generation of applications morph to handle the sprawling complexity of the Internet of Things?
Think about what the internet will look like 20 years from now. Today, people already use smartphones and dynamic web applications in ways we never imagined even five years ago. Users now spend more than four hours a day on their smartphones alone — to say nothing of desktop and tablet screen time — generating massive amounts of data as they consume media, scroll through social feeds, browse and buy products, and spend time in all the apps and services that make up our digital lives.
The applications of the future, however, have already begun to evolve beyond the smartphone. There’s a long-held perception that when someone talks about the Internet of Things, they’re referring to smart products such as connected refrigerators or Nest thermostats. In actuality, the sprawl of incredibly powerful IoT devices at the edge extends far beyond the smart home controlled through your Amazon Echo. To get a sense of what our connected future will look like, and the next generation of applications we’ll need to evolve to meet that complexity, look no further than self-driving cars.
A self-driving car generates more than 10 GB of data for every mile it travels, as Peter Levine of Andreessen Horowitz explains in a presentation on edge computing. A recent Datafloq report estimates that a single self-driving car will create two petabytes of data per year. You’re going to have processors in that car that are as powerful as a server, with applications executing astoundingly sophisticated logic. You’re not going to connect your car to the cloud to upload 10 GB for every mile driven.
Instead, we’ll need a new range of operating systems and application servers. These software platforms will have to connect not only back to the central database or cloud application, but to one another. Imagine the everyday occurrence of connected cars avoiding collisions on the road by sending LIDAR data back and forth over peer-to-peer (P2P) channels. If you’re building an application to run in a self-driving car, how do those real-time data streams add complexity to your application over time? What if we were talking about drones instead? The connected device form factor isn’t what matters; it’s the flexible application architectures underneath that will power the IoT revolution at the edge.
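As a purely illustrative sketch of that peer-to-peer idea, consider two cars exchanging compact state summaries and reacting locally, with no cloud round trip. The `Channel` class, message format, and distance threshold below are invented for this example; a real vehicle-to-vehicle protocol would look very different.

```python
import json
from queue import Queue

class Channel:
    """Simulated peer-to-peer link; stands in for a real V2V radio."""
    def __init__(self):
        self.inbox = Queue()

    def send(self, message: dict) -> None:
        # Serialise as JSON, as a real wire protocol would
        self.inbox.put(json.dumps(message))

    def receive(self) -> dict:
        return json.loads(self.inbox.get())

class Car:
    def __init__(self, car_id: str, position: float):
        self.car_id = car_id
        self.position = position  # metres along a shared lane
        self.channel = Channel()

    def broadcast_state(self, peer: "Car") -> None:
        # Each car shares a compact summary, not the raw 10 GB/mile stream
        peer.channel.send({"id": self.car_id, "position": self.position})

    def too_close(self, threshold: float = 10.0) -> bool:
        state = self.channel.receive()
        return abs(state["position"] - self.position) < threshold

# Two cars six metres apart exchange summaries and detect the hazard
# locally, without consulting a central server.
a, b = Car("a", 100.0), Car("b", 106.0)
a.broadcast_state(b)
alert = b.too_close()
print(alert)  # True: the peer reacts in real time
```

The point of the sketch is the architecture, not the physics: each node carries enough logic to act on its peers’ data immediately.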
It’s the web server and application infrastructure providers that will be tasked with ensuring these applications can scale to meet unprecedented data challenges. For them, Levine says, it’s not the power of an application that will matter the most: it’s the agility.
The real-time intelligent data processing needs of the IoT require a change in how we conceptualise applications. The apps of the future need the automated logic and intelligent code to adapt in an instant to meet the data, load balancing, and traffic demands of smart devices and experiences that don’t even exist yet. In essence, the internet of the future will be made up of applications that are almost alive.
Breaking down the architecture
Look beneath the surface of any web experience, and you’ll find a mountain of data. Think about the amount of data available for analysis in a web server, a load balancer, or your application. Beyond the traffic flowing in and out, you can drill down into a treasure trove of user and device data, browsing habits and behaviours, and extrapolate that information to uncover any number of larger business or technological trends. The question is, what are you doing with all the data you collect?
Data is useless if it doesn’t somehow give you intelligence. From that intelligence, you can take action. We need to transition from a mindset focused on gathering data to one geared toward generating insights, aggregating that data into rich analytics, and channelling that intelligence back into automated processes. Infused with artificial intelligence and machine learning, this automation can not only justify your investment in IoT devices and the sensor data you collect but also help your application architecture adapt dynamically like a living organism would.
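That data-to-intelligence-to-action pipeline can be sketched in a few lines. The metric, the per-instance limit, and the scaling rule here are invented for illustration; the shape of the loop is what matters: aggregate raw measurements, derive a decision, and feed it back into an automated process.

```python
from statistics import mean

def decide_capacity(request_rates: list[float],
                    per_instance_limit: float = 100.0) -> int:
    """Turn raw metrics (data) into an aggregate (intelligence)
    and then into an action: how many instances to run."""
    if not request_rates:
        return 1
    load = mean(request_rates)                    # aggregate the raw data
    needed = int(load // per_instance_limit) + 1  # derive an action
    return max(needed, 1)

# Raw per-minute request rates sampled across a fleet of devices
samples = [80.0, 120.0, 250.0, 310.0]
print(decide_capacity(samples))  # 2 instances for an average load of 190
```

In a real system, the decision would be fed to an orchestrator rather than printed, closing the loop between analytics and automation.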
To understand what makes up the architecture of a living application, first let’s examine how we got here.
Going back 10 years, to when service-oriented architecture (SOA) was the design of choice, we were all using middleware and writing applications in Java, using tools like IBM WebSphere and Oracle WebLogic. Hardware-based application delivery controllers were used to accelerate those applications and scale them for the web, and virtual machines were the exciting innovation that helped us better utilise our compute resources. These were good times. We knew what we were doing.
Then the cloud happened. When cloud infrastructure went mainstream, compute resources and storage networks were suddenly available at incredibly low cost and were so convenient that people didn’t have to buy physical hardware or build their own servers. Any person with an idea could build an application running on cloud computing resources for pennies an hour. This spawned a whole new influx of developers building full-stack applications and founding companies to market them. A new breed of rock stars like Mark Zuckerberg and Elon Musk built applications that changed the world.
The other crucial factor was open source software (OSS). Open source has been around almost as long as software itself, but for decades it remained on the fringes of the commercial software realm. In tandem with the cloud, open source became prolific and ubiquitous, and not just in operating systems. Open source moved into databases: relational databases, in-memory databases, and more. From there, we began seeing open source tools for middleware, application servers, frameworks, hypervisors, automation tools, and orchestration platforms. Now, we’re even seeing artificial intelligence and machine learning tools such as TensorFlow available as open source. You can build a whole application with a full stack entirely from open source tools.
Microservices and service mesh: A living app’s nervous system
In today’s world, you need to move faster and be smarter than ever. You’ve got to deliver killer digital experiences, and speed to market is everything. The question we face now is, even armed with the endless scalability of the cloud and a wealth of open source tools, what is going to slow you down?
First, complexity continues to increase exponentially as you roll out these applications. Think about containers. They were the hype du jour two years ago, when all the hubbub surrounded Docker and registries. Over the past 18 months, that has transitioned into the next phase, in which orchestration platforms like Docker Swarm and Kubernetes have stolen the spotlight. Now we’re entering the third phase of this container revolution, centred around service mesh architectures. You can think of a service mesh as a nervous system for microservices.
Microservices and service mesh are where we should have started. The modular flexibility of microservice architectures, combined with service mesh networks built on open platforms like Istio, provides answers to some of the foundational questions we’ve been asking for years: How do applications actually run on container platforms? How do you get them to talk to and connect with one another?
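To make the “talk and connect” question concrete, a service mesh lets you express routing between service versions declaratively rather than in application code. The fragment below is a hedged illustration using Istio’s VirtualService resource; the service name, subsets, and traffic split are placeholders invented for this example.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: telemetry-route        # hypothetical service name
spec:
  hosts:
    - telemetry                # the in-mesh service being addressed
  http:
    - route:
        - destination:
            host: telemetry
            subset: v1
          weight: 90           # keep most traffic on the stable version
        - destination:
            host: telemetry
            subset: v2
          weight: 10           # canary a new version gradually
```

Because the mesh, not the application, owns this logic, the traffic split can be changed at runtime without redeploying a single service.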
Now, imagine a microservices and service mesh-based solution applied back to the connected car. All the intelligent code and processes within the connected car of the future effectively become additional services within your microservices architecture, scaling and allocating resources in real time as your computer on wheels reacts to its surroundings.
What is a living application?
The game has changed. In 1993, the original Pentium chip had 3.1 million transistors. The “brain” of the new iPhone X, the A11 Bionic chip, has 4.3 billion transistors, more than a thousand times as many. The IoT devices of the future will have exponentially more. To process all the data, compute power, and fluctuating traffic loads of a next-generation internet populated by these powerful devices, an application needs to act as a brain of its own, making real-time decisions.
Living applications act like an organism, responding instinctively (in this case, using automated logic to make data-driven decisions) to grow or shrink based on the environment they’re in, spawning and spinning down additional instances as needed. The application should be able to self-heal if it’s broken and defend itself if attacked. These apps also need to free organisations to maintain their legacy infrastructure while transitioning to what comes next.
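That grow-shrink-heal behaviour is, at heart, a control loop. The toy sketch below is not any real orchestrator’s API: the `Instance` class, the capacity rule, and the health model are all invented for illustration. It simply shows one reconciliation pass that heals failures and resizes to match demand.

```python
class Instance:
    """Stand-in for one running copy of a service."""
    def __init__(self):
        self.healthy = True

def reconcile(instances: list[Instance], load: float,
              capacity_per_instance: float = 50.0) -> list[Instance]:
    """One pass of a 'living application' control loop:
    self-heal broken instances, then grow or shrink to match load."""
    # Self-heal: drop anything unhealthy so it can be replaced
    instances = [i for i in instances if i.healthy]
    desired = max(1, int(load // capacity_per_instance) + 1)
    while len(instances) < desired:      # grow
        instances.append(Instance())
    while len(instances) > desired:      # shrink
        instances.pop()
    return instances

fleet = [Instance(), Instance()]
fleet[0].healthy = False                 # simulate a failure
fleet = reconcile(fleet, load=120.0)
print(len(fleet))  # 3: the loop healed the failure and scaled to demand
```

Production systems such as Kubernetes run loops of exactly this shape continuously; the “living” quality comes from the loop never stopping.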
As for how you move those existing applications forward into this new era, it’s all about reducing the complexity of management and integration. You don’t want to throw out your old applications when you can still extract value from those investments by integrating them into your modern, adaptable architecture.
Because a living application can administer and manage itself, it lets you focus on adding new features and capabilities rather than on day-to-day administration. That’s the definition of automation. It frees up time, pounds, and resources, letting you concentrate on building that next best-selling self-driving car. Once an application begins reacting and scaling in response to real-time stimuli, like an organism adapting to an ever-changing digital world, a connected future of intelligent devices doesn’t seem so daunting anymore.
Gus Robertson, CEO, NGINX