On this podcast we have a company spotlight for you on Canonical. We are talking to VP of Products Steve George about the products and partnerships the company is best known for.
Let’s start with a bit of background on Canonical. What are you best known for?
We make a Linux distribution, and the product we are best known for is Ubuntu. It's one of basically three or four commercial Linux operating systems out there, alongside names many people will know such as Red Hat and SUSE. What we have been doing at Canonical over the last few years is particularly focusing on cloud computing and the impact it has on the way developers build and deploy applications into their businesses. We are really trying to understand what is going to change now that we are developing and deploying applications in the cloud, and what that means for the sorts of things a Linux distribution needs to deliver.
Now if you look at the Ubuntu.com website you will see the phrase Hyperscale. Explain what we mean by that, Steve?
What have been the business needs that have pushed this forward then?
The main thing, I think, has been that a lot of the developments around Hyperscale computing are really about agility in development. A common phrase you hear people talking about is the fault line between development and operations, and reducing that fault line gives you much greater speed of time to market. Rather than the traditional enterprise cycle of developing an application for 6 or 9 months and then doing a deployment that takes another 3 to 4 months, we are trying to reduce that so you can get those capabilities out to your business, or out to your customers, much faster, so that you can do continuous deployment. So it is really about enabling greater agility and greater speed, and there are also lots of efficiencies you can pick up while doing that which help on the cost-saving side of the equation.
Well, you are famous, as you mentioned, for Ubuntu. For those not familiar with the system, give us a bit of background on it?
Ubuntu is a Linux distribution, and we work on both client laptops and servers. We have a very agile release method, with two types of release: every 6 months we release a new version of Ubuntu with the latest features and the latest pieces of software in it, and every 2 years we do a big release, which is a long term support (LTS) release. The critical difference for businesses is that this answers a dual problem they have at a technology level: how to get the latest and greatest features to build their software on, because that is what developers want to work with, while also making sure the systems they run internally are stable, secure and maintained. People running systems in production can keep those systems stable and secure for a much longer period; our maintenance period on an LTS release is 5 years, for example. So someone deploying a CRM system would choose the LTS release, while someone doing fast development on a new feature or capability would choose our 6 month release. In general we have tended to focus on fast-moving new technology, things like Hyperscale and big data, rather than some of the slower-moving aspects of the enterprise such as Oracle database compatibility.
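The cadence Steve describes can be sketched as a small calculation. This is an illustrative Python sketch, not a Canonical tool: it assumes Ubuntu's real-world YY.MM version convention, with releases in April and October and an LTS every even-year April, and uses the 5-year LTS maintenance window quoted above (the 2-year interim window here is purely a placeholder figure).

```python
from datetime import date

def release_info(year: int, month: int) -> dict:
    """Classify an Ubuntu release under the cadence described above.

    Assumptions: releases land in April (4) and October (10); the April
    release of an even year is the LTS; LTS maintenance runs 5 years as
    quoted in the interview, and the interim window is a placeholder.
    """
    is_lts = (month == 4 and year % 2 == 0)
    support_years = 5 if is_lts else 2
    return {
        "version": f"{year % 100:02d}.{month:02d}",
        "lts": is_lts,
        "supported_until": date(year + support_years, month, 1),
    }

# A production CRM deployment would pick the LTS release...
print(release_info(2012, 4))   # 12.04 is an LTS, maintained into 2017
# ...while fast feature development tracks the latest interim release.
print(release_info(2012, 10))  # 12.10, shorter support window
```

The point of the sketch is the dual track: the same cadence yields both a fast-moving release for developers and a long-lived one for production.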
So, from your core product in Ubuntu to this new relationship with HP: tell us how that relationship came about?
One of the things we noticed from our customers and users in particular, as they start to use this more agile scale-out capability, is that one of the implications beyond the software level is what happens at the hardware level. You now move to a system in the cloud, or with these very agile deployments, where what you want to do is get the software out there and scale in a horizontal fashion: you want your application to run across multiple servers, and to add more scale to the application you simply add more servers rather than getting a larger server. That is how the architecture changes. Of course, at a hardware level that has a lot of implications as well, because there what you are trying to do is be as efficient as possible, using the smallest amount of energy and the smallest amount of physical space that you can. We have been thinking about Hyperscale for a while, working on things like ARM CPUs, which have traditionally used a lot less power than some of the x86 ones. So we started working with HP on the Moonshot project, which is about rethinking how servers are designed, configured and deployed to take advantage of these new software architectural capabilities.
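The horizontal scaling idea above, adding servers rather than buying a bigger one, can be shown with a toy round-robin sketch in Python. All names and numbers here are hypothetical illustrations, not any Canonical or HP tooling:

```python
def spread(requests, servers):
    """Round-robin a batch of requests across a fleet of servers.

    Toy illustration of scaling horizontally: to handle more load you
    grow the `servers` list instead of replacing one machine with a
    bigger one.
    """
    buckets = {s: [] for s in servers}
    for i, req in enumerate(requests):
        buckets[servers[i % len(servers)]].append(req)
    return buckets

requests = list(range(12))
# Three servers: each carries 4 of the 12 requests.
three = spread(requests, ["s1", "s2", "s3"])
print({s: len(b) for s, b in three.items()})
# Scale out by adding a fourth server: per-server load drops to 3.
four = spread(requests, ["s1", "s2", "s3", "s4"])
print({s: len(b) for s, b in four.items()})
```

Capacity grows by extending the fleet, which is exactly the architectural shift that puts pressure on density and power efficiency at the hardware level.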
When you were looking at the developments in your operating system and core products, what gaps did you identify in your offerings over the last 12 months that you have had to respond to in order to move things forward?
I think the interesting gaps we are trying to respond to right now are probably at two levels. At the hardware level, the work we have been doing with Moonshot specifically is about achieving the highest possible density in servers. To some degree this is an old problem, because server manufacturers and people in data centres have wanted to get as much density as they can out of servers for a long time, but some of the new architectural capabilities that are available, for instance around ARM CPUs, mean there is a new step down in energy usage and a new level of density that we can manage. That does have implications for the operating system, though, because the operating system has to be hardware aware: it has to be able to boot this new hardware and bring it online whenever there is a temporary need. Imagine an internal application, maybe a financial application, that runs at a steady state for most of the month and then suddenly has a big pile of work to do when it has to work out how to pay all of the employees. What you want in those circumstances is for the system to chug along on, say, 15 servers and then, when there is a sudden spike, to double the amount of hardware being used. Of course, at a hardware level, you want those extra machines not to be using electricity for 29 days of the month, then to bring them up for the 1 day they are needed and power them back down after the temporary workload spike has finished. There are implications for the software in being able to boot hardware, manage hardware and understand hardware's capabilities, and that is certainly something we have been thinking about a lot with Moonshot.
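The payroll example above amounts to a simple capacity rule: keep a baseline running and power on extra machines only while the spike lasts. A minimal Python sketch, with hypothetical numbers (the 15-server baseline comes from the example; per-node capacity is an assumption):

```python
import math

def nodes_needed(baseline: int, load: float, capacity_per_node: float) -> int:
    """Return how many servers should be powered on for the current load.

    Sketch of the spike scenario above: a steady-state baseline stays
    running, and extra hardware is booted only while a workload spike
    lasts, so idle machines draw no power the rest of the month.
    """
    required = math.ceil(load / capacity_per_node)
    return max(baseline, required)

# Steady state: the job chugs along on its 15-server baseline.
print(nodes_needed(baseline=15, load=1000, capacity_per_node=100))   # 15
# Payroll day: load spikes and the active hardware roughly doubles.
print(nodes_needed(baseline=15, load=3000, capacity_per_node=100))   # 30
```

The operating-system work Steve mentions is what sits underneath a rule like this: actually booting, enumerating and powering down the physical nodes the calculation asks for.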
Has cost saving for your clients been a key driver of the developments at Canonical? Big data has traditionally been very expensive to manage, so how have you made it cost efficient for your customers?
I think one of the challenges with big data in the past has been that, at both the hardware and software levels, it has been quite costly and complicated. At the software level, big data is starting to become much more ubiquitous, and Ubuntu has a lot of big data capabilities; we are doing a lot of work on the software to make those capabilities really easy to deploy and manage, and a lot of the capabilities around cloud computing help put that management framework in place. So that side has been getting easier, but you still need a large number of systems to process all of that big data. When we think about how an enterprise can manage big data cost efficiently and use its hardware as efficiently as possible, we talk about making a customer's internal servers act just like a public cloud. Rather than only using the Amazon cloud, we take a customer's internal servers and start treating those like a cloud: right now we want to use all of our hardware, or a significant part of it, for this big data job, and then we are going to use it for something else later. That flexibility and dynamism in the way customers can use their hardware is a really big thing in making it as cost efficient as possible.
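The idea of treating internal servers like a cloud, borrowing hardware for a big data job and handing it back afterwards, can be sketched as a toy allocator. This is a hypothetical illustration of the pooling concept, not Canonical's actual orchestration tooling:

```python
class ServerPool:
    """Toy model of an internal fleet treated like a cloud: jobs borrow
    machines from a shared pool and return them when they finish."""

    def __init__(self, total: int):
        self.total = total
        self.free = total

    def allocate(self, job: str, count: int) -> int:
        """Grant up to `count` servers to a job; returns how many it got."""
        granted = min(count, self.free)
        self.free -= granted
        return granted

    def release(self, count: int) -> None:
        """Hand servers back to the pool for other workloads."""
        self.free = min(self.total, self.free + count)

pool = ServerPool(total=40)
# Dedicate most of the fleet to tonight's big data job...
got = pool.allocate("big-data-batch", 30)
print(got, pool.free)   # 30 servers granted, 10 left free
# ...then return the hardware for other workloads afterwards.
pool.release(got)
print(pool.free)        # all 40 free again
```

The cost efficiency comes from the same machines serving different workloads over time instead of each workload owning dedicated, mostly idle hardware.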
Well, we have seen a massive increase in the data that businesses need to handle, and we have seen it during a period in which the markets have suffered from the global recession. Has this combination opened up opportunities for a company with cost-saving solutions like yours, do you think?
I think so. One of the things that was happening before this tough period was that people were very comfortable with the technology they had, so as they added more data or more processing to their internal systems, the balance of risk favoured staying with some of the expensive proprietary systems, because they understood those systems. Now, both because of the rate of growth in big data requirements and because everyone's budgets are under so much pressure, the balance of risk favours something that is a step function better than what was available before. Open source, and from my perspective particularly Ubuntu, allows us to deliver much more efficient, capable and scalable systems to customers at a tenth of the price they are paying for the proprietary systems. It does mean there is new software for them to learn, so there is a cost of change, but they get much better functionality and capabilities at the other end of it. That has had a big impact, and I think people have simply become more aware of the richness of the solutions out there as big data issues have become more common. That is helping organisations to experiment with some of the open source offerings available.
If you could pick out the most important benefits that Ubuntu and Moonshot offer organisations, what would they be?
I think that for organisations looking for a greater degree of agility in their application development and deployment methods, who are in a situation where they want the big cost savings and efficiency you can get from specialised hardware, Moonshot and Ubuntu are a great solution.
Finally, your predictions for the Hyperscale trend over the next few years: where are we going next?
I am looking forward to seeing Hyperscale come much more broadly into the market, and to some of the capabilities around management and deployment of software becoming much more mainstream, because they are widely applicable. There are lessons people have learnt in the Hyperscale space that will be coming into the main market over the next year or so.