Google's TensorFlow Serving puts machine learning models into production

Google has been steadily working on integrating machine learning into its search engine and Android OS for some time, and now the company has launched a new open-source project called TensorFlow Serving that helps developers take their trained machine learning models into production.

The new project builds on Google's TensorFlow machine learning library, which developers use to build and train their models. Although it is designed around TensorFlow, TensorFlow Serving can also be extended to support other types of models and data.

Other projects already exist to help with building and training machine learning algorithms, but TensorFlow Serving puts its emphasis on running those models in production environments.

Developers first train their models in TensorFlow and then use TensorFlow Serving's APIs to respond to requests from their clients. The project can also tap into a machine's GPU resources to speed up processing.
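
To make that workflow concrete, the sketch below trains a trivial model, exports it in the SavedModel format that TensorFlow Serving loads, and queries the running server. It is a minimal illustration only, assuming TensorFlow 2.x and a recent TensorFlow Serving release with the REST API enabled; the model name "demo", the directory /tmp/models/demo and the host and port are placeholders rather than anything from Google's announcement.

    import tensorflow as tf
    import requests  # used only for the example client call

    # Train a trivial model in TensorFlow (a stand-in for a real training job).
    inputs = tf.keras.Input(shape=(1,))
    outputs = tf.keras.layers.Dense(1)(inputs)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="sgd", loss="mse")
    model.fit([[1.0], [2.0], [3.0]], [[2.0], [4.0], [6.0]], epochs=200, verbose=0)

    # Export it as a SavedModel into a numbered version directory,
    # which is the layout TensorFlow Serving watches.
    tf.saved_model.save(model, "/tmp/models/demo/1")

    # Serve it separately, e.g. from a shell:
    #   tensorflow_model_server --rest_api_port=8501 \
    #       --model_name=demo --model_base_path=/tmp/models/demo

    # Query the running server over its REST API.
    resp = requests.post(
        "http://localhost:8501/v1/models/demo:predict",
        json={"instances": [[5.0]]},
    )
    print(resp.json())  # e.g. {"predictions": [[...]]}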

TensorFlow Serving gives developers the freedom to experiment with different algorithms and models while keeping a stable architecture and API, so the serving infrastructure stays the same as they refine a model or its output shifts with new incoming data.
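
In practice that stability comes from the numbered version directories the server watches: exporting a retrained model as the next version under the same base path lets TensorFlow Serving pick it up and swap it in, while clients keep calling the same model name and endpoint. A minimal sketch, reusing the placeholder path from the example above:

    import tensorflow as tf

    # A stand-in for a refined or retrained model (placeholder only).
    inputs = tf.keras.Input(shape=(1,))
    outputs = tf.keras.layers.Dense(1)(inputs)
    refined_model = tf.keras.Model(inputs, outputs)

    # Export it as version 2 under the same base path; by default the model
    # server polls /tmp/models/demo (a placeholder) and serves the highest
    # version it finds, so the client code above does not change.
    tf.saved_model.save(refined_model, "/tmp/models/demo/2")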

Instead of being written in Google's Go programming language, TensorFlow Serving is written in C++ and has been optimised for maximum performance. Google claims that it is able to handle 100,000 queries per second per core when running on a 16-core Xeon machine.

Developers looking for more information can find TensorFlow Serving on GitHub, along with tutorials on how to get started using it.

Image Credit: Sarah Holmlund / Shutterstock