
New features and capabilities added to the Google Cloud Platform


Google has released new updates to its Cloud AI Platform that improve training and prediction capabilities for machine learning and deep learning models.

Google’s AI Platform, an end-to-end Machine Learning Platform as a Service (ML PaaS), enables data scientists, ML developers, and AI engineers to deliver their projects faster. It provides services to manage the lifecycle of machine learning and deep learning models, and it offers the building blocks needed to develop and deploy complex machine learning models.

The latest updates make training and deployment of ML models on the Google Cloud Platform more powerful and flexible.

Model Development

Support for running custom containers to train models on Cloud AI Platform is now available to customers. This capability lets users bring their own Docker container images, with their preferred ML frameworks pre-installed, and run them on the AI Platform (a sketch of submitting such a job follows the list below).

  • Custom container support removes the limitations of the pre-built, cloud-managed training environments.
  • Customers can build custom container images with specific language versions, frameworks, and tools for their training jobs.
  • Data scientists and ML developers can bring their own frameworks and libraries through custom containers.
  • Developers can build and test container images locally before running them on the cloud platform.
  • DevOps teams can automate the deployment process by integrating the AI Platform into an existing Continuous Integration/Continuous Delivery (CI/CD) pipeline.
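As an illustration, the following minimal sketch submits a custom-container training job through the Google API Python client. The project ID, job ID, container image URI, and region are placeholder values, not details taken from the announcement.

```python
# A minimal sketch of submitting a custom-container training job to
# AI Platform Training via the Google API discovery client.
# Project ID, job ID, image URI, and region below are placeholders.
from googleapiclient import discovery

project_id = "my-gcp-project"                            # placeholder project
job_id = "custom_container_job_001"                      # must be unique per project
image_uri = "gcr.io/my-gcp-project/my-trainer:latest"    # your pre-built trainer image

training_inputs = {
    "scaleTier": "BASIC",
    "masterConfig": {"imageUri": image_uri},  # run this container on the master replica
    "region": "us-central1",
}

ml = discovery.build("ml", "v1")  # AI Platform (Cloud ML Engine) API client
request = ml.projects().jobs().create(
    parent=f"projects/{project_id}",
    body={"jobId": job_id, "trainingInput": training_inputs},
)
response = request.execute()
print(response.get("state"))  # e.g. QUEUED once the job is accepted
```

The same job can also be submitted from the command line with gcloud ai-platform jobs submit training.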

Google Cloud has also launched scale tiers, a set of predefined cluster specifications based on Google Compute Engine (GCE) VM classes, to streamline the process of choosing the right hardware configuration for an ML training job. Customers can also select the CUSTOM tier to specify machine configurations for the master, workers, and parameter servers individually, as in the sketch below.
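For example, a CUSTOM scale-tier specification might look like the following; the machine types and replica counts are illustrative only.

```python
# A minimal sketch of a CUSTOM scale-tier specification for an
# AI Platform training job; machine types and counts are illustrative.
custom_tier_inputs = {
    "scaleTier": "CUSTOM",
    "masterType": "n1-highmem-8",         # machine running the master replica
    "workerType": "n1-highmem-8",         # machine type for worker replicas
    "workerCount": 4,                     # number of worker replicas
    "parameterServerType": "n1-standard-4",
    "parameterServerCount": 2,
    "region": "us-central1",
}
# This dict can be passed as "trainingInput" in the same
# projects.jobs.create request shown in the previous sketch.
```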

Model Deployment and Inference

Inference is the process of serving a fully trained model so that it returns predictions for new data.

  • Customers can use the AI Platform Prediction service to infer target values for new data.
  • Customers can serve trained machine learning models hosted on the Google Cloud AI Platform.
  • The Cloud AI Platform Prediction service lets customers choose from a range of Google Compute Engine machine types to run their machine learning model (see the prediction sketch after this list).
  • To accelerate inference, customers can attach GPUs such as the NVIDIA T4, or use TPUs.
  • Customers who use the AI Platform can now log prediction requests and responses directly to BigQuery to analyze them and detect skew and outliers.
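As a rough illustration, the sketch below requests online predictions from a deployed model using the Google API Python client. The project, model, and version names are placeholders, and the instance format depends on your model's serving signature.

```python
# A minimal sketch of requesting online predictions from a model
# deployed on AI Platform Prediction, using the Google API Python client.
# Project, model, and version names are placeholders; the instance format
# depends on the model's serving signature.
from googleapiclient import discovery

project_id = "my-gcp-project"
model_name = "my_model"
version_name = "v1"  # optional: omit the version to use the model's default

instances = [{"feature_a": 1.0, "feature_b": 2.0}]  # hypothetical input schema

ml = discovery.build("ml", "v1")
name = f"projects/{project_id}/models/{model_name}/versions/{version_name}"
response = ml.projects().predict(name=name, body={"instances": instances}).execute()

if "error" in response:
    raise RuntimeError(response["error"])
print(response["predictions"])
```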

The availability of enhanced features, including custom containers and the GKE-based prediction service, makes the platform more flexible and scalable for training and deploying machine learning models in the cloud.


Deepali Kulshrestha

Salesforce Certified Developer | Delivery Management Head
Deepali, a certified Salesforce Advanced Administrator, Salesforce Developer, and Certified Scrum Product Owner (CSPO) at Cloud Analogy, is known in the industry for delivering successful projects with end-to-end testing. She is a globally renowned industry stalwart in managing operations and delivery planning to drive business performance management.

Hire the best Salesforce Development Company. Choose certified Salesforce Developers from Cloud Analogy now.

