An e-commerce website uses a machine learning model for real-time product recommendations based on cart items. The current workflow involves Pub/Sub messaging, BigQuery storage for predictions, and model updates in Cloud Storage. The goal is to minimize prediction latency and the effort required to update the model.
The suggested solution is D: use the RunInference API with WatchFilePattern inside a Dataflow job that hosts the model and serves predictions. This keeps inference in-process on the Dataflow workers (no network hop to a separate prediction endpoint, which minimizes latency), and WatchFilePattern watches the Cloud Storage bucket and hot-swaps the model whenever a new file lands, so frequent model updates require no pipeline redeployment.
Your company manages an e-commerce website. You developed an ML model that recommends additional products to users in near real time based on items currently in the user’s cart. The workflow will include the following processes:
1. The website will send a Pub/Sub message with the relevant data and then receive a message with the prediction from Pub/Sub
2. Predictions will be stored in BigQuery
3. The model will be stored in a Cloud Storage bucket and will be updated frequently
You want to minimize prediction latency and the effort required to update the model. How should you reconfigure the architecture?