AWS Machine Learning Engineer - Intermediate Quiz

This quiz covers operationalizing ML models, including monitoring for drift, feature management, and deployment strategies.


# What does "Data Drift" mean in the context of SageMaker Model Monitor?

# How can you serve two versions of a model (A and B) on a single SageMaker Endpoint to test performance?
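As a reference point, a minimal sketch of an endpoint config with two production variants splitting traffic 80/20; model and endpoint names are placeholders.

```python
# Sketch: one endpoint, two production variants sharing the traffic.
import boto3

sm = boto3.client("sagemaker")

sm.create_endpoint_config(
    EndpointConfigName="churn-ab-config",
    ProductionVariants=[
        {
            "VariantName": "model-a",
            "ModelName": "churn-model-a",
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 0.8,  # ~80% of requests
        },
        {
            "VariantName": "model-b",
            "ModelName": "churn-model-b",
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 0.2,  # ~20% of requests
        },
    ],
)

sm.create_endpoint(EndpointName="churn-ab", EndpointConfigName="churn-ab-config")
```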

# What is the primary purpose of the SageMaker Feature Store?

# Which SageMaker Feature Store component provides low-latency access for real-time inference?
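For illustration, a minimal sketch of a low-latency feature lookup at inference time; the feature group name and record identifier are placeholders.

```python
# Sketch: fetch a single record from the Feature Store at request time.
import boto3

featurestore_runtime = boto3.client("sagemaker-featurestore-runtime")

response = featurestore_runtime.get_record(
    FeatureGroupName="customer-features",
    RecordIdentifierValueAsString="customer-12345",
)

# Turn the returned record into a simple feature-name -> value mapping.
features = {f["FeatureName"]: f["ValueAsString"] for f in response["Record"]}
print(features)
```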

# How do you secure a SageMaker Notebook to prevent data exfiltration to the public internet?
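As a sketch only, creating a notebook instance with direct internet access disabled and attached to a private VPC subnet; the subnet, security group, and role identifiers are placeholders.

```python
# Sketch: notebook instance with no direct internet access. Outbound traffic
# then flows only through the VPC (e.g., VPC endpoints or a NAT you control).
import boto3

sm = boto3.client("sagemaker")

sm.create_notebook_instance(
    NotebookInstanceName="secure-notebook",
    InstanceType="ml.t3.medium",
    RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    SubnetId="subnet-0abc1234",
    SecurityGroupIds=["sg-0abc1234"],
    DirectInternetAccess="Disabled",
)
```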

# What is "SageMaker Pipelines"?

# How do you deploy a custom scikit-learn model trained on your laptop to SageMaker?
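For context, a minimal sketch assuming the pickled model has been packaged as model.tar.gz, uploaded to S3, and paired with an inference.py that implements model_fn() to load it; the paths, role, and framework version are placeholders.

```python
# Sketch: wrap a locally trained scikit-learn model and deploy it to an endpoint.
from sagemaker.sklearn.model import SKLearnModel

model = SKLearnModel(
    model_data="s3://my-bucket/models/model.tar.gz",  # packaged model artifact
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    entry_point="inference.py",                       # loads/serves the model
    framework_version="1.2-1",
)

predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")
print(predictor.predict([[5.1, 3.5, 1.4, 0.2]]))
```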

# What happens if you enable "Inter-Container Traffic Encryption" for a training job?
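As a reference, a minimal sketch of turning the setting on for a distributed training job via the Python SDK; the image URI, role, and S3 paths are placeholders.

```python
# Sketch: encrypt traffic between training containers (relevant when
# instance_count > 1); expect some additional runtime overhead.
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_count=2,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/output",
    encrypt_inter_container_traffic=True,
)

estimator.fit({"train": "s3://my-bucket/train"})
```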

# Which service orchestrates the "Human-in-the-loop" workflow for labeling training data?

# What is "Model Quality Drift"?

# How do you optimize inference latency for a deep learning model on SageMaker?

# What is the "SageMaker Model Registry"?

# Which deployment option allows you to test a new model in production without showing predictions to users (Shadow Mode)?
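For context, a minimal sketch of an endpoint config with a shadow variant alongside the live variant; model and config names are placeholders.

```python
# Sketch: the production variant answers callers; the shadow variant receives a
# copy of the traffic and only its metrics/logs are recorded.
import boto3

sm = boto3.client("sagemaker")

sm.create_endpoint_config(
    EndpointConfigName="churn-shadow-config",
    ProductionVariants=[
        {
            "VariantName": "live-model",
            "ModelName": "churn-model-v1",
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 1.0,
        }
    ],
    ShadowProductionVariants=[
        {
            "VariantName": "shadow-model",
            "ModelName": "churn-model-v2",
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 1.0,
        }
    ],
)
```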

# How can you run a script automatically every time a Notebook Instance starts (e.g., to install a specific library)?
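As an illustration, a minimal sketch of a lifecycle configuration whose OnStart script runs on every notebook start; the config name, library, and pinned version are placeholders.

```python
# Sketch: lifecycle config that pins a library version on each notebook start.
import base64
import boto3

on_start = """#!/bin/bash
set -e
sudo -u ec2-user -i <<'EOF'
source /home/ec2-user/anaconda3/bin/activate python3
pip install "scikit-learn==1.3.2"
source /home/ec2-user/anaconda3/bin/deactivate
EOF
"""

sm = boto3.client("sagemaker")

sm.create_notebook_instance_lifecycle_config(
    NotebookInstanceLifecycleConfigName="install-sklearn-on-start",
    OnStart=[{"Content": base64.b64encode(on_start.encode()).decode()}],
)

# Attach it when creating the notebook instance via the
# LifecycleConfigName parameter of create_notebook_instance.
```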

# What is "Bias Drift" in Model Monitor?

# Which IAM permission is required for a SageMaker Role to write artifacts to S3?
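For reference, a minimal sketch of attaching an inline policy that lets the execution role write objects to its artifact bucket; the role, policy, and bucket names are placeholders.

```python
# Sketch: grant the SageMaker execution role write access to the artifact bucket.
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": ["arn:aws:s3:::my-sagemaker-bucket/*"],
        }
    ],
}

iam.put_role_policy(
    RoleName="SageMakerExecutionRole",
    PolicyName="AllowArtifactUpload",
    PolicyDocument=json.dumps(policy),
)
```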

# How does SageMaker "Data Parallel" distributed training work?
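As a sketch, enabling the SageMaker distributed data parallel library on a multi-GPU PyTorch job: each GPU works on a different shard of every batch and gradients are averaged across workers. The script, role, paths, and framework version are placeholders.

```python
# Sketch: data-parallel training across two multi-GPU instances.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    framework_version="2.0.1",
    py_version="py310",
    instance_count=2,
    instance_type="ml.p4d.24xlarge",
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)

estimator.fit({"train": "s3://my-bucket/train"})
```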

# What is the "Offline Store" in Feature Store backed by?

# Which SageMaker tool helps you debug training jobs by capturing tensors?
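For context, a minimal sketch of attaching a Debugger hook configuration to a framework training job so that tensor collections are saved to S3; the script, role, paths, and framework version are placeholders.

```python
# Sketch: capture gradient and weight tensors during training with SageMaker Debugger.
from sagemaker.pytorch import PyTorch
from sagemaker.debugger import DebuggerHookConfig, CollectionConfig

estimator = PyTorch(
    entry_point="train.py",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    framework_version="2.0.1",
    py_version="py310",
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    debugger_hook_config=DebuggerHookConfig(
        s3_output_path="s3://my-bucket/debugger",
        collection_configs=[
            CollectionConfig(name="gradients"),
            CollectionConfig(name="weights"),
        ],
    ),
)

estimator.fit({"train": "s3://my-bucket/train"})
```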

# How do checkpoints work with "Managed Spot Training"?
