AWS Machine Learning Engineer - Advanced Quiz
This quiz tests your ability to design complex ML workflows, optimize inference costs, and handle custom deployment scenarios.
How do you implement a "Serial Inference Pipeline" on SageMaker?
Serial pipelines let you chain preprocessing (e.g., a scikit-learn featurizer) and inference (e.g., XGBoost) into one API call: the containers are co-located on the same instance and invoked in sequence, with no network round-trips between steps.
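As a sketch, the request body for boto3's `create_model` might look like this (image URIs, S3 paths, and the role ARN are placeholders):

```python
# Sketch of a serial inference pipeline: CreateModel accepts a Containers
# list that SageMaker invokes in order on the same instance.
# All URIs/ARNs below are placeholders.
pipeline_model = {
    "ModelName": "preprocess-then-predict",
    "ExecutionRoleArn": "arn:aws:iam::123456789012:role/SageMakerRole",
    "Containers": [
        {   # Step 1: scikit-learn preprocessing (featurizer)
            "Image": "<account>.dkr.ecr.<region>.amazonaws.com/sklearn-preprocess:latest",
            "ModelDataUrl": "s3://my-bucket/preprocess/model.tar.gz",
        },
        {   # Step 2: XGBoost inference on the transformed features
            "Image": "<account>.dkr.ecr.<region>.amazonaws.com/xgboost:latest",
            "ModelDataUrl": "s3://my-bucket/xgb/model.tar.gz",
        },
    ],
}
# sagemaker_client.create_model(**pipeline_model)  # one endpoint, no network hops
```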
What is the benefit of "SageMaker Inference Recommender"?
It removes the guesswork of "Which instance type should I use?" by running real benchmarks.
How does "Model Parallel" distributed training differ from "Data Parallel"?
Data Parallelism replicates the full model on every GPU and splits the data between replicas; Model Parallelism slices the model itself across GPUs. For massive LLMs (billions of parameters) that cannot fit in a single GPU's memory, Model Parallelism is mandatory.
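A minimal sketch of the `distribution` argument you would pass to a SageMaker PyTorch Estimator to turn on the model-parallel library (parameter names follow the v1 `smdistributed` API and should be treated as illustrative, not authoritative):

```python
# Sketch: enabling SageMaker model parallelism on a PyTorch Estimator.
# Parameter names are from the v1 smdistributed API; verify against the docs.
distribution = {
    "smdistributed": {
        "modelparallel": {
            "enabled": True,
            "parameters": {
                "partitions": 4,     # slice the model across 4 GPUs
                "microbatches": 8,   # pipeline micro-batching to keep GPUs busy
            },
        }
    },
    "mpi": {"enabled": True, "processes_per_host": 8},
}
# estimator = PyTorch(..., distribution=distribution)
```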
You need to run a GPU-based training job but want massive cost savings. You can tolerate interruptions. What is the best strategy?
Use Managed Spot Training with checkpointing to S3: if the Spot instance is reclaimed, you lose only the progress since the last checkpoint, not the whole job.
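The resume logic inside the training script can be sketched as follows. SageMaker syncs a local checkpoint directory (conventionally `/opt/ml/checkpoints`) with the `checkpoint_s3_uri` you configure; the sketch uses a temp directory so it runs anywhere:

```python
import json
import os
import tempfile

# Sketch of checkpoint-resume logic for a Spot training script. In a real
# job you would write to /opt/ml/checkpoints, which SageMaker syncs to
# checkpoint_s3_uri; a temp dir stands in for it here.
CHECKPOINT_DIR = tempfile.mkdtemp()

def save_checkpoint(epoch, weights):
    with open(os.path.join(CHECKPOINT_DIR, "ckpt.json"), "w") as f:
        json.dump({"epoch": epoch, "weights": weights}, f)

def load_checkpoint():
    path = os.path.join(CHECKPOINT_DIR, "ckpt.json")
    if not os.path.exists(path):
        return 0, None  # fresh start: no checkpoint yet
    with open(path) as f:
        state = json.load(f)
    return state["epoch"] + 1, state["weights"]  # resume after last saved epoch

# Simulate: train 3 epochs (saving each), then "restart" and resume.
start, _ = load_checkpoint()
for epoch in range(start, 3):
    save_checkpoint(epoch, weights=[epoch * 0.1])

resumed_epoch, _ = load_checkpoint()  # resumes at epoch 3, not epoch 0
```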
What is the use case for "SageMaker Edge Manager"?
It extends SageMaker's management capabilities to devices outside the AWS cloud.
How do you handle "Training-Serving Skew" where preprocessing logic drifts between Python training scripts and Java inference apps?
Package the preprocessing logic once as a consistent artifact (e.g., a container reused as the first step of a serial inference pipeline); a shared artifact guarantees the logic is identical regardless of the language of the surrounding application.
What is "SageMaker Autopilot"?
Autopilot generates the notebooks used to create the model, allowing you to inspect and modify them ("White Box").
How do you optimize cost for an endpoint that has spiky traffic (idle at night, busy during day)?
Serverless Inference scales to zero when idle, making it perfect for intermittent traffic.
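A sketch of the endpoint configuration (boto3 `create_endpoint_config` request body; the model name is a placeholder). The presence of `ServerlessConfig` on the variant, with no instance type, is what makes the endpoint serverless:

```python
# Sketch of a serverless endpoint config. With no provisioned capacity,
# the endpoint scales to zero between requests. ModelName is a placeholder.
endpoint_config = {
    "EndpointConfigName": "spiky-traffic-config",
    "ProductionVariants": [
        {
            "VariantName": "AllTraffic",
            "ModelName": "my-model",
            "ServerlessConfig": {
                "MemorySizeInMB": 2048,  # memory per worker
                "MaxConcurrency": 20,    # cap on concurrent invocations
            },
        }
    ],
}
# sagemaker_client.create_endpoint_config(**endpoint_config)
```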
What is "Neo" compilation?
Neo compiles a trained model into an optimized binary for a specific hardware target, allowing complex models to run efficiently on constrained edge devices.
How can you define a dependency between steps in a SageMaker Pipeline (e.g., "Only register if evaluation > 80%")?
The ConditionStep evaluates the output of the ProcessingStep (Evaluation) and decides whether to proceed to RegisterModel.
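In the Pipelines SDK this is wired with `ConditionStep`, `ConditionGreaterThanOrEqualTo`, and `JsonGet` against the evaluation report. The gating logic those classes express boils down to the following stdlib sketch (the report layout is an assumption):

```python
import json

# Stdlib sketch of what a SageMaker Pipelines ConditionStep expresses:
# read the evaluation step's JSON report, compare a metric to a threshold,
# and gate the RegisterModel step on the result. The report structure
# below is illustrative, not a fixed SageMaker schema.
def should_register(evaluation_report: str, threshold: float = 0.80) -> bool:
    metrics = json.loads(evaluation_report)
    return metrics["binary_classification_metrics"]["accuracy"]["value"] >= threshold

report = json.dumps(
    {"binary_classification_metrics": {"accuracy": {"value": 0.87}}}
)
decision = should_register(report)  # True -> proceed to RegisterModel
```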
What is the "SageMaker Role" requirement for accessing data in S3 encrypted with a custom KMS key?
S3 permissions allow reading the file (blob), but KMS permissions are required to decrypt the blob.
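The execution role therefore needs two statements, sketched below as an IAM policy document (bucket name, key ARN, and account ID are placeholders):

```python
# Sketch of the two permission sets the SageMaker execution role needs to
# read SSE-KMS encrypted objects. All resource ARNs are placeholders.
role_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # 1. Read the encrypted blob from S3
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
        },
        {   # 2. Decrypt it with the customer-managed KMS key
            "Effect": "Allow",
            "Action": ["kms:Decrypt", "kms:DescribeKey"],
            "Resource": "arn:aws:kms:us-east-1:123456789012:key/<key-id>",
        },
    ],
}
```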
How do you monitor "Feature Importance" drift?
If a model suddenly starts relying 100% on "ZipCode" instead of "Income", that's a sign of bias or drift.
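A toy sketch of the comparison idea: normalize feature attributions from training (baseline) and from live traffic, then flag when the total reallocation exceeds a threshold. (SageMaker Clarify's explainability monitor computes SHAP-based attributions for you; only the drift comparison is shown here.)

```python
# Toy sketch of attribution drift detection: compare normalized feature
# importances and alarm when the total shift crosses a threshold.
def attribution_shift(baseline: dict, current: dict) -> float:
    # Half the L1 distance: 0.0 = identical, 1.0 = fully reallocated.
    features = set(baseline) | set(current)
    return sum(abs(baseline.get(f, 0.0) - current.get(f, 0.0)) for f in features) / 2

baseline = {"Income": 0.6, "ZipCode": 0.1, "Age": 0.3}
current  = {"Income": 0.1, "ZipCode": 0.7, "Age": 0.2}

shift = attribution_shift(baseline, current)  # 0.6: importance moved to ZipCode
drifted = shift > 0.3                         # True -> raise an alarm
```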
How is "Pipe Mode" implemented under the hood?
Data is streamed from S3 directly into the training container via named pipes, rather than downloaded to disk first. This allows processing datasets much larger than the disk space of the training instance.
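A sketch of a Pipe-mode input channel (boto3 `create_training_job`, `InputDataConfig`; the S3 URI is a placeholder). The training script then reads from a FIFO such as `/opt/ml/input/data/training_0` instead of local files:

```python
# Sketch of a Pipe-mode channel: the only change from File mode is
# InputMode, but the data never lands on the instance's disk.
pipe_channel = {
    "ChannelName": "training",
    "InputMode": "Pipe",  # stream via FIFO instead of downloading ("File")
    "DataSource": {
        "S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/train/",  # placeholder
            "S3DataDistributionType": "FullyReplicated",
        }
    },
}
```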
How do you implement "Warm Pools" for SageMaker Training?
Set KeepAlivePeriodInSeconds in the training job's ResourceConfig; the provisioned infrastructure stays alive after the job ends, so subsequent matching jobs skip instance startup. Warm pools are great for iterative experimentation where you re-run training frequently.
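Sketched as the ResourceConfig block of a boto3 `create_training_job` request:

```python
# Sketch: keeping a training cluster warm between jobs. Subsequent jobs
# with a matching configuration reuse the still-provisioned instances.
resource_config = {
    "InstanceType": "ml.g5.xlarge",
    "InstanceCount": 1,
    "VolumeSizeInGB": 50,
    "KeepAlivePeriodInSeconds": 1800,  # keep instances alive 30 min after the job
}
# sagemaker_client.create_training_job(..., ResourceConfig=resource_config)
```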
What is the "Asynchronous Inference" endpoint type suitable for?
Async inference uses an internal queue, protecting the endpoint from bursts and allowing long runtimes.
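A sketch of the endpoint configuration that makes an endpoint asynchronous: the `AsyncInferenceConfig` block routes results to S3 and (optionally) notifies SNS topics. Bucket paths, ARNs, and the model name are placeholders:

```python
# Sketch of an async inference endpoint config: requests are queued,
# results land in S3, SNS topics signal success/failure.
async_endpoint_config = {
    "EndpointConfigName": "async-config",
    "ProductionVariants": [
        {
            "VariantName": "AllTraffic",
            "ModelName": "my-model",  # placeholder
            "InstanceType": "ml.m5.xlarge",
            "InitialInstanceCount": 1,
        }
    ],
    "AsyncInferenceConfig": {
        "OutputConfig": {
            "S3OutputPath": "s3://my-bucket/async-results/",
            "NotificationConfig": {
                "SuccessTopic": "arn:aws:sns:us-east-1:123456789012:ok",
                "ErrorTopic": "arn:aws:sns:us-east-1:123456789012:fail",
            },
        },
        "ClientConfig": {"MaxConcurrentInvocationsPerInstance": 4},
    },
}
# sagemaker_client.create_endpoint_config(**async_endpoint_config)
```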
How do you customize the container image used for training?
BYOC (Bring Your Own Container) gives you full control over the OS, libraries, and runtime.
What is "SageMaker Hyperparameter Tuning" (HPO)?
It treats tuning as a regression problem: with the Bayesian strategy, a surrogate model learns how hyperparameter values affect the objective metric and picks the next combination to try, finding the optimal set efficiently instead of searching blindly.
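Sketched as the tuning-job config block of a boto3 `create_hyper_parameter_tuning_job` request (ranges and the metric name are placeholders for an XGBoost-style job):

```python
# Sketch of a Bayesian tuning job config. Fewer parallel jobs generally
# help the Bayesian strategy, since each trial informs the next pick.
tuning_config = {
    "Strategy": "Bayesian",
    "HyperParameterTuningJobObjective": {
        "Type": "Maximize",
        "MetricName": "validation:auc",  # placeholder metric
    },
    "ResourceLimits": {
        "MaxNumberOfTrainingJobs": 20,  # total trials
        "MaxParallelTrainingJobs": 2,
    },
    "ParameterRanges": {
        "ContinuousParameterRanges": [
            {"Name": "eta", "MinValue": "0.01", "MaxValue": "0.3"},
        ],
        "IntegerParameterRanges": [
            {"Name": "max_depth", "MinValue": "3", "MaxValue": "10"},
        ],
    },
}
```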
How do you ensure data privacy when using Amazon Bedrock?
Your prompts and completions are not used to train the underlying foundation models and are not shared with third-party model providers; traffic can stay off the public internet via VPC endpoints (AWS PrivateLink), and data is encrypted in transit and at rest. Bedrock is designed for enterprise usage where data privacy is paramount.
What is "Inference Recommendation" load test based on?
It spins up the actual instances and bombards them with requests to measure latency and throughput.
How do you update a running Endpoint without downtime?
Update the endpoint with a new endpoint configuration, which triggers a blue/green deployment: SageMaker provisions the new instances, verifies they are healthy, shifts traffic, and only then terminates the old fleet.
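A sketch of the `DeploymentConfig` you can pass to boto3's `update_endpoint` to control the shift, here a 10% canary with automatic rollback on a CloudWatch alarm (endpoint, config, and alarm names are placeholders):

```python
# Sketch: zero-downtime endpoint update with a canary blue/green policy
# and auto-rollback. All names below are placeholders.
deployment_config = {
    "BlueGreenUpdatePolicy": {
        "TrafficRoutingConfiguration": {
            "Type": "CANARY",
            "CanarySize": {"Type": "CAPACITY_PERCENT", "Value": 10},
            "WaitIntervalInSeconds": 300,  # bake time before the full shift
        },
        "TerminationWaitInSeconds": 120,   # keep the old fleet briefly for rollback
    },
    "AutoRollbackConfiguration": {
        "Alarms": [{"AlarmName": "endpoint-5xx-alarm"}]
    },
}
# sagemaker_client.update_endpoint(
#     EndpointName="prod-endpoint",
#     EndpointConfigName="prod-config-v2",
#     DeploymentConfig=deployment_config,
# )
```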