AWS GenAI Engineer - Basics Quiz
This quiz covers the fundamentals of Amazon Bedrock, Foundation Models (FMs), and basic inference parameters.
Bedrock provides serverless access to models from providers such as Anthropic, AI21 Labs, Cohere, Meta, Mistral AI, Stability AI, and Amazon.
Which Amazon Titan model is best suited for search and semantic similarity tasks?
Titan Embeddings converts text into numerical vectors to enable semantic search.
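Once text is converted into vectors, semantic similarity is typically measured with cosine similarity. A minimal sketch using toy 3-dimensional vectors (real Titan Embeddings vectors have hundreds or thousands of dimensions, e.g. 1024 or 1536 depending on the model version):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" -- in practice these come from invoking a
# Titan Embeddings model with your text.
query = [0.1, 0.9, 0.2]
doc_similar = [0.2, 0.8, 0.1]
doc_unrelated = [0.9, 0.1, 0.0]

print(cosine_similarity(query, doc_similar) > cosine_similarity(query, doc_unrelated))  # True
```

The document whose vector scores highest against the query vector is the best semantic match, even if it shares no exact keywords.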
What does the "Temperature" inference parameter control?
Low temperature (near 0.0) makes the model's output more deterministic and focused; high temperature (near 1.0) makes it more varied and creative.
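As a sketch of where temperature fits in practice: with Anthropic Claude on Bedrock it goes in the request body passed to `InvokeModel`. The model ID and schema below follow Anthropic's documented Messages API format, but verify them against the current Bedrock docs for your chosen model:

```python
import json

# Request body for Anthropic Claude on Bedrock (Messages API schema).
body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "temperature": 0.0,  # near 0.0 = deterministic, near 1.0 = creative
    "messages": [
        {"role": "user", "content": "Summarize AWS Bedrock in one sentence."}
    ],
}

payload = json.dumps(body)

# With boto3 this payload would be sent via (not executed here):
#   client = boto3.client("bedrock-runtime")
#   client.invoke_model(
#       modelId="anthropic.claude-3-haiku-20240307-v1:0",
#       body=payload,
#   )
```

Other providers on Bedrock use different body schemas, but nearly all expose a temperature knob with the same low-vs-high meaning.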
What is "RAG" (Retrieval-Augmented Generation)?
RAG grounds the LLM on your specific data without needing to fine-tune the model.
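The retrieve-then-generate flow can be sketched in a few lines. A production system would use embeddings plus a vector store (e.g. Bedrock Knowledge Bases); simple keyword overlap stands in for retrieval here, and the snippets are made up for illustration:

```python
# Minimal RAG sketch: retrieve the most relevant snippet, then ground
# the prompt on it before sending it to the model.
DOCS = [
    "Provisioned Throughput reserves model capacity via Model Units.",
    "Guardrails filter PII and harmful content in inputs and outputs.",
]

def retrieve(question: str) -> str:
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(DOCS, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str) -> str:
    context = retrieve(question)
    return (
        "Use only this context to answer.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

prompt = build_prompt("How do guardrails handle sensitive data?")
```

The resulting prompt grounds the model on your data at inference time; no model weights change, which is what distinguishes RAG from fine-tuning.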
Which pricing model for Amazon Bedrock guarantees a specific level of throughput for steady-state workloads?
Provisioned Throughput requires purchasing "Model Units" for a committed term (e.g., 1 month).
What is a "Foundation Model" (FM)?
FMs are the "foundation" upon which specialized GenAI applications are built.
Which Bedrock feature allows you to block PII (Personally Identifiable Information) from reaching the model?
Guardrails provide a safety layer that checks inputs and outputs for sensitive information or harmful content.
What is the primary difference between Anthropic's Claude 3 and Amazon Titan?
Claude 3 is Anthropic's third-party model family (known for strong reasoning and long context), while Titan is Amazon's first-party family; Bedrock's "Choice of Models" lets you match the right model to your specific use case.
What does "Top-P" (Nucleus Sampling) do?
Top-P restricts generation to the smallest set of tokens whose cumulative probability reaches P, controlling output diversity in a different way than Temperature.
What is "Prompt Engineering"?
Prompt engineering is the cheapest and fastest way to improve model performance.
Which vector database is fully managed and serverless, recommended for use with Bedrock Knowledge Bases?
OpenSearch Serverless provides the vector engine needed for storing embeddings in a RAG architecture.
What is the "Context Window" of an LLM?
Claude 3 Opus, for example, has a massive 200k token context window.
Which specialized AWS chip is designed to accelerate Deep Learning inference?
Inferentia (Inf2) instances offer high performance at low cost for running GenAI models.
How can you consume a Bedrock model privately within your VPC?
VPC Endpoints ensure traffic between your application and Bedrock stays on the AWS network.
What is "Fine-Tuning" a model?
Fine-tuning updates the model's weights to better understand a niche domain or specific output style.
Approximately how many words do 1,000 tokens represent?
Roughly, 1,000 tokens is about 750 words. Pricing is often quoted per 1M tokens.
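Using that rule of thumb, a back-of-envelope cost estimate is straightforward. The per-1M-token prices below are illustrative placeholders, not current Bedrock rates:

```python
# Back-of-envelope token cost estimate.
# Prices are hypothetical -- check the current Bedrock pricing page.
PRICE_PER_1M_INPUT = 3.00    # USD per 1M input tokens (placeholder)
PRICE_PER_1M_OUTPUT = 15.00  # USD per 1M output tokens (placeholder)

def estimate_cost(words_in: int, words_out: int) -> float:
    # ~750 words per 1,000 tokens  =>  tokens ~= words / 0.75
    tokens_in = words_in / 0.75
    tokens_out = words_out / 0.75
    return (tokens_in * PRICE_PER_1M_INPUT
            + tokens_out * PRICE_PER_1M_OUTPUT) / 1_000_000

# 7,500 words in (~10k tokens), 750 words out (~1k tokens):
print(round(estimate_cost(7_500, 750), 4))  # 0.045
```

Note that output tokens are typically priced several times higher than input tokens, so verbose completions dominate the bill.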
Which model provider on Bedrock offers "Jurassic-2" models?
AI21 Labs provides the Jurassic series, known for strong natural language capabilities.
What is "Zero-Shot" prompting?
"Translate this to Spanish: Hello" is a zero-shot prompt.
Which Amazon Bedrock feature allows you to evaluate model performance?
Model Evaluation lets you use automated evaluation or human-based evaluation to compare models.
What is the "System Prompt"?
System prompts are critical for "steering" the behavior of the model securely.