# AWS GenAI Engineer - Intermediate Quiz

โ† Back to Quiz Home


This quiz covers the mechanics of Retrieval-Augmented Generation (RAG), including embeddings, chunking strategies, and Agents for Amazon Bedrock.


## What is "Chunking" in the context of Knowledge Bases?

## Which chunking strategy splits text where the meaning changes (e.g., between distinct topics) rather than just by token count?
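
The strategy this question points at is often illustrated as "split where the embeddings of adjacent sentences stop agreeing". The sketch below assumes a caller-supplied `embed` function (for example, a Titan Embeddings call) and an arbitrary similarity threshold; it is a conceptual sketch, not the algorithm Bedrock Knowledge Bases uses internally.

```python
from math import sqrt

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def semantic_chunks(sentences: list[str], embed, threshold: float = 0.75) -> list[str]:
    """Start a new chunk whenever two adjacent sentences drift apart in meaning."""
    chunks, current = [], [sentences[0]]
    prev_vec = embed(sentences[0])
    for sentence in sentences[1:]:
        vec = embed(sentence)
        if cosine(prev_vec, vec) < threshold:   # likely topic boundary
            chunks.append(" ".join(current))
            current = []
        current.append(sentence)
        prev_vec = vec
    chunks.append(" ".join(current))
    return chunks
```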

## What are Agents for Amazon Bedrock?

## How does an Agent know which external API to call?

## What is "Chain-of-Thought" (CoT) prompting?

## Which metric evaluates whether the RAG answer is derived only from the retrieved context (preventing hallucinations)?

## What is "Hybrid Search"?

## What is "Hierarchical Chunking"?

## Which AWS service provides the "Thought Trace" (CoT) logs for Bedrock Agents?
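
The question is about where the agent's reasoning trace surfaces; as related context, the trace can also be requested programmatically when invoking an agent. The sketch below uses boto3's bedrock-agent-runtime client; the IDs are placeholders, and the exact shape of the streamed events should be checked against the current API reference.

```python
import uuid
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime")

response = agent_runtime.invoke_agent(
    agentId="AGENT_ID",                 # placeholder
    agentAliasId="AGENT_ALIAS_ID",      # placeholder
    sessionId=str(uuid.uuid4()),
    inputText="What is the order status for customer 42?",
    enableTrace=True,                   # ask for the agent's reasoning trace
)

for event in response["completion"]:    # event stream of answer chunks and trace parts
    if "trace" in event:
        print(event["trace"])           # the step-by-step "thought" trace
    elif "chunk" in event:
        print(event["chunk"]["bytes"].decode())
```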

## When would you use Provisioned Throughput in Bedrock?

## What is the role of an "Action Group" in Bedrock Agents?
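
Context for this and the earlier "which API to call" question: an action group bundles an API description (an OpenAPI schema or function definitions) with the Lambda function that executes it, and the agent matches the user's intent to one of the described operations. The schema below is an illustrative example with made-up names.

```python
import json

order_status_api = {
    "openapi": "3.0.0",
    "info": {"title": "Order API", "version": "1.0.0"},
    "paths": {
        "/orders/{orderId}/status": {
            "get": {
                "operationId": "getOrderStatus",
                "description": "Return the shipping status of a single order.",
                "parameters": [{
                    "name": "orderId",
                    "in": "path",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {"200": {"description": "Current order status"}},
            }
        }
    },
}

# The operation descriptions are what the agent reasons over when deciding
# which call satisfies a user request.
print(json.dumps(order_status_api, indent=2))
```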

## What are "Embeddings" in GenAI?

## Which component is responsible for retrieving relevant documents in a RAG system?
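
As a concrete anchor for this question: the component that fetches relevant passages is the retriever, and in Bedrock it can be exercised directly through the Knowledge Base Retrieve API. The sketch below uses boto3's bedrock-agent-runtime client; the knowledge base ID is a placeholder and the field names are assumptions to double-check.

```python
import boto3

kb_runtime = boto3.client("bedrock-agent-runtime")

results = kb_runtime.retrieve(
    knowledgeBaseId="KB_ID",                        # placeholder
    retrievalQuery={"text": "What is our refund policy?"},
    retrievalConfiguration={
        "vectorSearchConfiguration": {"numberOfResults": 3}
    },
)

# Each hit is a chunk pulled from the vector index, ready to be placed
# into the prompt alongside the user's question.
for hit in results["retrievalResults"]:
    print(round(hit.get("score", 0.0), 3), hit["content"]["text"][:120])
```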

## How do you handle a user request that requires data from a private SQL database using Bedrock?
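
The pattern usually cited for this scenario is an agent action group backed by a Lambda function that runs the SQL query inside the VPC and returns the rows to the agent. The handler below is a hedged sketch: `run_query` is a hypothetical placeholder for your database client, and the event/response field names follow the agent-to-Lambda contract as commonly documented, so verify them against the current docs.

```python
import json

def run_query(sql: str, params: tuple):
    """Hypothetical placeholder for a real database call
    (e.g. psycopg2 inside the VPC, or the RDS Data API)."""
    raise NotImplementedError

def lambda_handler(event, context):
    # Pull the parameter the agent extracted from the user's request.
    order_id = next(
        p["value"] for p in event.get("parameters", []) if p["name"] == "orderId"
    )
    rows = run_query("SELECT status FROM orders WHERE id = %s", (order_id,))

    # Shape of the reply expected back from an OpenAPI-based action group.
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event["actionGroup"],
            "apiPath": event["apiPath"],
            "httpMethod": event["httpMethod"],
            "httpStatusCode": 200,
            "responseBody": {
                "application/json": {"body": json.dumps({"status": rows})}
            },
        },
    }
```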

## What is "Context Precision" in RAG evaluation?

## What is "Continued Pre-training"?

## Which AWS service would you use to store the Vector Index for a Knowledge Base if you want a serverless experience?

## What is the "Context Window" limit for Claude 3 Opus?

## What does "Answer Relevance" measure?

## How can an Agent handle ambiguous user requests?
