AWS Certified AI Practitioner AIF-C01 Q41-Q50


41. An AI practitioner is using a large language model (LLM) to create content for marketing campaigns. The generated content sounds plausible and factual but is incorrect.
Which problem is the LLM having?

A. Data leakage
B. Hallucination
C. Overfitting
D. Underfitting

Answer

B


42. A company has built a solution by using generative AI. The solution uses large language models (LLMs) to translate training manuals from English into other languages. The company wants to evaluate the accuracy of the solution by examining the text generated for the manuals.
Which model evaluation strategy meets these requirements?

A. Bilingual Evaluation Understudy (BLEU)
B. Root mean squared error (RMSE)
C. Recall-Oriented Understudy for Gisting Evaluation (ROUGE)
D. F1 score

Answer

A
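
Why BLEU fits here: it scores machine translation by n-gram overlap between the generated text and a reference translation. The sketch below is a minimal pure-Python sentence-level BLEU with ad-hoc smoothing for illustration only; production work should use an established implementation such as NLTK or SacreBLEU, whose exact formulas differ slightly.

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Minimal sentence-level BLEU sketch: geometric mean of modified
    n-gram precisions, times a brevity penalty. Ad-hoc smoothing via
    a tiny floor on zero precisions (not the official smoothing)."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(candidate[i:i + n])
                              for i in range(len(candidate) - n + 1))
        ref_ngrams = Counter(tuple(reference[i:i + n])
                             for i in range(len(reference) - n + 1))
        # Modified precision: clip each n-gram count by the reference count.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1 - len(reference) / len(candidate))
    return bp * geo_mean

reference = "the training manual explains the installation steps".split()
candidate = "the training manual describes the installation steps".split()
print(round(bleu(candidate, reference), 3))
```

A perfect match scores 1.0; the closer the translation's n-grams track the reference, the higher the score.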


43. A large retailer receives thousands of customer support inquiries about products every day. The customer support inquiries need to be processed and responded to quickly. The company wants to implement Agents for Amazon Bedrock.
What are the key benefits of using Amazon Bedrock agents that could help this retailer?

A. Generation of custom foundation models (FMs) to predict customer needs
B. Automation of repetitive tasks and orchestration of complex workflows
C. Automatically calling multiple foundation models (FMs) and consolidating the results
D. Selecting the foundation model (FM) based on predefined criteria and metrics

Answer

B


44. Which option is a benefit of ongoing pre-training when fine-tuning a foundation model (FM)?

A. Helps decrease the model’s complexity
B. Improves model performance over time
C. Decreases the training time requirement
D. Optimizes model inference time

Answer

B


45. What are tokens in the context of generative AI models?

A. Tokens are the basic units of input and output that a generative AI model operates on, representing words, subwords, or other linguistic units.
B. Tokens are the mathematical representations of words or concepts used in generative AI models.
C. Tokens are the pre-trained weights of a generative AI model that are fine-tuned for specific tasks.
D. Tokens are the specific prompts or instructions given to a generative AI model to generate output.

Answer

A
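
To see what "words, subwords, or other linguistic units" means in practice, here is a toy greedy longest-match splitter over a hypothetical six-entry vocabulary. It is a simplified illustration of the idea, not a real BPE or WordPiece tokenizer; real tokenizers also map each piece to an integer ID.

```python
# Hypothetical vocabulary: the word "generative" is not in it,
# so it gets broken into the known subwords "gener" + "ative".
VOCAB = {"gener", "ative", "ai", "model", "s", " "}

def tokenize(text):
    """Greedy longest-match tokenization against a fixed vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            raise ValueError(f"no token covers {text[i]!r}")
    return tokens

print(tokenize("generative ai models"))
# ['gener', 'ative', ' ', 'ai', ' ', 'model', 's']
```

Note that one word can become several tokens, which is why token counts (and therefore costs and context limits) rarely equal word counts.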


46. A company wants to assess the costs that are associated with using a large language model (LLM) to generate inferences. The company wants to use Amazon Bedrock to build generative AI applications.
Which factor will drive the inference costs?

A. Number of tokens consumed
B. Temperature value
C. Amount of data used to train the LLM
D. Total training time

Answer

A
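
On-demand Bedrock pricing bills per input token and per output token, so cost scales with tokens consumed, not with the model's training data or training time. A sketch of the arithmetic, using made-up prices; real rates vary by model and region, so check the Amazon Bedrock pricing page:

```python
# Hypothetical per-1,000-token prices (USD) -- NOT real Bedrock rates.
INPUT_PRICE_PER_1K = 0.003
OUTPUT_PRICE_PER_1K = 0.015

def inference_cost(input_tokens, output_tokens):
    """On-demand inference cost is driven by tokens consumed:
    input and output tokens are metered at separate rates."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K \
        + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# e.g. a 2,000-token prompt producing a 500-token completion:
print(f"${inference_cost(2000, 500):.4f}")  # $0.0135
```

Temperature (option B) changes sampling randomness, not price, and training-side factors (options C and D) are sunk into the provider's model, not your inference bill.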


47. A company is using Amazon SageMaker Studio notebooks to build and train ML models. The company stores the data in an Amazon S3 bucket. The company needs to manage the flow of data from Amazon S3 to SageMaker Studio notebooks.
Which solution will meet this requirement?

A. Use Amazon Inspector to monitor SageMaker Studio.
B. Use Amazon Macie to monitor SageMaker Studio.
C. Configure SageMaker to use a VPC with an S3 endpoint.
D. Configure SageMaker to use S3 Glacier Deep Archive.

Answer

C


48. A company has a foundation model (FM) that was customized by using Amazon Bedrock to answer customer queries about products. The company wants to validate the model’s responses to new types of queries. The company needs to upload a new dataset that Amazon Bedrock can use for validation.
Which AWS service meets these requirements?

A. Amazon S3
B. Amazon Elastic Block Store (Amazon EBS)
C. Amazon Elastic File System (Amazon EFS)
D. AWS Snowcone

Answer

A


49. Which prompting attack directly exposes the configured behavior of a large language model (LLM)?

A. Prompted persona switches
B. Exploiting friendliness and trust
C. Ignoring the prompt template
D. Extracting the prompt template

Answer

D


50. A social media company wants to use a large language model (LLM) to summarize messages. The company has chosen a few LLMs that are available on Amazon SageMaker JumpStart. The company wants to compare the generated output toxicity of these models.
Which strategy gives the company the ability to evaluate the LLMs with the LEAST operational overhead?

A. Crowd-sourced evaluation
B. Automatic model evaluation
C. Model evaluation with human workers
D. Reinforcement learning from human feedback (RLHF)

Answer

B

