OCI 2023 AI Foundations Associate (1Z0-1122-23) S5

Oracle Cloud Infrastructure 2023 AI Foundations Associate (1Z0-1122-23)

 

 

  1. What is “in-context learning” in the context of large language models (LLMs)?
  • Providing a few examples of a target task via the input prompt (*)
  • Teaching the model through zero-shot learning
  • Modifying the behavior of a pretrained LLM permanently
  • Training a model on a diverse range of tasks

Correct Option: Providing a few examples of a target task via the input prompt.

In-context learning refers to the capability of generative large language models (LLMs) to learn and perform new tasks without further training or fine-tuning. Instead of modifying the model permanently, users can guide the model’s behavior by providing a few examples of the target task through the input prompt. This is particularly useful when direct access to the model is limited, such as when using it through an API or user interface.
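As a minimal illustration of the idea, the sketch below builds a few-shot prompt in Python. The reviews, labels, and wording are made-up examples, and the actual API call is omitted since it depends on the provider; the point is that the task is conveyed entirely through the prompt, with no change to the model's weights.

```python
# Minimal sketch of in-context (few-shot) learning: the task is taught
# entirely through examples embedded in the prompt; no weights change.
# The prompt could be sent to any hosted LLM API (the call itself is
# omitted here because it depends on the provider).

few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day and the screen is gorgeous."
Sentiment: Positive

Review: "It stopped working after a week and support never replied."
Sentiment: Negative

Review: "Setup was painless and it runs whisper-quiet."
Sentiment:"""

# The model completes the prompt (expected: "Positive") by imitating the
# in-prompt examples -- no fine-tuning or retraining is involved.
print(few_shot_prompt)
```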

 

2. Sequence models are used to solve problems involving sequentially ordered data points or events. Which is NOT a suitable use case for sequence models?

  • Image classification and object recognition (*)
  • Speech recognition and language translation
  • Time series analysis and forecasting
  • Natural language processing tasks such as sentiment analysis

Correct Option: Image classification and object recognition

Sequence models are indeed well-suited for tasks involving sequentially ordered data points or events, such as time series analysis, natural language processing, speech recognition, and language translation. However, for image classification and object recognition, traditional machine learning models and convolutional neural networks (CNNs) are more commonly used.
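For contrast, here is a minimal Keras sketch of a sequence model built for ordered data next to a CNN built for images. The layer sizes, vocabulary size, sequence length, and class counts are arbitrary assumptions chosen only to make the example self-contained.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Sequence model: an LSTM over ordered tokens (e.g., sentiment analysis).
# Each timestep depends on what came before, which is exactly the structure
# sequence models are built to exploit.
sequence_model = tf.keras.Sequential([
    layers.Embedding(input_dim=10_000, output_dim=64),  # token ids -> vectors
    layers.LSTM(64),                                     # reads tokens in order
    layers.Dense(1, activation="sigmoid"),               # positive / negative
])

# Image model: a small CNN (e.g., object recognition). Convolutions exploit
# 2-D spatial locality rather than temporal order, which is why CNNs, not
# sequence models, are the usual choice for image classification.
image_model = tf.keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),              # 10 object classes
])

sequence_model.build(input_shape=(None, 100))  # e.g., sequences of 100 tokens
sequence_model.summary()
image_model.summary()
```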

 

3. Which aspect of Large Language Models significantly impacts their capabilities, performance, and resource requirements?

  • Total number of GPUs used for model training
  • Complexity of the programming languages used for model development
  • Model size and parameters, including the number of tokens and weights (*)
  • Number of training iterations performed during model training

Correct Option: Model size and parameters, including the number of tokens and weights.

The size and complexity of a language model, including the number of parameters (weights) and tokens, have a profound impact on its capabilities and performance. Larger models with more parameters tend to have a better understanding of language and can generate more coherent and contextually relevant text. However, larger models require substantial computational resources, including GPUs and memory, for both training and inference.
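As a rough illustration of why size drives resource needs, the back-of-the-envelope sketch below estimates the memory required just to hold model weights in fp16. The parameter counts are illustrative assumptions, not figures for any particular model.

```python
# Back-of-the-envelope estimate of how parameter count drives resource needs.
# The parameter counts below are illustrative assumptions only.

def inference_memory_gb(num_parameters: int, bytes_per_param: int = 2) -> float:
    """Approximate memory just to hold the weights (fp16 = 2 bytes/param);
    activations, KV cache, and optimizer state would add more on top."""
    return num_parameters * bytes_per_param / 1e9

for name, params in [("small (1B params)", 1e9),
                     ("medium (7B params)", 7e9),
                     ("large (70B params)", 70e9)]:
    print(f"{name}: ~{inference_memory_gb(int(params)):.0f} GB of weights in fp16")
```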

 

4. Fine-tuning is unnecessary for Large Language Models (LLMs) if your application does not involve which specific aspect?

  • Domain vocabulary
  • Efficiency & resource utilization
  • Bias mitigation
  • Task-specific adaptation (*)

 

Correct Option: Task-specific adaptation

Fine-tuning of Large Language Models (LLMs) is primarily performed to adapt the model to specific tasks or domains. If your application doesn’t require task-specific adaptation, then fine-tuning may not be necessary. Fine-tuning can also be used to improve the efficiency and resource utilization of LLMs, adapt the model to domain-specific vocabulary, and address bias-related issues.
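To show what task-specific adaptation looks like in practice, here is a minimal PyTorch sketch. The frozen "pretrained" encoder, toy data, and layer sizes are stand-in assumptions for illustration, not a real LLM: a pretrained backbone is kept frozen while a small task head is trained on labelled examples for the new task.

```python
import torch
import torch.nn as nn

# Minimal sketch of task-specific adaptation (the case where fine-tuning
# IS needed). A toy stand-in for a pretrained encoder is frozen and only
# a small task head is trained on labelled examples for the new task.

torch.manual_seed(0)

pretrained_encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # stand-in backbone
for p in pretrained_encoder.parameters():
    p.requires_grad = False  # keep the general-purpose knowledge frozen

task_head = nn.Linear(64, 2)               # new head for a 2-class downstream task

features = torch.randn(32, 128)            # toy inputs
labels = torch.randint(0, 2, (32,))        # toy task labels

optimizer = torch.optim.AdamW(task_head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    logits = task_head(pretrained_encoder(features))
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final task loss: {loss.item():.3f}")
```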

 

5. Which statement accurately describes generative AI?

 

  • Limits functions to generating only text-based outputs
  • Focuses on making accurate predictions based on training data
  • Exclusively trains to predict future data patterns
  • Creates new content without making predictions (*)

 

Correct Option: Creates new content without making predictions.

Generative AI is focused on creating new content or data rather than making predictions based on existing training data. It involves generating novel and meaningful outputs such as images, text, music, or other forms of creative content.

Source: https://oracle.com

 
