OCI 2023 AI Foundations Associate (1Z0-1122-23) S1
Oracle Cloud Infrastructure 2023 AI Foundations Associate (1Z0-1122-23)
Key Points to Remember
- Price: Free
- Format: Multiple Choice and Multiple Response questions
- Duration: 60 minutes
- Number of Questions: 30
- Passing score: 60%
- Validation: This exam has been validated against the 2023 version of the “Become an OCI AI Foundations Associate” course.
- Effective as of November 1, 2022, OCI certification credentials are now valid for 2 years (previously 18 months).
Exam Tips
- Format: Multiple Choice and Multiple Response questions
- Read the question very carefully
- Identify themes and keywords in the question
- Read all the options very carefully
- You might find two choices that sound almost identical
- Identify and discard the “distractors”
- Start with eliminating obvious options that don’t apply
- Don’t get stuck in one place; mark the question for review and move on
- Always answer; in the worst case, make a guess
Q1: AI concepts and workloads
Which AI domain is associated with tasks like identifying the sentiment of a text and translating text between languages?
A. Natural Language Processing
B. Computer Vision
C. Speech Processing
D. Anomaly Detection
Explanation: The natural language processing domain of AI focuses on tasks related to understanding, processing, and generating natural language, such as sentiment analysis, translation, and text classification.
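To make the NLP idea concrete, here is a minimal sentiment-analysis sketch. The word-polarity lexicon and scores below are invented for illustration; real NLP systems learn sentiment from data rather than using a fixed word list.

```python
# Toy sentiment analysis: score a sentence against a hand-made polarity
# lexicon. The lexicon is invented for illustration only.
LEXICON = {"great": 1, "good": 1, "love": 1, "bad": -1, "terrible": -1, "hate": -1}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(LEXICON.get(w, 0) for w in words)  # sum word polarities
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))  # positive
print(sentiment("this was a terrible idea"))   # negative
```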
Q2: Machine Learning concepts
In Machine Learning, what does the term “Model training” involve?
A. Writing code for the entire program
B. Collecting and labeling data
C. Establishing a relationship between input and output parameters
D. Analyzing the accuracy of a trained model
Explanation: Model training involves building a relationship between the input features and the desired output. It’s the process of creating a model that can make predictions based on input data.
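The answer can be sketched in a few lines: training means iteratively adjusting a model parameter until the model's input/output relationship matches the data. This toy gradient-descent loop (data and learning rate are invented for illustration) learns that the relationship is y = 2x.

```python
# Minimal sketch of "model training": adjust a parameter w so that the
# model y = w * x fits observed (input, output) pairs.
data = [(1, 2.0), (2, 4.0), (3, 6.0)]  # underlying relationship: y = 2x

w = 0.0    # initial guess for the parameter
lr = 0.01  # learning rate
for _ in range(1000):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # one training step

print(round(w, 3))  # ~2.0: the learned input/output relationship
```

Note that nothing here writes the "entire program" or labels data: training is only the parameter-fitting step.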
Q3: Identify common Machine Learning types
What is the main distinction between classification and regression in supervised Machine Learning?
A. Classification predicts continuous values; regression assigns data points to categories.
B. Classification assigns data points to categories; regression predicts continuous values.
C. Classification and regression both predict continuous values.
D. Classification and regression both assign data points to categories.
Explanation: The key difference between classification and regression lies in the nature of the target variable. Classification deals with categorical outcomes and assigns data points to specific categories or classes, while regression deals with continuous numeric outcomes and predicts values within a range.
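The distinction shows up directly in code: given the same input feature, a classifier returns a category while a regressor returns a number. The data, labels, and helper names below are invented for illustration (a 1-nearest-neighbour classifier and a least-squares line fit).

```python
# Same inputs, two supervised tasks.
heights = [150, 160, 170, 180, 190]                   # input feature
labels = ["short", "short", "tall", "tall", "tall"]   # categorical target
weights = [50.0, 58.0, 66.0, 74.0, 82.0]              # continuous target

def classify(x):
    # classification: assign the category of the nearest training point
    i = min(range(len(heights)), key=lambda i: abs(heights[i] - x))
    return labels[i]

def regress(x):
    # regression: fit a least-squares line, predict a continuous value
    n = len(heights)
    mx, my = sum(heights) / n, sum(weights) / n
    slope = sum((h - mx) * (w - my) for h, w in zip(heights, weights)) / \
            sum((h - mx) ** 2 for h in heights)
    return my + slope * (x - mx)

print(classify(175))  # a category: "tall"
print(regress(175))   # a number: 70.0
```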
Q4: Deep Learning concepts
What is the primary purpose of deep learning model architectures like Convolutional Neural Networks (CNNs)?
A. Generating high-resolution images
B. Creating music compositions
C. Processing sequential data
D. Detecting patterns in images
Explanation: Convolutional Neural Networks (CNNs) are specifically designed to process and analyze visual data, such as images. They excel at detecting patterns, features, and objects within images.
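The core CNN operation that makes this pattern detection possible is convolution: sliding a small filter (kernel) over the image. This pure-Python sketch (image and kernel values invented for illustration) uses a vertical-edge kernel that responds strongly exactly where pixel intensity changes from left to right.

```python
# Convolution: slide a small kernel over an image to detect local patterns.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [  # vertical-edge detector
    [-1, 1],
    [-1, 1],
]

def convolve(img, ker):
    kh, kw = len(ker), len(ker[0])
    out = []
    for r in range(len(img) - kh + 1):
        row = []
        for c in range(len(img[0]) - kw + 1):
            # dot product of the kernel with the image patch at (r, c)
            row.append(sum(img[r + i][c + j] * ker[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

for row in convolve(image, kernel):
    print(row)  # large values mark where the vertical edge sits
```

A real CNN learns its kernel values during training instead of hand-coding them, and stacks many such layers.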
Q5: Understand fundamentals of Generative AI
What is the primary distinction between generative AI and other AI approaches like supervised learning?
A. Generative AI focuses on decision-making and optimization.
B. Generative AI aims to understand the underlying data distribution and create new examples.
C. Generative AI generates labeled outputs for training.
D. Generative AI is exclusively used for text-based applications.
Explanation: Generative AI goes beyond making predictions or decisions. It focuses on modeling the structure of the data and creating new examples that resemble the training data, allowing for the generation of new content.
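A miniature generative model makes the explanation concrete: learn the distribution of "which word follows which" from training text, then sample new sequences from it. The corpus and first-order Markov-chain approach below are invented for illustration; real generative models learn far richer distributions.

```python
import random

# Learn a tiny data distribution (word -> observed successors), then
# generate NEW text by sampling from it. Corpus invented for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

model = {}
for a, b in zip(corpus, corpus[1:]):
    model.setdefault(a, []).append(b)  # record each observed successor

def generate(start, n, seed=0):
    random.seed(seed)  # fixed seed so the sketch is reproducible
    out = [start]
    for _ in range(n):
        nxt = model.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))  # sample from the learned distribution
    return " ".join(out)

print(generate("the", 5))  # a new sentence resembling the training data
```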
Q6: Explain Large Language Model concepts
What role do tokens play in Large Language Models (LLMs)?
A. Tokens represent the numerical values of model parameters.
B. Tokens determine the size of the model’s memory.
C. Tokens are individual units into which a piece of text is divided during processing by the model.
D. Tokens are used to define the architecture of the model’s neural network.
Explanation: In the context of LLMs, tokens are the individual units into which a piece of text is divided during processing. Tokens are usually words, subwords, or characters. LLMs process and analyze these tokens to understand and generate text.
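A toy tokenizer illustrates the idea. Real LLMs use learned subword schemes such as byte-pair encoding rather than this simple word/punctuation split, but the principle is the same: text becomes a sequence of discrete units, each mapped to an integer id from a vocabulary.

```python
import re

# Toy tokenization: split text into word and punctuation tokens.
# Real LLM tokenizers use learned subword vocabularies (e.g. BPE).
def tokenize(text: str):
    return re.findall(r"\w+|[^\w\s]", text)

tokens = tokenize("LLMs process text as tokens!")
print(tokens)       # ['LLMs', 'process', 'text', 'as', 'tokens', '!']
print(len(tokens))  # 6

# Each token is then mapped to an integer id from a fixed vocabulary.
vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
print([vocab[t] for t in tokens])
```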
Q7: Explain Prompt Engineering and Fine-tuning
What is “in-context learning” in the context of large language models (LLMs)?
A. Training a model on a diverse range of tasks.
B. Modifying the behavior of a pre-trained LLM permanently.
C. Teaching the model through zero-shot learning.
D. Providing a few examples of a target task via the input prompt.
Explanation: In-context learning refers to the capability of Large Language Models (LLMs) to learn and perform new tasks without further training or fine-tuning. Instead of modifying the model permanently, users can guide the model’s behavior by providing a few examples of the target task through the input prompt. This is particularly useful when direct access to the model is limited, such as when using it through an API or user interface.
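In practice, in-context learning means assembling a few-shot prompt: worked examples of the target task are placed in the input text itself, and no model weights change. The task, example reviews, and helper function below are invented for illustration; the resulting string is what you would send to an LLM via its API.

```python
# Build a few-shot prompt for in-context learning. The examples teach the
# model the task format without any fine-tuning. All data is illustrative.
examples = [
    ("The movie was fantastic.", "positive"),
    ("I wasted two hours of my life.", "negative"),
]

def few_shot_prompt(examples, query):
    lines = ["Classify the sentiment of each review."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # the model is expected to complete the final, unanswered example
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = few_shot_prompt(examples, "Absolutely loved it.")
print(prompt)  # this string would be sent to the LLM via its API
```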
Source: oracle.com
See also:
OCI 2023 AI Foundations Associate (1Z0-1122-23) S2