Oracle Database 11g Release 2: SQL Tuning

QUESTION NO: 1

Examine the query and its execution plan:

Which statement is true regarding the execution plan?

A. This query first fetches rows from the CUSTOMERS table that satisfy the conditions, and then the join returns NULL for the CUSTOMER_ID column when it does not find any corresponding rows in the ORDERS table.

B. The query fetches rows from CUSTOMERS and ORDERS table simultaneously, and filters the rows that satisfy the conditions from the resultset.

C. The query first fetches rows from the ORDERS table that satisfy the conditions, and then the join returns NULL for the CUSTOMER_ID column when it does not find any corresponding rows in the CUSTOMERS table.

D. The query first joins rows from the CUSTOMERS and ORDERS tables and returns NULL for the ORDERS table columns when it does not find any corresponding rows in the ORDERS table, and then fetches the rows that satisfy the conditions from the result set.

Answer: A

Explanation:

QUESTION NO: 2

Which three statements are true about histograms?

A. They capture the distribution of different values in an index for better selectivity estimates.

B. They can be used only with indexed columns.

C. They provide metadata about distribution of and occurrences of values in a table column.

D. They provide improved selectivity estimates in the presence of data skew, resulting in execution plans with uniform distribution.

E. They help the optimizer in deciding whether to use an index or a full table scan.

F. They help the optimizer to determine the fastest table join order.

Answer: C,E,F

Explanation:

C: A histogram is a frequency distribution (metadata) that describes the distribution of data values within a table.

E: It is well established that histograms are very useful for helping the optimizer choose between a full scan and an index scan.

F: Histograms may help the Oracle optimizer in deciding whether to use an index vs. a full-table scan (where index values are skewed) or help the optimizer determine the fastest table join order. For determining the best table join order, the WHERE clause of the query can be inspected along with the execution plan for the original query. If the cardinality of the table is too high, then histograms on the most selective column in the WHERE clause will tip off the optimizer and change the table join order.

Note:

* The Oracle Query Optimizer uses histograms to predict better query plans. The ANALYZE command or DBMS_STATS package can be used to compute these histograms.

Incorrect:

B: Histograms are NOT just for indexed columns.

– Adding a histogram to an un-indexed column that is used in a where clause can improve performance.

D: Histogram opportunities:

– Any column used in a where clause with skewed data

– Columns that are not queried all the time

– Reduced overhead for insert, update, delete
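As an illustration of the DBMS_STATS note above, the following is a minimal sketch of explicitly requesting a histogram; the SALES table and CHANNEL_ID column are assumptions for illustration, not taken from the question:

BEGIN
  -- Gather table statistics and request a histogram (up to 254 buckets) on the skewed column.
  -- 'FOR ALL COLUMNS SIZE AUTO' would instead let Oracle decide which columns need histograms.
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'SALES',
    method_opt => 'FOR COLUMNS CHANNEL_ID SIZE 254');
END;
/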

QUESTION NO: 3

View the exhibit and examine the query and its execution plan from the PLAN_TABLE.

Which statement is true about the execution?

A. The row with the ID column having the value 0 is the first step execution plan.

B. Rows are fetched from the indexes on the PRODUCTS table and from the SALES table using full table scan simultaneously, and then hashed into memory.

C. Rows are fetched from the SALES table, and then a hash join operator joins with rows fetched from indexes on the PRODUCTS table.

D. All the partitions of the SALES table are read in parallel.

Answer: C

Explanation:

QUESTION NO: 4

Which four statements are correct about communication between parallel execution processes?

A. The number of logical pathways between parallel execution producers and consumers depends on the degree of parallelism.

B. The shared pool can be used for parallel execution message buffers.

C. The large pool can be used for parallel execution message buffers.

D. The buffer cache can be used for parallel execution message buffers.

E. Communication between parallel execution processes is never required if a query uses full partition-wise joins.

F. Each parallel execution process has an additional connection to the parallel execution coordinator.

Answer: A,B,E,F

Explanation:

A: Note that the degree of parallelism applies directly only to intra-operation parallelism. If inter-operation parallelism is possible, the total number of parallel execution servers for a statement can be twice the specified degree of parallelism. No more than two sets of parallel execution servers can run simultaneously. Each set of parallel execution servers may process multiple operations. Only two sets of parallel execution servers need to be active to guarantee optimal inter-operation parallelism.

B: By default, Oracle allocates parallel execution buffers from the shared pool.

F: When executing a parallel operation, the parallel execution coordinator obtains parallel execution servers from the pool and assigns them to the operation. If necessary, Oracle can create additional parallel execution servers for the operation. These parallel execution servers remain with the operation throughout job execution, then become available for other operations. After the statement has been processed completely, the parallel execution servers return to the pool.

References:

QUESTION NO: 5

You have enabled parallel DML by issuing: ALTER SESSION ENABLE PARALLEL DML;

The PARALLEL_DEGREE_POLICY initialization parameter is set to AUTO.

Which two options are true about DML statements for which parallel execution is requested?

A. Statements for which PDML is requested will execute serially if the estimated execution time is less than the time specified by the PARALLEL_MIN_TIME_THRESHOLD parameter.

B. Statements for which PDML is requested will be queued if the number of busy parallel execution servers is greater than the PARALLEL_MIN_SERVERS parameter.

C. Statements for which PDML is requested will always execute in parallel if the estimated execution time is greater than the time specified by the PARALLEL_MIN_TIME_THRESHOLD parameter.

D. Statements for which PDML is requested will be queued if the number of busy parallel execution servers is greater than the PARALLEL_SERVERS_TARGET parameter.

E. Statements for which PDML is requested will be queued if the number of busy parallel execution servers is greater than the PARALLEL_DEGREE_LIMIT parameter.

Answer: C,D

Explanation:

C: PARALLEL_MIN_TIME_THRESHOLD specifies the minimum execution time a statement should have before the statement is considered for automatic degree of parallelism. By default, this is set to 30 seconds. Automatic degree of parallelism is only enabled if PARALLEL_DEGREE_POLICY is set to AUTO or LIMITED.

D: PARALLEL_SERVERS_TARGET specifies the number of parallel server processes allowed to run parallel statements before statement queuing will be used. When the parameter PARALLEL_DEGREE_POLICY is set to AUTO, Oracle will queue SQL statements that require parallel execution, if the necessary parallel server processes are not available. Statement queuing will begin once the number of parallel server processes active on the system is equal to or greater than PARALLEL_SERVERS_TARGET.

Note:

* PARALLEL_DEGREE_POLICY specifies whether or not automatic degree of Parallelism, statement queuing, and in-memory parallel execution will be enabled.

AUTO

Enables automatic degree of parallelism, statement queuing, and in-memory parallel execution.

* PARALLEL_MIN_SERVERS specifies the minimum number of parallel execution processes for the instance. This value is the number of parallel execution processes Oracle creates when the instance is started.
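For reference, a minimal sketch of how the settings assumed by this question would be put in place; the statements below are illustrative, not taken from the exhibit:

-- Enable parallel DML for the current session (required before PDML can be used)
ALTER SESSION ENABLE PARALLEL DML;

-- Enable automatic degree of parallelism, statement queuing, and in-memory parallel execution
ALTER SYSTEM SET parallel_degree_policy = AUTO;

-- Statement queuing begins once active parallel servers reach this value (SQL*Plus command)
SHOW PARAMETER parallel_servers_target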

References:

QUESTION NO: 6

Examine Exhibit1 to view the query and its AUTOTRACE output.

Which two statements are true about tracing?

A. The displayed plan will be stored in PLAN_TABLE.

B. Subsequent execution of this statement will use the displayed plan that is stored in v$SQL.

C. The displayed plan may not necessarily be used by the optimizer.

D. The query will not fetch any rows; it will display only the execution plan and statistics.

E. The execution plan generated can be viewed from v$SQLAREA.

Answer: A,D

Explanation:

The PLAN_TABLE is automatically created as a public synonym to a global temporary table. This temporary table holds the output of EXPLAIN PLAN statements for all users. PLAN_TABLE is the default sample output table into which the EXPLAIN PLAN statement inserts rows describing execution plans.
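As a reference, a minimal sketch of how a plan ends up in PLAN_TABLE and how it is displayed; the EMPLOYEES query is illustrative, not the statement from the exhibit:

-- In SQL*Plus, this shows the plan without executing the query
SET AUTOTRACE TRACEONLY EXPLAIN

-- EXPLAIN PLAN writes rows describing the plan into PLAN_TABLE without running the statement
EXPLAIN PLAN SET STATEMENT_ID = 'DEMO' FOR
  SELECT * FROM employees WHERE department_id = 10;

-- DBMS_XPLAN reads PLAN_TABLE and formats the stored plan
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY(NULL, 'DEMO'));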

QUESTION NO: 7

Which two types of column filtering may benefit from partition pruning?

A. Equality operators on range-partitioned tables.

B. In-list operators on system-partitioned tables

C. Equality operators on system-partitioned tables

D. Operators on range-partitioned tables

E. Greater than operators on hash-partitioned tables

Answer: A,D

Explanation:

The query optimizer can perform pruning whenever a WHERE condition can be reduced to either one of the following two cases:

partition_column = constant

partition_column IN (constant1, constant2, ..., constantN)

In the first case, the optimizer simply evaluates the partitioning expression for the value given, determines which partition contains that value, and scans only this partition. In many cases, the equal sign can be replaced with another arithmetic comparison, including <, >, <=, >=, and <>. Some queries using BETWEEN in the WHERE clause can also take advantage of partition pruning.

Note:

* The core concept behind partition pruning is relatively simple, and can be described as “Do not scan partitions where there can be no matching values”.

When the optimizer can make use of partition pruning in performing a query, execution of the query can be an order of magnitude faster than the same query against a nonpartitioned table containing the same column definitions and data.

* Example:

Suppose that you have a partitioned table t1 defined by this statement:

CREATE TABLE t1 (

fname VARCHAR(50) NOT NULL,

lname VARCHAR(50) NOT NULL,

region_code TINYINT UNSIGNED NOT NULL,

dob DATE NOT NULL

)

PARTITION BY RANGE( region_code ) (

PARTITION p0 VALUES LESS THAN (64),

PARTITION p1 VALUES LESS THAN (128),

PARTITION p2 VALUES LESS THAN (192),

PARTITION p3 VALUES LESS THAN MAXVALUE

);

Consider the case where you wish to obtain results from a query such as this one:

SELECT fname, lname, region_code, dob

FROM t1

WHERE region_code > 125 AND region_code < 130;

It is easy to see that none of the rows which ought to be returned will be in either of the partitions p0 or p3; that is, we need to search only in partitions p1 and p2 to find matching rows. By doing so, it is possible to expend much less time and effort in finding matching rows than would be required to scan all partitions in the table. This "cutting away" of unneeded partitions is known as pruning.
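The note above uses MySQL syntax; an equivalent Oracle sketch follows. The SALES_R table is an assumption for illustration; the Pstart/Pstop columns of the displayed plan show which partitions are actually scanned:

CREATE TABLE sales_r (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p_2010 VALUES LESS THAN (DATE '2011-01-01'),
  PARTITION p_2011 VALUES LESS THAN (DATE '2012-01-01'),
  PARTITION p_max  VALUES LESS THAN (MAXVALUE)
);

-- An equality filter on the partitioning column allows the optimizer to prune
EXPLAIN PLAN FOR
  SELECT * FROM sales_r WHERE sale_date = DATE '2011-06-15';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);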

QUESTION NO: 8

Which two statements about In-Memory Parallel Execution are true?

A. It can be configured using the Database Resource Manager.

B. It increases the number of duplicate block images in the global buffer cache.

C. It requires setting PARALLEL_DEGREE_POLICY to LIMITED.

D. Objects selected for In-Memory Parallel Execution have blocks mapped to specific RAC instances.

E. It requires setting PARALLEL_DEGREE_POLICY to AUTO

F. Objects selected for In-Memory Parallel Execution must be partitioned tables or indexes.

Answer: D,E

Explanation:

D, E: In-Memory Parallel Execution

When the parameter PARALLEL_DEGREE_POLICY is set to AUTO, Oracle Database decides if an object that is accessed using parallel execution would benefit from being cached in the SGA (also called the buffer cache). The decision to cache an object is based on a well-defined set of heuristics including the size of the object and frequency on which it is accessed. In an Oracle RAC environment, Oracle Database maps pieces of the object into each of the buffer caches on the active instances. By creating this mapping, Oracle Database automatically knows which buffer cache to access to find different parts or pieces of the object. Using this information, Oracle Database prevents multiple instances from reading the same information from disk over and over again, thus maximizing the amount of memory that can cache objects. If the size of the object is larger than the size of the buffer cache (single instance) or the size of the buffer cache multiplied by the number of active instances in an Oracle RAC cluster, then the object is read using direct-path reads.

E: PARALLEL_DEGREE_POLICY specifies whether or not automatic degree of Parallelism, statement queuing, and in-memory parallel execution will be enabled.

AUTO

Enables automatic degree of parallelism, statement queuing, and in-memory parallel execution.

Incorrect:

C:

LIMITED

Enables automatic degree of parallelism for some statements but statement queuing and in-memory Parallel Execution are disabled. Automatic degree of parallelism is only applied to those statements that access tables or indexes decorated explicitly with the PARALLEL clause. Tables and indexes that have a degree of parallelism specified will use that degree of parallelism.

References:

QUESTION NO: 9

Which three are benefits of In-Memory Parallel Execution?

A. Reduction in the duplication of block images across multiple buffer caches

B. Reduction in CPU utilization

C. Reduction in the number of blocks accessed

D. Reduction in physical I/O for parallel queries

E. Ability to exploit parallel execution servers on remote instance

Answer: A,C,D

Explanation:

Note: In-Memory Parallel Execution

When the parameter PARALLEL_DEGREE_POLICY is set to AUTO, Oracle Database decides if an object that is accessed using parallel execution would benefit from being cached in the SGA (also called the buffer cache). The decision to cache an object is based on a well-defined set of heuristics including the size of the object and frequency on which it is accessed. In an Oracle RAC environment, Oracle Database maps pieces of the object into each of the buffer caches on the active instances. By creating this mapping, Oracle Database automatically knows which buffer cache to access to find different parts or pieces of the object. Using this information, Oracle Database prevents multiple instances from reading the same information from disk over and over again, thus maximizing the amount of memory that can cache objects. If the size of the object is larger than the size of the buffer cache (single instance) or the size of the buffer cache multiplied by the number of active instances in an Oracle RAC cluster, then the object is read using direct-path reads.

References:

QUESTION NO: 10

You plan to bulk load data using INSERT INTO . . . SELECT FROM statements.

Which two situations benefit from parallel INSERT operations on tables that have no materialized views defined on them?

A. Direct path insert of a million rows into a partitioned, index-organized table containing one million rows and a conventional B*tree secondary index.

B. Direct path insert of a million rows into a partitioned, index-organized table containing 10 rows and a bitmapped secondary index.

C. Direct path insert of 10 rows into a partitioned, index-organized table containing one million rows and conventional B* tree secondary index.

D. Direct path insert of 10 rows into a partitioned, index-organized table containing 10 rows and a bitmapped secondary index

E. Conventional path insert of a million rows into a nonpartitioned, heap-organized table containing 10 rows and having a conventional B* tree index.

F. Conventional path insert of 10 rows into a nonpartitioned, heap-organized table containing one million rows and a bitmapped index.

Answer: A,B

Explanation:

Note:

* A materialized view is a database object that contains the results of a query.

* You can use the INSERT statement to insert data into a table, partition, or view in two ways: conventional INSERTand direct-path INSERT.

* With direct-path INSERT, the database appends the inserted data after existing data in the table. Data is written directly into datafiles, bypassing the buffer cache. Free space in the existing data is not reused. This alternative enhances performance during insert operations and is similar to the functionality of the Oracle direct-path loader utility, SQL*Loader. When you insert into a table that has been created in parallel mode, direct-path INSERT is the default.

* Direct-path INSERT is not supported for an index-organized table (IOT) if it is not partitioned, if it has a mapping table, or if it is referenced by a materialized view.

* When you issue a conventional INSERT statement, Oracle Database reuses free space in the table into which you are inserting and maintains referential integrity constraints.

* Conventional INSERT always generates maximal redo and undo for changes to both data and metadata, regardless of the logging setting of the table and the archivelog and force logging settings of the database.
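A minimal sketch of the kind of parallel direct-path insert described in the correct options; the table names here are assumptions for illustration:

ALTER SESSION ENABLE PARALLEL DML;

-- APPEND requests direct-path insert; PARALLEL requests parallel execution.
-- Data is written above the high-water mark, bypassing the buffer cache.
INSERT /*+ APPEND PARALLEL(mysales_part, 4) */ INTO mysales_part
SELECT * FROM mysales_staging;

-- Direct-path inserted rows are not visible to the session until commit
COMMIT;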

QUESTION NO: 11

Which are the two prerequisites for enabling star transformation on queries?

A. The STAR_TRANSFORMATION_ENABLED parameter should be set to TRUE or TEMP_DISABLE.

B. A B-tree index should be built on each of the foreign key columns of the fact table(s).

C. A bitmap index should be built on each of the primary key columns of the fact table(s).

D. A bitmap index should be built on each of the foreign key columns of the fact table(s).

E. A bitmap index must exist on all the columns that are used in the filter predicates of the query.

Answer: A,E

Explanation:

A: Enabling the transformation

E: Star transformation is essentially about adding subquery predicates corresponding to the constraint dimensions. These subquery predicates are referred to as bitmap semi-join predicates. The transformation is performed when there are indexes on the fact join columns (s.timeid, s.custid...). By driving bitmap AND and OR operations (bitmaps can be from bitmap indexes or generated from regular B-Tree indexes) of the key values supplied by the subqueries, only the relevant rows from the fact table need to be retrieved. If the filters on the dimension tables filter out a lot of data, this can be much more efficient than a full table scan on the fact table. After the relevant rows have been retrieved from the fact table, they may need to be joined back to the dimension tables, using the original predicates. In some cases, the join back can be eliminated.

Star transformation is controlled by the star_transformation_enabled parameter. The parameter takes 3 values.

TRUE - The Oracle optimizer performs transformation by identifying fact and constraint dimension tables automatically. This is done in a cost-based manner, i.e. the transformation is performed only if the cost of the transformed plan is lower than the non-transformed plan. Also the optimizer will attempt temporary table transformation automatically whenever materialization improves performance.

FALSE - The transformation is not tried.

TEMP_DISABLE - This value has similar behavior as TRUE except that temporary table transformation is not tried.

The default value of the parameter is FALSE. You have to change the parameter value and create indexes on the joining columns of the fact table to take advantage of this transformation.
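A brief sketch of the two prerequisites; the SALES fact table and its foreign key columns are assumptions for illustration:

-- Prerequisite 1: enable the transformation (the default is FALSE)
ALTER SESSION SET star_transformation_enabled = TRUE;

-- Prerequisite 2: bitmap indexes on the fact table columns used to join to the dimensions
CREATE BITMAP INDEX sales_time_bix ON sales (time_id);
CREATE BITMAP INDEX sales_cust_bix ON sales (cust_id);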

References:

QUESTION NO: 12

An application accessing your database got the following error in response to a SQL query:

ORA-12827: insufficient parallel query slaves available

View the parallel parameters for your instance:

No hints are used and the session uses default parallel settings.

What four changes could you make to help avoid the error and ensure that the query executes in parallel?

A. Set PARALLEL_DEGREE_POLICY to AUTO.

B. Increase the value of PARALLEL_MAX_SERVERS.

C. Increase PARALLEL_SERVERS_TARGET.

D. Decrease PARALLEL_MIN_PERCENT.

E. Increase PARALLEL_MIN_SERVERS.

F. Decrease PARALLEL_MIN_TIME_THRESHOLD.

G. Increase PARALLEL_MIN_TIME_THRESHOLD.

Answer: A,C,D,G

Explanation:

C: PARALLEL_SERVERS_TARGET specifies the number of parallel server processes allowed to run parallel statements before statement queuing will be used. When the parameter PARALLEL_DEGREE_POLICY is set to AUTO, Oracle will queue SQL statements that require parallel execution, if the necessary parallel server processes are not available. Statement queuing will begin once the number of parallel server processes active on the system is equal to or greater than PARALLEL_SERVERS_TARGET.

By default, PARALLEL_SERVERS_TARGET is set lower than the maximum number of parallel server processes allowed on the system (PARALLEL_MAX_SERVERS) to ensure each parallel statement will get all of the parallel server resources required and to prevent overloading the system with parallel server processes.

D:

Note: ORA-12827: insufficient parallel query slaves available

Cause: PARALLEL_MIN_PERCENT parameter was specified and fewer than minimum slaves were acquired

Action: either re-execute query with lower PARALLEL_MIN_PERCENT or wait until some running queries are completed, thus freeing up slaves

A, G: PARALLEL_MIN_TIME_THRESHOLD specifies the minimum execution time a statement should have before the statement is considered for automatic degree of parallelism. By default, this is set to 30 seconds. Automatic degree of parallelism is only enabled if PARALLEL_DEGREE_POLICY is set to AUTO or LIMITED.

QUESTION NO: 13

Examine the Exhibit 1 to view the structure of and indexes for EMPLOYEES and DEPARTMENTS tables.

Which three statements are true regarding the execution plan?

A. The view operator collects all rows from a query block before they can be processed by higher operations in the plan.

B. The in-line query in the select list is processed as a view and then joined.

C. The optimizer pushes the equality predicate into the view to satisfy the join condition.

D. The optimizer chooses sort-merge join because sorting is required for the join equality predicate.

E. The optimizer chooses sort-merge join as a join method because an equality predicate is used for joining the tables.

Answer: A,B,C

Explanation:

Incorrect:

Not D, not E:

Sort-merge joins are typically used for non-equality join conditions, and there is no sort required by other operations in this SQL.

Note: The optimizer may choose a sort merge join over a hash join for joining large amounts of data when any of the following conditions is true:

* The join condition between two tables is not an equijoin, that is, uses an inequality condition such as <, <=, >, or >=.

* Because of sorts required by other operations, the optimizer finds it cheaper to use a sort merge.

QUESTION NO: 14

In your database, the CURSOR_SHARING parameter is set to EXACT. In the EMPLOYEES table, the data is significantly skewed in the DEPTNO column. The value 10 is found in 97% of the rows.

Examine the following command and output.

Which three statements are correct?

A. The DEPTNO column will become bind aware once histogram statistics are collected.

B. The value for the bind variable will be considered by the optimizer to determine the execution plan.

C. The same execution plan will always be used irrespective of the bind variable value.

D. The instance collects statistics and based on the pattern of executions creates a histogram on the column containing the bind value.

E. Bind peeking will take place only for the first execution of the statement and subsequent execution will use the same plan.

Answer: A,B,D

Explanation:

* Here we see that the cursor is marked as bind sensitive (IS_BIND_SENSITIVE is Y).

* In 11g, the optimizer has been enhanced to allow multiple execution plans to be used for a single statement that uses bind variables. This ensures that the best execution plan will be used depending on the bind value.

* (B, not C): A cursor is marked bind sensitive if the optimizer believes the optimal plan may depend on the value of the bind variable. When a cursor is marked bind sensitive, Oracle monitors the behavior of the cursor using different bind values, to determine if a different plan for different bind values is called for.

Note: Setting CURSOR_SHARING to EXACT allows SQL statements to share the SQL area only when their texts match exactly. This is the default behavior. Using this setting, similar statements cannot be shared; only textually exact statements can be shared.
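Adaptive cursor sharing can be observed through V$SQL; a minimal sketch follows, where the SQL text filter is only illustrative:

SELECT sql_id,
       child_number,
       is_bind_sensitive,   -- 'Y' once the optimizer has peeked a bind whose value may change the optimal plan
       is_bind_aware,       -- 'Y' once different plans are being maintained for different bind values
       executions
FROM   v$sql
WHERE  sql_text LIKE '%FROM employees WHERE deptno%';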

References:

QUESTION NO: 15

You created a SQL Tuning Set (STS) containing resource-intensive SQL statements. You plan to run the SQL Tuning Advisor.

Which two types of recommendations can be provided by the SQL Tuning Advisor?

A. Semantic restructuring for each SQL statement

B. Gathering missing or stale statistics at the schema level for the entire workload

C. Creating a materialized view to benefit from query rewrite for the entire workload

D. Gathering missing or stale statistics for objects used by the statements.

E. Creating a partition table to benefit from partition pruning for each statement

Answer: A,D

Explanation:

The output of the SQL Tuning Advisor is in the form of an advice or recommendations, along with a rationale for each recommendation and its expected benefit. The recommendation relates to collection of statistics on objects ( D), creation of new indexes, restructuring of the SQL statement (A), or creation of a SQL profile. You can choose to accept the recommendation to complete the tuning of the SQL statements.

Note:

* A SQL Tuning Set can be used as input to the SQL Tuning Advisor, which performs automatic tuning of the SQL statements based on other input parameters specified by the user.

* A SQL Tuning Set (STS) is a database object that includes one or more SQL statements along with their execution statistics and execution context, and could include a user priority ranking. The SQL statements can be loaded into a SQL Tuning Set from different SQL sources, such as the Automatic Workload Repository, the cursor cache, or custom SQL provided by the user.
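A sketch of driving the SQL Tuning Advisor from an STS; the tuning set and task names are assumptions for illustration:

DECLARE
  l_task VARCHAR2(64);
BEGIN
  -- Create a tuning task that takes the whole SQL Tuning Set as input
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
              sqlset_name => 'RESOURCE_INTENSIVE_STS',
              task_name   => 'sts_tuning_task',
              time_limit  => 3600);

  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
END;
/

-- Review the recommendations (statistics, indexes, SQL profiles, SQL restructuring)
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('sts_tuning_task') FROM dual;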

References:

QUESTION NO: 16

When would bind peeking be done for queries that vary only in values used in the WHERE clause?

A. When the column used in the WHERE clause has evenly distributed data and histogram exists on that column.

B. When the column used in the WHERE clause has evenly distributed data and index exists on that column.

C. When the column used in the WHERE clause has non uniform distribution of data, uses a bind variable, and no histogram exists for the column.

D. When the column used in the WHERE clause has non uniform distribution of data and histogram exists for the column.

Answer: B

Explanation:

QUESTION NO: 17

Which type of SQL statement would be selected for tuning by the automatic SQL framework?

A. Serial queries that are among the costliest in any or all of the four categories: the past week, any day in the past week, any hour in the past week, or single response, and have the potential for improvement

B. Serial queries that have been tuned within the last 30 days and have been SQL profiled by the SQL Tuning Advisor.

C. Serial and parallel queries that top the AWR Top SQL in the past week only and have been SQL profiled by the SQL Tuning Advisor.

D. Serial queries that top the AWR Top SQL in the past week only and whose poor performance can be traced to concurrency issues.

E. Serial and parallel queries that are among the costliest in any or all of the four categories: the past week, any day in the past week, any hour in the past week, or a single response, and that can benefit from access method changes.

Answer: A

Explanation:

References:

QUESTION NO: 18

Your instance has these parameter settings:

Which three statements are true about these settings if no hints are used in a SQL statement?

A. A statement estimated for more than 10 seconds always has its degree of parallelism computed automatically.

B. A statement with a computed degree of parallelism greater than 8 will be queued for a maximum of 10 seconds.

C. A statement that executes for more than 10 seconds always has its degree of parallelism computed automatically.

D. A statement with a computed degree of parallelism greater than 8 will raise an error.

E. A statement with any computed degree of parallelism will be queued if the number of busy parallel execution processes exceeds 64.

F. A statement with a computed degree of parallelism of 20 will be queued if the number of available parallel execution processes is less than 5.

Answer: C,E,F

Explanation:

C (not A): PARALLEL_MIN_TIME_THRESHOLD specifies the minimum execution time a statement should have before the statement is considered for automatic degree of parallelism. By default, this is set to 30 seconds. Automatic degree of parallelism is only enabled if PARALLEL_DEGREE_POLICY is set to AUTO or LIMITED.

PARALLEL_DEGREE_LIMIT integer

A numeric value for this parameter specifies the maximum degree of parallelism the optimizer can choose for a SQL statement when automatic degree of parallelism is active. Automatic degree of parallelism is only enabled if PARALLEL_DEGREE_POLICY is set to AUTO or LIMITED.

E: PARALLEL_SERVERS_TARGET specifies the number of parallel server processes allowed to run parallel statements before statement queuing will be used. When the parameter PARALLEL_DEGREE_POLICY is set to AUTO, Oracle will queue SQL statements that require parallel execution, if the necessary parallel server processes are not available. Statement queuing will begin once the number of parallel server processes active on the system is equal to or greater than PARALLEL_SERVERS_TARGET.

F: PARALLEL_MIN_PERCENT

PARALLEL_MIN_PERCENT operates in conjunction with PARALLEL_MAX_SERVERS and PARALLEL_MIN_SERVERS. It lets you specify the minimum percentage of parallel execution processes (of the value of PARALLEL_MAX_SERVERS) required for parallel execution. Setting this parameter ensures that parallel operations will not execute sequentially unless adequate resources are available. The default value of 0 means that no minimum percentage of processes has been set.

Consider the following settings:

PARALLEL_MIN_PERCENT = 50

PARALLEL_MIN_SERVERS = 5

PARALLEL_MAX_SERVERS = 10

If 8 of the 10 parallel execution processes are busy, only 2 processes are available. If you then request a query with a degree of parallelism of 8, the minimum 50% will not be met.

Note: With automatic degree of parallelism, Oracle automatically decides whether or not a statement should execute in parallel and what degree of parallelism the statement should use. The optimizer automatically determines the degree of parallelism for a statement based on the resource requirements of the statement. However, the optimizer will limit the degree of parallelism used to ensure parallel server processes do not flood the system. This limit is enforced by PARALLEL_DEGREE_LIMIT.

Values:

CPU

IO

integer

A numeric value for this parameter specifies the maximum degree of parallelism the optimizer can choose for a SQL statement when automatic degree of parallelism is active. Automatic degree of parallelism is only enabled if PARALLEL_DEGREE_POLICY is set to AUTO or LIMITED.

References:

QUESTION NO: 19

Exhibit

Examine the following SQL statement:

Examine the exhibit to view the execution plan.

Which statement is true about the execution plan?

A. The EXPLAIN PLAN generates the execution plan and stores it in V$SQL_PLAN after executing the query. Subsequent executions will use the same plan.

B. The EXPLAIN PLAN generates the execution plan and stores it in PLAN_TABLE without executing the query. Subsequent executions will always use the same plan.

C. The row with the ID 3 is the first step executed in the execution plan.

D. The row with the ID 0 is the first step executed in the execution plan.

E. The rows with the ID 3 and 4 are executed simultaneously.

Answer: E

Explanation:

Note the parallel other_tag values in the execution plan.

Note:

Within the Oracle plan_table, we see that Oracle keeps the parallelism in a column called other_tag. The other_tag column will tell you the type of parallel operation that is being performed within your query.

For parallel queries, it is important to display the contents of the other_tag column in the execution plan.
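A minimal sketch of pulling the other_tag values out of PLAN_TABLE after an EXPLAIN PLAN; the statement_id used here is just an assumed label:

SELECT id,
       operation,
       options,
       object_name,
       other_tag          -- e.g. PARALLEL_TO_PARALLEL, PARALLEL_TO_SERIAL, PARALLEL_COMBINED_WITH_PARENT
FROM   plan_table
WHERE  statement_id = 'PAR_QRY'
ORDER  BY id;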

QUESTION NO: 20

Which two types of SQL statements will benefit from dynamic sampling?

A. SQL statements that are executed in parallel

B. SQL statements that use a complex predicate expression when extended statistics are not available.

C. SQL statements that are resource-intensive and have the current statistics

D. SQL statements with highly selective filters on a column that has missing index statistics

E. Short-running SQL statements

Answer: A,B

Explanation:

A: The optimizer decides whether to use dynamic statistics based on several factors. For example, the database uses automatic dynamic statistics when the SQL statement uses parallel execution.

B: One scenario where DS is used is when the statement contains a complex predicate expression and extended statistics are not available. Extended statistics were introduced in Oracle Database 11g Release 1 with the goal to help the optimizer get good quality cardinality estimates for complex predicate expressions.

D: Dynamic sampling is typically used to compensate for missing or insufficient statistics that would otherwise lead to a very bad plan.
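Dynamic sampling can also be requested for a specific statement with a hint; a minimal sketch, where the table, alias, and sampling level are illustrative:

-- Level 4 samples blocks of MYSALES at parse time to estimate the selectivity
-- of the complex predicate when extended statistics are not available
SELECT /*+ dynamic_sampling(s 4) */ COUNT(*)
FROM   mysales s
WHERE  UPPER(region) = 'WEST'
AND    prod_id + 0 = 42;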

References:

QUESTION NO: 21

You are administering a database supporting an OLTP workload. A new module was added to one of the applications recently in which you notice that the SQL statements are highly resource intensive in terms of CPU, I/O and temporary space. You created a SQL Tuning Set (STS) containing all resource-intensive SQL statements. You want to analyze the entire workload captured in the STS. You plan to run the STS through the SQL Advisor.

Which two recommendations can you get?

A. Combining similar indexes into a single index

B. Implementing SQL profiles for the statements

C. Syntactic and semantic restructuring of SQL statements

D. Dropping unused or invalid indexes.

E. Creating invisible indexes for the workload

F. Creating composite indexes for the workload

Answer: C,F

Explanation:

The output of the SQL Tuning Advisor is in the form of an advice or recommendations, along with a rationale for each recommendation and its expected benefit. The recommendation relates to collection of statistics on objects , creation of new indexes (F), restructuring of the SQL statement (C), or creation of a SQL profile. You can choose to accept the recommendation to complete the tuning of the SQL statements.

References:

QUESTION NO: 22

A new application module is deployed on the middle tier and is connecting to your database. You want to monitor the performance of the SQL statements generated from the application.

To accomplish this, identify the required steps in the correct order from the steps given below:

1. Use DBMS_APPLICATION_INFO to set the name of the module

2. Use DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE to enable statistics gathering for the module.

3. Use DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE to enable tracing for the service

4. Use the trcsess utility to consolidate the trace files generated.

5. Use the tkprof utility to convert the trace files into formatted output.

A. 1, 2, 3, 4, 5

B. 2, 3, 1, 4, 5

C. 3, 1, 2, 4, 5

D. 1, 2, 4, 5

E. 1, 3, 4, 5

F. 2, 1, 4, 5

Answer: A

Explanation:

Note:

* Before tracing can be enabled, the environment must first be configured to enable gathering of statistics.

* (gather statistics): DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE

Enables statistic gathering for a given combination of Service Name, MODULE and ACTION

* DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE

Enables SQL tracing for a given combination of Service Name, MODULE and ACTION globally unless an instance_name is specified.

dbms_monitor.serv_mod_act_trace_enable(

service_name IN VARCHAR2,

module_name IN VARCHAR2 DEFAULT ANY_MODULE,

action_name IN VARCHAR2 DEFAULT ANY_ACTION,

waits IN BOOLEAN DEFAULT TRUE,

binds IN BOOLEAN DEFAULT FALSE,

instance_name IN VARCHAR2 DEFAULT NULL,

plan_stat IN VARCHAR2 DEFAULT NULL);

SELECT instance_name

FROM gv$instance;

exec dbms_monitor.serv_mod_act_trace_enable('TESTSERV', dbms_monitor.all_modules, dbms_monitor.all_actions, TRUE, TRUE, 'orabase');

exec dbms_monitor.serv_mod_act_trace_disable('TESTSERV', dbms_monitor.all_modules, dbms_monitor.all_actions, 'orabase');

* When solving tuning problems, session traces are very useful and offer vital information. Traces are simple and straightforward for dedicated server sessions, but for shared server sessions, many processes are involved. The trace pertaining to the user session is scattered across different trace files belonging to different processes. This makes it difficult to get a complete picture of the life cycle of a session.

Now there is a new tool, a command line utility called trcsess to help read the trace files. The trcsess command-line utility consolidates trace information from selected trace files, based on specified criteria. The criteria include session id, client id, service name, action name and module name.

* Once the trace files have been consolidated (with trcsess), tkprof can be run against the consolidated trace file for reporting purposes.
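Putting the five steps together as a sketch; the service, module, and file names are assumptions for illustration:

-- Step 1: the application identifies itself (set from the application code)
EXEC DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'ORDER_ENTRY', action_name => 'NEW_ORDER');

-- Step 2: enable statistics gathering for the service/module
EXEC DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE(service_name => 'OLTP_SRV', module_name => 'ORDER_ENTRY');

-- Step 3: enable tracing for the service/module
EXEC DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(service_name => 'OLTP_SRV', module_name => 'ORDER_ENTRY');

-- Step 4 (operating system prompt): consolidate the scattered trace files
trcsess output=order_entry.trc service=OLTP_SRV module=ORDER_ENTRY *.trc

-- Step 5 (operating system prompt): format the consolidated trace file
tkprof order_entry.trc order_entry.prf sys=no sort=prsela,exeela,fchela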

QUESTION NO: 23

One of your databases supports a mixed workload.

When monitoring SQL performance, you detect many direct path reads from full table scans.

What are the two possible causes?

A. Histograms statistics not available

B. Highly selective filter on indexed columns

C. Too many sort operations performed by queries

D. Indexes not built on filter columns

E. Too many similar type of queries getting executed with cursor sharing disabled

Answer: B,D

Explanation:

Note:

* The direct path read Oracle metric occurs during Direct Path operations when the data is asynchronously read from the database files into the PGA instead of into the SGA data buffer. Direct reads occur under these conditions:

- When reading from the TEMP tablespace (a sort operation)

- When reading a parallel full-table scan (parallel query factotum (slave) processes)

- Reading a LOB segment

* The optimizer uses a full table scan in any of the following cases:

- Lack of Index

- Large Amount of Data

- Small Table

- High Degree of Parallelism

QUESTION NO: 24

Examine the Exhibit.

Which two options are true about the execution plan and the set of statements?

A. The query uses a partial partition-wise join.

B. The degree of parallelism is limited to the number of partitions in the EMP_RANGE_DID table.

C. The DEPT table is dynamically distributed based on the partition keys of the EMP_RANGE_DID table.

D. The server process serially scans the entire DEPT table for each range partition on the EMP_RANGE_DID table.

E. The query uses a full partition-wise join.

Answer: A,D

Explanation:

QUESTION NO: 25

What are three common reasons for SQL statements to perform poorly?

A. Full table scans for queries with highly selective filters

B. Stale or missing optimizer statistics

C. Histograms not existing on columns with evenly distributed data

D. High index clustering factor

E. OPTIMIZER_MODE parameter set to ALL_ROWS for DSS workload

Answer: A,B,D

Explanation:

D: The clustering_factor measures how synchronized an index is with the data in a table. A table with a high clustering factor is out-of-sequence with the rows, and large index range scans will consume lots of I/O. Conversely, an index with a low clustering_factor is closely aligned with the table, and related rows reside together in each data block, making indexes very desirable for optimal access.

Note:

* (Not C) Histograms are a CBO feature that helps the optimizer determine how data is skewed (distributed) within a column. A histogram is worth creating on a column that is included in the WHERE clause and whose data is highly skewed. Histograms help the optimizer decide whether to use an index or a full-table scan, and help it determine the fastest table join order.

* OPTIMIZER_MODE establishes the default behavior for choosing an optimization approach for the instance.

all_rows

The optimizer uses a cost-based approach for all SQL statements in the session and optimizes with a goal of best throughput (minimum resource use to complete the entire statement).

QUESTION NO: 26

Examine the utilization parameters for an instance:

You notice that despite having an index on the column used in the where clause, queries use full table scans with highly selective filters.

What are two possible reasons for the optimizer to use full table scans instead of index unique scans and index range scans?

A. The OPTIMIZER_MODE parameter is set to ALL_ROWS.

B. The clustering factor for the indexes is high.

C. The number of leaf blocks for the indexes is high.

D. The OPTIMIZER_INDEX_COST_ADJ initialization parameter is set to 100.

E. The blocks fetched by the query are greater than the value specified by the DB_FILE_MULTIBLOCK_READ_COUNT parameter.

Answer: A,B

Explanation:

http://www.oracle.com/technetwork/issue-archive/2012/12-sep/o52asktom-1735913.html

* OPTIMIZER_MODE establishes the default behavior for choosing an optimization approach for the instance.

Values:

first_rows_n

The optimizer uses a cost-based approach and optimizes with a goal of best response time to return the first n rows (where n = 1, 10, 100, 1000).

first_rows

The optimizer uses a mix of costs and heuristics to find a best plan for fast delivery of the first few rows.

all_rows

The optimizer uses a cost-based approach for all SQL statements in the session and optimizes with a goal of best throughput (minimum resource use to complete the entire statement).

QUESTION NO: 27

Tracing has been enabled for the HR user. You execute the following command to check the contents of the orcl_25052.trc trace file, which was generated during tracing:

Which two statements are correct about the execution of the command?

A. SCRIPT.SQL stores the statistics for all traced SQL statements.

B. Execution plans for SQL statements are stored in TEMP_PLAN_TABLE and can be queried by the user.

C. SQL statements in the output files are stored in the order of elapsed time.

D. TKPROF uses TEMP_PLAN_TABLE in the HR schema as a temporary plan table.

E. Recursive SQL statements are included in the output file.

Answer: A,D

Explanation:

INSERT

Creates a SQL script that stores the trace file statistics in the database. TKPROF creates this script with the name filename3. This script creates a table and inserts a row of statistics for each traced SQL statement into the table.
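For example, a TKPROF invocation of the following form produces both the formatted report and the INSERT script described above; the trace file, script, and plan table names follow the question, the rest are illustrative defaults:

tkprof orcl_25052.trc orcl_25052.prf explain=hr/hr table=hr.temp_plan_table insert=script.sql sys=no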

QUESTION NO: 28

You are administering a database that supports an OLTP application. To set statistics preferences, you issued the following command:

SQL> DBMS_STATS.SET_GLOBAL_PREFS ('ESTIMATE_PERCENT', '9');

What will be the effect of executing this procedure?

A. It will influence the gathering of statistics for a table based on the value specified for ESTIMATE_PERCENT, provided no table preferences for the same table exist.

B. It will influence dynamic sampling for a query to estimate the statistics based on ESTIMATE_PERCENT.

C. The automatic statistics gathering job running in the maintenance window will use global preferences unless table preferences for the same table exist.

D. New objects created will use global preference even if table preferences are specified.

Answer: D

Explanation:

https://blogs.oracle.com/optimizer/entry/understanding_dbms_statsset__prefs_procedures
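The preference hierarchy can be checked and overridden per table; a brief sketch, using HR.EMPLOYEES purely as an example:

-- Shows what will actually be used for HR.EMPLOYEES: a table preference if one exists,
-- otherwise the global preference set above, otherwise the default
SELECT DBMS_STATS.GET_PREFS('ESTIMATE_PERCENT', 'HR', 'EMPLOYEES') FROM dual;

-- A table preference, when present, takes precedence over the global preference
EXEC DBMS_STATS.SET_TABLE_PREFS('HR', 'EMPLOYEES', 'ESTIMATE_PERCENT', '100');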

QUESTION NO: 29

Examine the parallelism parameter for your instance:

parallel_servers_target

Now examine the resource plan containing parallel statement directives:

Which two are true about parallel statement queuing when this plan is active?

A. URGENT_GROUP sessions collectively can consume up to 64 parallel execution servers before queuing starts for this consumer group.

B. ETL_GROUP sessions can collectively consume up to 64 parallel execution servers before queuing starts for this consumer group.

C. A single OTHER_GROUPS session will execute serially once it is queued for six minutes.

D. A single ETL_GROUP session can consume up to eight parallel execution servers.

E. A single ETL_GROUP session can consume up to 32 parallel execution servers.

F. A single OTHER_GROUPS session will execute in parallel once it is queued for six minutes.

Answer: A,D

Explanation:

(http://docs.oracle.com/cd/E11882_01/server.112/e25494/dbrm.htm#ADMIN13466)

QUESTION NO: 30

View the Exhibit1 and examine the structure and indexes for the MYSALES table.

The application uses the MYSALES table to insert sales records, but this table is also extensively used for generating sales reports. The PROD_ID and CUST_ID columns are frequently used in the WHERE clause of the queries. These columns have few distinct values relative to the total number of rows in the table.

View Exhibit 2 and examine one of the queries and its autotrace output.

What should you do to improve the performance of the query?

A. Use the INDEX_COMBINE hint in the query.

B. Create a composite index involving the CUST_ID and PROD_ID columns.

C. Gather histogram statistics for the CUST_ID and PROD_ID columns.

D. Gather index statistics for the MYSALES_PRODID_IDX and MYSALES_CUSTID_IDX indexes.

Answer: D

Explanation:

Note:

* Statistics quantify the data distribution and storage characteristics of tables, columns, indexes, and partitions.

* INDEX_COMBINE

Forces a bitmap index access path on tab.

Primarily this hint just tells Oracle to use the bitmap indexes on table tab. Otherwise Oracle will choose the best combination of indexes it can think of based on the statistics. If it is ignoring a bitmap index that you think would be helpful, you may specify that index plus all of the others that you want to be used. Note that this does not force the use of those indexes; Oracle will still make cost-based choices.

* Histogram opportunities:

– Any column used in a where clause with skewed data

– Histograms are NOT just for indexed columns. Adding a histogram to an un-indexed column that is used in a where clause can improve performance.

QUESTION NO: 31

You are administering a database that supports an OLTP workload. Most of the queries use an index range scan or index unique scan as access methods.

Which three scenarios can prevent the index access being used by the queries?

A. When a highly selective filter is applied on an indexed column of a table with sparsely populated blocks.

B. When the rows are filtered with an IS NULL operator on the column with a unique key defined

C. When the histogram statistics are not collected for the columns used in the WHERE clause.

D. When a highly selective filter is applied on the indexed column and the index has very low value for clustering factor.

E. When the statistics for the table are not current.

Answer: A,B,E

Explanation:

A: Low clustering factor promotes good performance.

The clustering_factor measures how synchronized an index is with the data in a table. A table with a high clustering factor is out-of-sequence with the rows, and large index range scans will consume lots of I/O. Conversely, an index with a low clustering_factor is closely aligned with the table, and related rows reside together in each data block, making indexes very desirable for optimal access.

Note:

* Oracle SQL not using an index is a common complaint, and it’s often because the optimizer thinks that a full-scan is cheaper than index access. Oracle not using an index can be due to:

* (E) Bad/incomplete statistics – Make sure to re-analyze the table and index with dbms_stats to ensure that the optimizer has good metadata.

* Wrong optimizer_mode – The first_rows optimizer mode is to minimize response time, and it is more likely to use an index than the default all_rows mode.

* Bugs – See these important notes on optimizer changes in 10g that cause Oracle not to use an index.

* Cost adjustment – In some cases, the optimizer will still not use an index, and you must decrease optimizer_index_cost_adj.

QUESTION NO: 32

You are logged in as the HR user and you execute the following procedure:

SQL> exec DBMS_STATS.SET_TABLE_PREFS ('HR', 'EMPLOYEES', 'PUBLISH', 'FALSE');

SQL> exec DBMS_STATS.GATHER_TABLE_STATS ('HR', 'EMPLOYEES');

Which statement is true about the newly gathered statistics?

A. They are temporary and purged when the session exits.

B. They are used by the optimizer for all sessions.

C. They are locked and cannot be overwritten.

D. They are marked as pending and stored in the pending statistics table.

Answer: D

Explanation:

In previous database versions, new optimizer statistics were automatically published when they were gathered. In 11g this is still the default action, but you now have the option of keeping the newly gathered statistics in a pending state until you choose to publish them.

The DBMS_STATS.GET_PREFS function allows you to check the 'PUBLISH' attribute to see if statistics are automatically published. The default value of TRUE means they are automatically published, while FALSE indicates they are held in a pending state.
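A brief sketch of the pending-statistics workflow that follows from this, using the HR.EMPLOYEES table from the question:

-- The newly gathered statistics sit in the pending area because PUBLISH is FALSE for this table
SELECT table_name, num_rows FROM dba_tab_pending_stats WHERE owner = 'HR';

-- A session can test the pending statistics before they are visible to other sessions
ALTER SESSION SET optimizer_use_pending_statistics = TRUE;

-- Once validated, publish them so the optimizer uses them in all sessions
EXEC DBMS_STATS.PUBLISH_PENDING_STATS('HR', 'EMPLOYEES');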

References:

QUESTION NO: 33

You enable auto degree of parallelism (DOP) for your database instance.

Examine the following query:

Which two are true about the execution of statement?

A. Dictionary DOP for the objects accessed by the query is used to determine the statement DOP.

B. Auto DOP is used to determine the statement DOP only if the estimated serial execution time exceeds PARALLEL_MIN_TIME_THRESHOLD.

C. Dictionary DOP is used to determine the statement DOP only if the estimated serial execution time exceeds PARALLEL_MIN_TIME_THRESHOLD.

D. The statement will be queued if insufficient parallel execution slaves are available to satisfy the statement's DOP.

E. The statement will be queued if the number of busy parallel execution servers exceeds PARALLEL_SERVERS_TARGET.

F. The statements may execute serially.

Answer: E,F

Explanation:

* Parallel (Manual): The optimizer is forced to use the parallel settings of the objects in the statement.

* MANUAL - This is the default. Disables Auto DOP (not B), statement queuing (not D, Not E) and in-memory parallel execution. It reverts the behavior of parallel execution to what it was previous to Oracle Database 11g, Release 2 (11.2).

* PARALLEL (MANUAL)

You can use the PARALLEL hint to force parallelism. It takes an optional parameter: the DOP at which the statement should run.

The following example forces the statement to use Oracle Database 11g Release 1 (11.1) behavior:

SELECT /*+ parallel(manual) */ ename, dname FROM emp e, dept d

WHERE e.deptno=d.deptno;

* PARALLEL_SERVERS_TARGET specifies the number of parallel server processes allowed to run parallel statements before statement queuing will be used. When the parameter PARALLEL_DEGREE_POLICY is set to AUTO, Oracle will queue SQL statements that require parallel execution, if the necessary parallel server processes are not available. Statement queuing will begin once the number of parallel server processes active on the system is equal to or greater than PARALLEL_SERVERS_TARGET.

By default, PARALLEL_SERVERS_TARGET is set lower than the maximum number of parallel server processes allowed on the system (PARALLEL_MAX_SERVERS) to ensure each parallel statement will get all of the parallel server resources required and to prevent overloading the system with parallel server processes.

Note that all serial (non-parallel) statements will execute immediately even if statement queuing has been activated.

QUESTION NO: 34

Examine the exhibit.

Which is true based on the information obtainable from the execution plan?

A. A full partition-wise join is performed between the EMPLOYEES and DEPARTMENTS tables.

B. A full table scan on the DEPARTMENTS table is performed serially by the query coordinator.

C. A full table scan on the DEPARTMENTS table is performed serially by a single parallel execution server process.

D. A partial partition-wise join is performed between the EMPLOYEES and DEPARTMENTS tables.

E. A full table scan on the EMPLOYEES table is done in parallel.

Answer: E

Explanation:

PX BLOCK ITERATOR: This operation is typically the first step in a parallel pipeline. The BLOCK ITERATOR breaks up the table into chunks that are processed by each of the parallel servers involved.

Incorrect:

B, C: The scan on the DEPARTMENTS table is done in parallel.

Note:

* As per exhibit: Line 7 is run first, followed by line 6.

*

Example with same structure of execution plan:

Here's how to read the plan:

1. The first thing done is at line 9 – an index fast full scan on the SYS.OBJ$.I_OBJ1 index. This is done in parallel, as indicated by the "PX SEND" line above.

2. In line 8, we're doing a "PX SEND BROADCAST" operation. When joining tables in parallel, Oracle can choose to either broadcast results (rows) from one operation to apply to the other table scan, or it can choose PX SEND HASH. In this case, our CBO determined that a BROADCAST was appropriate because the results from the OBJ$ table were much lower than the MYOBJ table.

3. Line 7, the PX RECEIVE step, is basically the consumer of the broadcasted rows in step 8.

4. Line 6 is an in-memory BUFFER SORT of the rows returned from the index scan on OBJ$.

5. Lines 11 and 10, respectively, indicate the full scan and PX BLOCK ITERATOR operation for the granules involved in the 8 PQ servers.

6. In line 5, Oracle is doing a hash join on the resulting rows from the parallel scans on MYOBJ and OBJ$.

7. Line 4 is a per-PQ server sort of data from the joined PQ servers.

8. Line 3 is the consumer QC that holds the result of each of the PQ servers.

9. Line 2 is the PX Coordinator (QC) collecting, or consuming, the rows of the joined data.

10. Line 1 is the final SORT AGGREGATE line that performs the grouping function.

QUESTION NO: 35

Examine the exhibit to view the query and its execution plan.

What two statements are true?

A. The HASH GROUP BY operation is the consumer of the HASH operation.

B. The HASH operation is the consumer of the HASH GROUP BY operation.

C. The HASH GROUP BY operation is the consumer of the TABLE ACCESS FULL operation for the CUSTOMER table.

D. The HASH GROUP BY operation is the consumer of the TABLE ACCESS FULL operation for the SALES table.

E. The SALES table scan is a producer for the HASH JOIN operation.

Answer: A,E

Explanation:

A, not C, not D: Line 3, HASH GROUP BY, consumes line 6 (HASH JOIN BUFFERED).

E: Line 14, TABLE ACCESS FULL (Sales), is one of the two producers for line 6 (HASH JOIN).

QUESTION NO: 36

A database supports three applications: CRM, ERP, and ACC. These applications connect to the database by using three different services: CRM_SRV for the CRM application, ERP_SRV for the ERP application, and ACC_SRV for the ACC application.

You enable tracing for the ACC_SRV service by issuing the following command:

SQL> EXECUTE DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE (service_name => 'ACC_SRV', waits => TRUE, binds => FALSE, instance_name => 'inst1');

Which statement is true?

A. All trace information for the service connection to inst1 will be stored in a single trace file.

B. A trace file is not created because the module name is not specified.

C. A single trace file is created for each session that uses the ACC_SRV service.

D. Only those SQL statements that are identified with the ACC_SRV service executed on the inst1 instance are recorded in trace files.

E. All trace information for the ACC_SRV service connected to inst1 is stored in multiple trace files, which can be consolidated by using the tkprof utility.

Answer: C

Explanation:

SERV_MOD_ACT_TRACE_ENABLE

serv_mod_act_trace_enable and serv_mod_act_trace_disable, which enables and disables trace for given service_name, module and action.

For example, for a given service name you can trace all sessions started from SQL*Plus.

Module and action in your own created application can be set using dbms_application_info set_module and set_action procedures.

serv_mod_act_trace_enable fills sys table wri$_tracing_enabled and view dba_enabled_traces on top of this table as follows:

SQL> exec dbms_monitor.serv_mod_act_trace_enable(service_name=>'orcl', module_name=>'SQL*Plus')

PL/SQL procedure successfully completed.

SQL> select * from sys.wri$_tracing_enabled;

TRACE_TYPE PRIMARY_ID QUALIFIER_ID1 QUALIFIER_ID2 INSTANCE_NAME FLAGS

---------- ---------- ------------- ------------- ------------- -----

         4 orcl          SQL*Plus                                          8

SQL> select * from dba_enabled_traces;

TRACE_TYPE     PRIMARY_ID QUALIFIER_ID1 QUALIFIER_ID2 WAITS BINDS INSTANCE_NAME

-------------- ---------- ------------- ------------- ----- ----- -------------

SERVICE_MODULE orcl       SQL*Plus                    TRUE  FALSE

QUESTION NO: 37

Examine the following anonymous PL/SQL block of code:

Which two are true concerning the use of this code?

A. The user executing the anonymous PL/SQL code must have the CREATE JOB system privilege.

B. ALTER SESSION ENABLE PARALLEL DML must be executed in the session prior to executing the anonymous PL/SQL code.

C. All chunks are committed together once all tasks updating all chunks are finished.

D. The user executing the anonymous PL/SQL code requires execute privilege on the DBMS_JOB package.

E. The user executing the anonymous PL/SQL code requires privilege on the DBMS_SCHEDULER package.

F. Each chunk will be committed independently as soon as the task updating that chunk is finished.

Answer: A,E

Explanation:

A (not D):

To use DBMS_PARALLEL_EXECUTE to run tasks in parallel, your schema will need the CREATE JOB system privilege.

E (not C): DBMS_PARALLEL_EXECUTE now provides the ability to break up a large table according to a variety of criteria, from ROWID ranges to key values and user-defined methods. You can then run a SQL statement or a PL/SQL block against these different “chunks” of the table in parallel, using the database scheduler to manage the processes running in the background. Error logging, automatic retries, and commits are integrated into the processing of these chunks.

Note:

* The DBMS_PARALLEL_EXECUTE package allows a workload associated with a base table to be broken down into smaller chunks which can be run in parallel. This process involves several distinct stages.

1.Create a task

2.Split the workload into chunks

CREATE_CHUNKS_BY_ROWID

CREATE_CHUNKS_BY_NUMBER_COL

CREATE_CHUNKS_BY_SQL

3.Run the task

RUN_TASK

User-defined framework

Task control

4.Check the task status

5.Drop the task

* The workload is associated with a base table, which can be split into subsets or chunks of rows. There are three methods of splitting the workload into chunks.

CREATE_CHUNKS_BY_ROWID

CREATE_CHUNKS_BY_NUMBER_COL

CREATE_CHUNKS_BY_SQL

The chunks associated with a task can be dropped using the DROP_CHUNKS procedure.

* CREATE_CHUNKS_BY_ROWID

The CREATE_CHUNKS_BY_ROWID procedure splits the data by rowid into chunks specified by the CHUNK_SIZE parameter. If the BY_ROW parameter is set to TRUE, the CHUNK_SIZE refers to the number of rows, otherwise it refers to the number of blocks.
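A minimal sketch of the stages listed above; the task name, table, update statement, and parallel level are assumptions for illustration:

BEGIN
  -- 1. Create a task
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'bulk_update');

  -- 2. Split the workload into rowid-range chunks of roughly 10000 rows each
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => 'bulk_update',
    table_owner => USER,
    table_name  => 'MYSALES',
    by_row      => TRUE,
    chunk_size  => 10000);

  -- 3. Run the task: scheduler jobs process the chunks, binding each chunk's rowid range
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'bulk_update',
    sql_stmt       => 'UPDATE mysales SET amount = amount * 1.1 WHERE rowid BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);
END;
/

-- 4. Check the task status, then 5. drop the task
SELECT status FROM user_parallel_execute_tasks WHERE task_name = 'bulk_update';
EXEC DBMS_PARALLEL_EXECUTE.DROP_TASK('bulk_update');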

References:

QUESTION NO: 38

Examine the following query and execution plan:

Which query transformation technique is used in this scenario?

A. Join predicate push-down

B. Subquery factoring

C. Subquery unnesting

D. Join conversion

Answer: A

Explanation:

* Normally, a view cannot be joined with an index-based nested loop (i.e., index access) join, since a view, in contrast with a base table, does not have an index defined on it. A view can only be joined with other tables using three methods: hash, nested loop, and sort-merge joins.

* The following shows the types of views on which join predicate pushdown is currently supported.

UNION ALL/UNION view

Outer-joined view

Anti-joined view

Semi-joined view

DISTINCT view

GROUP-BY view

QUESTION NO: 39

You enabled auto degree of parallelism (DOP) for your instance.

Examine the query:

Which two are true about the execution of this query?

A. Dictionary DOP will be used, if present, on the tables referred in the query.

B. DOP is calculated if the calculated DOP is 1.

C. DOP is calculated automatically.

D. Calculated DOP will always be 2 or more.

E. The statement will execute with auto DOP only when PARALLEL_DEGREE_POLICY is set to AUTO.

Answer: A,C

Explanation:

* PARALLEL (AUTO): The database computes the degree of parallelism (C), which can be 1 or greater (not D). If the computed degree of parallelism is 1, then the statement runs serially.

* You can use the PARALLEL hint to force parallelism. It takes an optional parameter: the DOP at which the statement should run. In addition, the NO_PARALLEL hint overrides a PARALLEL parameter in the DDL that created or altered the table.

The following example illustrates computing the DOP the statement should use:

SELECT /*+ parallel(auto) */ ename, dname FROM emp e, dept d

WHERE e.deptno=d.deptno;

* When the parameter PARALLEL_DEGREE_POLICY is set to AUTO, Oracle Database automatically decides if a statement should execute in parallel or not and what DOP it should use. Oracle Database also determines if the statement can be executed immediately or if it is queued until more system resources are available. Finally, Oracle Database decides if the statement can take advantage of the aggregated cluster memory or not.

QUESTION NO: 40

View Exhibit1 and examine the structure and indexes for the MYSALES table.

The application uses the MYSALES table to insert sales records, but this table is also extensively used for generating sales reports. The PROD_ID and CUST_ID columns are frequently used in the WHERE clause of the queries. These columns have few distinct values relative to the total number of rows in the table. The MYSALES table has 4.5 million rows.

View exhibit 2 and examine one of the queries and its autotrace output.

Which two methods can improve the performance of the query?

A. Drop the current standard balanced B* Tree indexes on the CUST_ID and PROD_ID columns and re-create as bitmapped indexes.

B. Use the INDEX_COMBINE hint in the query.

C. Create a composite index involving the CUST_ID and PROD_ID columns.

D. Rebuild the index to rearrange the index blocks to have more rows per block by decreasing the value for the PCTFREE attribute.

E. Collect histogram statistics for the CUST_ID and PROD_ID columns.

Answer: B,C

Explanation:

B: The INDEX hint explicitly chooses an index scan for the specified table. You can use the INDEX hint for domain, B-tree, bitmap, and bitmap join indexes. However, Oracle recommends using INDEX_COMBINE rather than INDEX for the combination of multiple indexes, because it is a more versatile hint.

C: Combining the CUST_ID and PROD_ID columns into a composite index would improve performance.
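A brief sketch of the two approaches; the index names and literal values are illustrative:

-- Option C: one composite index covering both filter columns
CREATE INDEX mysales_cust_prod_idx ON mysales (cust_id, prod_id);

-- Option B: ask the optimizer to combine the existing single-column indexes
SELECT /*+ INDEX_COMBINE(s mysales_custid_idx mysales_prodid_idx) */ *
FROM   mysales s
WHERE  s.cust_id = 100
AND    s.prod_id = 42;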
