Monday, January 31, 2011

Useful SQL Hints to improve query performance.


ORACLE SQL optimization


Step 1: Find the statements that consume the most resources. You can use the view V$SORT_USAGE to see the session, and the SQL statement (while it is still in the SQL area), associated with each temporary segment.

The statements with the most potential to improve performance if tuned include:
1) The queries that consume the most resources overall.
2) The queries that consume the most resources per row.
3) The queries that are executed most frequently.
In V$SQLAREA you can find the statements which are still in the cache and have done a great deal of I/O and buffer gets.
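For example, a query along these lines pulls the heaviest cached statements out of V$SQLAREA (the thresholds are only illustrative):

SELECT sql_text, executions, disk_reads, buffer_gets
FROM v$sqlarea
WHERE disk_reads > 10000
OR buffer_gets > 100000
ORDER BY disk_reads DESC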

Step 2: Tune those statements to use fewer resources
1) Get the statement to use fewer resources.
2) Use the statement less often.

Approaches to tuning SQL statements

1) Restructuring indexes - An index plays an important part in the overall database design; good index design is just as important as good table design. One of the main reasons for using indexes is so that the database can access the correct data quickly. However, too many indexes, or useless indexes, can be a burden on the system: they require maintenance on every DML operation, and the optimizer may choose the wrong index if there are too many to choose from, especially in a complex SQL SELECT statement.
a) Remove non selective indexes to speed up DML
b) Index performance critical paths
c) Consider hash clusters, but watch out for uniqueness; hashing unique data can produce very large hash indexes.
d) Consider index clusters only if the cluster keys are similar in size.

2) Restructure the statement
a) Consider alternative SQL syntax
e.g. SELECT dname, deptno
FROM dept
WHERE deptno NOT IN (SELECT deptno FROM emp)
A faster version of this statement would be
SELECT dname, deptno
FROM dept
WHERE NOT EXISTS
( SELECT deptno FROM emp WHERE dept.deptno = emp.deptno)

b) Compose predicates using AND and =. Use equijoins wherever possible; statements that perform equijoins on untransformed columns are the fastest and the easiest to tune.

c) Choose an advantageous join order. Join order can have a significant effect on performance. The main objective of SQL tuning is to avoid performing unnecessary work, which slows down the statement and can also confuse the optimizer into selecting the wrong optimization method.
i) Avoid using a full table scan if it is more efficient to get the data through an index.
ii) Avoid using indexes which select too many rows (more than 2-4% of the rows in the table).
iii) Choose a join order so as to join fewer rows.
For example:
SELECT info
FROM taba a, tabb b, tabc c
WHERE
acol BETWEEN :alow AND :ahigh
AND bcol BETWEEN :blow AND :bhigh
AND ccol BETWEEN :clow AND :chigh
AND a.key1 = b.key1
AND a.key2 = c.key2

i) Choose a driving table and a driving index. The first three conditions in the WHERE clause are filter conditions and the last two are join conditions. Filter conditions dominate the choice of the driving table and the driving index. In general the driving table should be the one containing the filter condition that eliminates the most rows.
ii) Choose the right index. Once you have established the driving table, choose the most selective index available to drive into it. The alternative is a full table scan, if that is more efficient; sometimes, when there are no effective indexes, a full table scan is better because full scans use sequential rather than random access. From there, the joins should all happen through the join indexes, that is, the indexes on the primary or foreign keys used to connect a table to an earlier table in the join tree. Rarely should you use indexes on non-join conditions, except for driving the first table. In the above example you should use the indexes on b.key1 and c.key2 to drive into tabb and tabc respectively.
iii) Choose the best join order, driving to the best unused filters earliest. The work of following a join can be reduced by first joining to the table with the best still-unused filter. Thus, if 'bcol BETWEEN ...' is more restrictive than 'ccol BETWEEN ...', the last join can be made easier if tabb is joined before tabc.
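As a minimal sketch (hints are covered in more detail later in this document), the ORDERED hint can be used to fix that join order for the example above, so that tabb drives the query:

SELECT /*+ ORDERED */ info
FROM tabb b, taba a, tabc c
WHERE b.bcol BETWEEN :blow AND :bhigh
AND a.acol BETWEEN :alow AND :ahigh
AND c.ccol BETWEEN :clow AND :chigh
AND a.key1 = b.key1
AND a.key2 = c.key2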

d) Use untransformed column values. Avoid using functions in the WHERE clause of the statement unless really necessary. Functions such as TO_DATE, TO_CHAR and TO_NUMBER are useful, but applying them to columns in the WHERE clause slows down execution and makes the optimizer skip otherwise useful indexes on the transformed columns.
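For example, assuming an index exists on emp.hiredate (an illustrative table and column), the first form hides the index behind a function while the second leaves the column untransformed and can use the index:

SELECT empno FROM emp
WHERE TO_CHAR(hiredate, 'YYYY-MM-DD') = '2011-01-31'    -- index on hiredate is ignored

SELECT empno FROM emp
WHERE hiredate >= TO_DATE('2011-01-31', 'YYYY-MM-DD')
AND hiredate < TO_DATE('2011-02-01', 'YYYY-MM-DD')       -- index on hiredate can be used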

e) Avoid mixed expressions and implicit conversions, e.g. char_val = num_val. In this case the SQL interpreter performs an implicit conversion, effectively TO_NUMBER(char_val) = num_val. This may cause two problems: firstly, the optimizer may ignore a good index on char_val, and secondly, character-to-number conversion errors may occur and cause the statement to fail. To avoid this, use an explicit conversion, e.g. char_val = TO_CHAR(num_val).
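A minimal sketch, assuming char_col is an indexed VARCHAR2 column in an illustrative table tab and :num_val is a numeric bind variable:

SELECT info FROM tab WHERE char_col = :num_val            -- implicit TO_NUMBER(char_col); index ignored, may raise conversion errors
SELECT info FROM tab WHERE char_col = TO_CHAR(:num_val)   -- explicit conversion on the bind; index on char_col can be used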

f) Write separate SQL statements for specific values. SQL is not a procedural language, and using it to do many things in one statement is not a good idea. Optimization (determining the execution plan) takes place before the database knows what values will be used as parameters. An execution plan should therefore not depend on what those values are.
For example:
SELECT info FROM tables WHERE ....
AND acol BETWEEN DECODE(:loval,'ALL',acol,:loval)
AND DECODE(:hival,'ALL',acol,:hival)
Here the optimizer cannot use the index on acol because the column acol appears on both sides of the BETWEEN expression. In these cases you have to change the structure of the SQL statement.
SELECT info FROM tables WHERE ...
AND acol BETWEEN :loval AND :hival
AND (:loval !='ALL' AND :hival !='ALL')
UNION ALL
SELECT info FROM tables WHERE ...
AND (:loval = 'ALL' OR :hival = 'ALL')

When you run EXPLAIN PLAN on the new SQL statement you will see both the desired and the undesired paths in the same plan: one branch uses an index and the other uses a full table scan. When neither parameter is 'ALL' you want the index; in the other case a full table scan is faster. Although both branches appear in the execution plan, one branch of the UNION ALL is eliminated at execution time, because :hival and :loval are either 'ALL' or not 'ALL', so the undesired branch does no work. Note also the use of UNION ALL rather than UNION: the purpose is to avoid the costly SORT (to remove duplicates) associated with UNION but not with UNION ALL.

g) Use control hints. Although the optimizer does a good job of finding the best execution plan, it is sometimes not aware of the overall picture and requires a little hint from the developer. Regardless of why the optimizer chooses the wrong execution plan, hints used correctly can produce better results, especially to force the optimizer down a certain path, such as a full table scan rather than an index.
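For example, a full table scan can be forced on a single table with the FULL hint (the table and alias here are only illustrative):

SELECT /*+ FULL(e) */ ename, sal
FROM emp e
WHERE deptno = 10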

h) Use care when using IN and NOT IN in a subquery. Remember that WHERE (NOT) EXISTS can be a useful alternative.
i) Use care when embedding value lists. Examine carefully the contents of an embedded value list and try to find an alternative if possible.
For example,
SELECT info FROM table
WHERE transport IN ('BMW','HONDA','FORD')
It may be possible instead to use:
SELECT info FROM table
WHERE transport_type = 'CAR'

j) Minimize the use of DISTINCT or GROUP BY unless necessary as they will always create a sort.

k) Reduce the number of calls to the database. Instead of calling an INSERT, UPDATE or DELETE and then a SELECT to get the new record back from the database, use INSERT ... RETURNING, UPDATE ... RETURNING or DELETE ... RETURNING to perform both tasks in one call.
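A minimal sketch of the RETURNING clause, using illustrative table, column and bind names:

UPDATE emp
SET sal = sal * 1.1
WHERE empno = :empno
RETURNING sal INTO :new_sal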

l) Avoid using views in joins unless really necessary. Joins take time, and processing the view also takes time, so avoid putting them together, especially in outer joins.

3) Modify or disable triggers. In most cases using triggers can make life a lot easier for the DBA and developer and help to maintain data integrity. However triggers consume resources, and too many or inappropriate triggers can affect performance.

Approaches to creating indexes

Indexes play an important part in database design, in the efficiency of the optimizer, and in reducing the workload on the main tables (especially if most of the required information can be found in the index). However, indexes can also be a burden on the system, requiring extra processing for each DML statement issued against an indexed table. Therefore, good index design is essential.
Oracle provides a variety of index types, each with different properties and different uses. The following are some guidelines for creating useful indexes.

1) When creating an index, the target is that the index should be used to query less than 2% - 4% of the table's total data.
2) Building indexes just in case is not a bright idea.
3) Avoid building too many indexes. Index processing consumes extra resources on every INSERT, UPDATE and DELETE.
4) Do not index columns which are frequently modified. For example, order_id, order_number and customer_id may be good index candidates, but order_status and last_modification_date are not, even if they are used in one or two forms and reports.
5) ALTER INDEX ... REBUILD is faster than dropping and recreating the index.
6) Compact indexes. ALTER INDEX ... COALESCE can be used to compact index leaf blocks online.
7) Creating an index based upon commonly used functions can be an aid, rather than a burden, to the system if used properly. For example:
CREATE INDEX idx ON table (a+b*(c-1),a,b) can aid
SELECT info FROM table WHERE a+b*(c-1) BETWEEN 25 AND 50
Or
CREATE INDEX ind ON table(UPPER(empname))
8) Create indexes for columns used in common GROUP BY and ORDER BY clauses.
9) For large indexes which would cover more than 5% of the table's data, but which are still necessary, consider a BITMAP INDEX (a minimal example follows this list). Bitmap indexes use less space when the indexed column has few distinct values and can be faster for this kind of data; however, bitmap indexes cannot be declared unique. Another advantage of bitmap indexes is that they can index NULL values, which normal B-tree indexes do not.
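For example, a low-cardinality status column could be indexed like this (the table and column names are illustrative):

CREATE BITMAP INDEX orders_status_bix ON orders (order_status)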

Approaches to using clusters

Cluster
For tables which are joined regularly, such as master-detail tables, consider a CLUSTER (a minimal sketch follows this list).
Don't use clusters for infrequent joins or on columns which are updated frequently.
Don't use a cluster on tables which are regularly full-scanned; clustering actually slows down a full scan of an individual table, because its rows are stored interleaved with rows from the other clustered tables.
Consider a cluster for master-detail tables where the master record is accessed first and then its detail records. Clustering places the detail rows physically next to their master row, so access is faster.
Consider clustering the detail table alone if you select many details from the same master record.
Don't consider clustering if the data from all the tables sharing the same cluster key value uses more than two blocks.
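A minimal sketch of an index cluster for a master-detail pair, using illustrative dept and emp tables:

CREATE CLUSTER dept_emp_cluster (deptno NUMBER(2)) SIZE 600;
CREATE INDEX dept_emp_cluster_idx ON CLUSTER dept_emp_cluster;
CREATE TABLE dept (
  deptno NUMBER(2) PRIMARY KEY,
  dname  VARCHAR2(14)
) CLUSTER dept_emp_cluster (deptno);
CREATE TABLE emp (
  empno  NUMBER(4) PRIMARY KEY,
  ename  VARCHAR2(10),
  deptno NUMBER(2) REFERENCES dept
) CLUSTER dept_emp_cluster (deptno);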

Hash Clusters
Hash clusters are similar to index clusters, except that a hash function is applied to each row's cluster key value.
Consider a hash cluster if the WHERE clauses of your SQL statements regularly use equality conditions on the same column or combination of columns (a minimal sketch follows this list).
Consider a hash cluster if you can determine in advance the space required to hold all the cluster key values. Note: this means hash clusters are useful for lookup tables but not for tables which are updated or grow regularly.
Do not use one when space is scarce; hash clusters can consume a lot of space.
As mentioned above, do not use one on a constantly growing table.
Don't use a hash cluster on tables which are regularly full-scanned; clustering actually slows down a full scan of an individual table.
As mentioned above, do not use one on a regularly modified table.
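A minimal sketch of a hash cluster keyed on an illustrative order_id column; HASHKEYS pre-allocates space for the expected number of distinct key values:

CREATE CLUSTER order_cluster (order_id NUMBER) SIZE 512 HASHKEYS 10000;
CREATE TABLE orders (
  order_id   NUMBER PRIMARY KEY,
  order_date DATE
) CLUSTER order_cluster (order_id);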

Approaches to using hints

In the sections above it was mentioned that hints can aid or force the optimizer to use a certain execution plan. This section describes in more detail the different types of hints that can be used and how they affect the execution plan. Oracle has four key optimization modes (a usage sketch follows this list); they are as follows:
ALL_ROWS - This is used to get the best total performance for the statement. Best used for reports where all the rows must be obtained.
FIRST_ROWS - This is used to get the best response time. Best used for forms and queries where faster response time is required. This mode cannot be used with DML statements, UNIONs, INTERSECTs, MINUS, GROUP BY, FOR UPDATE, DISTINCT.
RULE - This is based on rule based optimizations where predefined optimization rules are used to determine the execution plan.
CHOOSE - Let the optimizer choose the method from the above three rules.
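These modes can also be applied to individual statements as hints, for example (the table and columns are illustrative):

SELECT /*+ FIRST_ROWS */ empno, ename
FROM emp
WHERE deptno = 10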

Hints for access method
Hints for optimization mode (above) control the overall execution plan; however, Oracle also provides hints for control over individual tables and indexes. These are grouped as access method hints.
FULL(table) - Use full scan on the table specified.
ROWID(table) - Use scan by rowid on the table specified.
CLUSTER(table) - Use a cluster scan on the table specified.
HASH(table) - Use a hash scan on the table specified.
HASH_AJ - Forces the optimizer to transform a NOT IN subquery into a hash anti-join.
HASH_SJ - Forces the optimizer to transform an EXISTS subquery into a hash semi-join.
INDEX(table index) - Consider the specified index first.
INDEX_ASC(table index) - Explicitly specifies an ascending range scan of the index.
INDEX_DESC(table index) - Explicitly specifies a descending range scan of the index.
INDEX_COMBINE(table index) - Chooses a bitmap access path for the table using the index specified.
INDEX_JOIN(table index) - Explicitly uses an index join as the access path.
INDEX_FFS(table index) - Uses a fast full index scan for the table and index specified.
NO_INDEX(table index) - Tells the optimizer to ignore the specified indexes.
MERGE_AJ - Transforms a NOT IN subquery into a merge anti-join.
MERGE_SJ - Transforms an EXISTS subquery into a merge semi-join.
AND_EQUAL(table index index) - Merges scans on several single-column indexes.
USE_CONCAT - Forces OR conditions in the WHERE clause to be rewritten as a compound UNION ALL set.
NO_EXPAND - Prevents an IN list from being expanded into OR conditions.
REWRITE(view) - Rewrites the statement to use the specified materialized view instead.
NO_REWRITE - Prevents the statement from being rewritten against a materialized view.

Hints on join order

ORDERED - causes the optimizer to join the tables in the query in the order they appear in the FROM clause.
STAR - Forces the star query plan, which places the largest table last in the join order and joins it with a nested loop. The STAR hint applies when there are at least three tables.

Hints on join operators

USE_NL(table) - Forces a nested loops join, using the specified table as the inner table.
USE_MERGE(table) - Forces a SORT MERGE join for the specified table.
USE_HASH(table) - Forces a hash join for the specified table.
DRIVING_SITE(table) - Forces query execution to be done at a different site (that of the specified table).
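For example, the join order and join operation hints can be combined; the tables, aliases and index name below are illustrative:

SELECT /*+ ORDERED USE_NL(e) INDEX(e emp_deptno_idx) */ d.dname, e.ename
FROM dept d, emp e
WHERE d.deptno = e.deptno
AND d.deptno = 10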

Examples

The times were taken from an IBM S8a server with 6 CPUs, running Oracle 8i and Oracle Applications 11i.
The important thing is not the absolute time, but the time difference from the original statement.

Use of descending indexes (INDEX_DESC)
Original statement
SELECT * FROM ar_payment_schedules_all apsa
WHERE apsa.trx_date > sysdate - 7
ORDER BY apsa.trx_date DESC


PLAN
SELECT STATEMENT, GOAL = RULE
SORT ORDER BY
TABLE ACCESS BY TABLE ROWID AR_PAYMENT_SCHEDULES_ALL
INDEX RANGE SCAN AR_PAYMENT_SCHEDULES_N1

Execution time : 12.897 secs*

Modified
SELECT --+INDEX_DESC(apsa,ar_payment_schedules_n1)
* FROM ar_payment_schedules_all apsa
WHERE apsa.trx_date > sysdate - 7
ORDER BY apsa.trx_date DESC


PLAN
SELECT STATEMENT, GOAL = RULE
SORT ORDER BY
TABLE ACCESS BY TABLE ROWID AR_PAYMENT_SCHEDULES_ALL
INDEX RANGE SCAN DESCENDING AR_PAYMENT_SCHEDULES_N1


Execution time : 3.311 secs*

Fast Full Index scan

Using the above example, we can modify it so that the optimizer uses a fast full index scan. To do this we select only the fields that are in the index, so that the statement does not need to access the table, just the index.
SELECT --+INDEX_DESC(apsa,ar_payment_schedules_n1)
trx_date FROM ar_payment_schedules_all apsa
WHERE apsa.trx_date > sysdate - 7
ORDER BY apsa.trx_date DESC


PLAN
SELECT STATEMENT, GOAL = RULE
INDEX RANGE SCAN DESCENDING AR_PAYMENT_SCHEDULES_N1


Merging indexes (AND_EQUAL)
Sometimes adding filter conditions can result in the use of merged indexes, which can greatly increase the speed of a query, even if the added condition seems irrelevant.
Original
SELECT * FROM mtl_system_items msi
WHERE inventory_item_id > 50000 AND
organization_id = 1 AND
container_item_flag = 'N'
and segment1 like 'A%'


PLAN
SELECT STATEMENT, GOAL = RULE
TABLE ACCESS BY TABLE ROWID MTL_SYSTEM_ITEMS
INDEX RANGE SCAN CUX_XIAO_MTL_SYSTEM_ITEMS_N1

Result: 17 Rows, 74.957 secs*

Modified
SELECT * FROM mtl_system_items msi
WHERE inventory_item_id > 50000 AND
organization_id = 1 AND
item_type = 'STL' AND
container_item_flag = 'N'
AND segment1 like 'A%'


PLAN
SELECT STATEMENT, GOAL = RULE
TABLE ACCESS BY TABLE ROWID MTL_SYSTEM_ITEMS
AND-EQUAL
INDEX RANGE SCAN CUX_MTL_SYSTEM_ITEMS_N1
INDEX RANGE SCAN CUX_XIAO_MTL_SYSTEM_ITEMS_N1


Result: 17 Rows, 2.784 secs*

Results are dependent on server activity and may differ with different servers and different workloads.


BAD SQL
Many speed problems are caused by badly written SQL statements and are not related to how the optimizer builds the execution plan. A badly written SQL statement will still be bad, even when you optimize it with hints. The main things to check for when debugging speed problems are as follows:
Are all the tables joined? Make sure all the tables are joined, and joined on the best columns to eliminate the most records.
Are the tables joined in the proper order? Generally join from parent to child.
Are all tables used? Make sure every table is used, but be careful, as some tables are there only for filtering.
Are there easier or faster ways to join the tables? Find the join path to a table using the least number of join conditions.

What is wrong with this statement?

select sha.org_id,
sha.packing_instructions,
sha.purchase_order_num,
sha.header_id,
so.name||'-'||sha.order_number,
sha.attribute2 picture,
sha.attribute15 invoceremark,
sha.attribute5 memo,
sha.attribute10 dan,
sha.attribute12 hkorder,
decode(sha.warehouse_id,1,'DG',4,'HY',3,'GZ') orga,
sha.warehouse_id warehouse,
decode(sha.attribute3,'I',' ','M',' ','U',' ','DI',' ','DM',' ','DU',' ','D',' ','S',' ','N',' ') la,
decode(sha.attribute4,'I',' ','M',' ','U',' ','DI',' ','DM',' ','DU',' ','D',' ','S',' ','N',' ') lo,
sla.schedule_date,
sla.creation_date s1_date,
sla.schedule_date,
sla.ordered_quantity allquantity,
decode(sla.attribute5,'I',' ','O',' ') water,
rsua.location||'-'||raa.address1,
rt.attribute1 route1,
msi.segment1,
msi.description,
fu.description,
he.full_name
from so_headers_all sha,
so_lines_all sla,
ar_customers ac,
fnd_user fu,
ra_site_uses_all rsua,
ra_addresses_all raa,
ra_site_uses_all rsua1,
ra_addresses_all raa1,
ra_territories rt,
mtl_system_items msi,
so_order_types_all so,
fnd_descr_flex_contexts fdfc,
hr_employees he
where sha.header_id=sla.header_id
and sha.customer_id=ac.customer_id
and sha.org_id=so.org_id
and sha.created_by=fu.user_id
and sla.inventory_item_id=msi.inventory_item_id
and sla.option_flag='N'
and sha.ship_to_site_use_id=rsua1.site_use_id(+)
and sha.invoice_to_site_use_id=rsua.site_use_id(+)
and rsua.address_id=raa.address_id
and rsua1.address_id=raa1.address_id
and rsua1.territory_id=rt.territory_id(+)
and sha.order_type_id=so.order_type_id
and sha.attribute1=he.employee_id(+)
and sla.warehouse_id=msi.organization_id
and fdfc.application_id=300
and fdfc.descriptive_flexfield_name='SO_HEADERS'
and msi.segment1 not like 'G%'
and sha.org_id=:p_org_id
and sha.order_type_id=:p_order_type_id
and sha.order_number between :p_order_number_lo and :p_order_number_hi
and sha.warehouse_id=nvl(:p_warehouse_id,sha.warehouse_id)
and sha.created_by=nvl(:p_created_by,sha.created_by)
and sha.attribute10=nvl(:p_attribute10,sha.attribute10)
order by sha.order_number

Notice
ar_customers ac,
ra_addresses_all raa1,
Both of these tables are joined but never used; the tables, and the joins associated with them, can be safely removed without affecting the results.
Also, the table fnd_descr_flex_contexts (fdfc) has filter conditions but is not joined to anything. This is a bug, and we should have added
and fdfc.descriptive_flex_context_code='M'
to the SQL statement.

CHECK SQL FIRST!

Use of Nested loops
Nested loops outperform all other join methods when a number of large tables are joined in order to narrow down the number of records. In the following example you will see that the optimizer opted to use HASH and MERGE joins; the statement has been modified to return one row.
Original statement


select oof.organization_code||'HI'|| RCT.TRX_NUMBER TRX_NUMBER11,
ra.address_id, rc.customer_id, RCT.SET_OF_BOOKS_ID SOB_ID, RSU.LOCATION,
SLA1.SELLING_PRICE*(SLA1.ordered_quantity-nvl(sla1.cancelled_quantity,0)) AMOUNT_TEMP,
RCT.INVOICE_CURRENCY_CODE CURRENCY_CODE, RCT.CREATION_DATE, RCT.CUSTOMER_TRX_ID,
RC.CUSTOMER_NAME, RC.CUSTOMER_NUMBER, RCT.WAYBILL_NUMBER, RCT.PURCHASE_ORDER,
RA.ADDRESS1, ra.ADDRESS2, ra.ADDRESS3, ra.ADDRESS4,
RCT.TRX_NUMBER, RCT.TRX_DATE,
decode(rt.name,' ',rt.name, RT.NAME||' ') TERM_NAME,
RCTL.SALES_ORDER, MSI.ATTRIBUTE4 DESCRIPTION_INTERFACE,
rctl.attribute_category, RCTL.ATTRIBUTE1, RCTL.ATTRIBUTE2, RCTL.ATTRIBUTE3,
rctl.attribute4||' ' DESCRIPTION , rctl.attribute4 STL_NUM,
(SLA1.ordered_quantity-nvl(sla1.cancelled_quantity,0))/1000 quantity_invoiced,
sla1.LIST_PRICE*1000 unit_selling_price,
SLA1.SELLING_PRICE*1000 BO_PRICE, RCTL.EXTENDED_AMOUNT,
sum(SPA.PERCENT) PERCENT, RCTL.LINE_NUMBER,
sla.original_system_line_reference so_line_id, SHA.ORDER_NUMBER,
DECODE(LENGTH(RCTL.INTERFACE_LINE_ATTRIBUTE2),4,SUBSTR(RCTL.INTERFACE_LINE_ATTRIBUTE2,1,4),7,SUBSTR(RCTL.INTERFACE_LINE_ATTRIBUTE2,4,7),8,SUBSTR(RCTL.INTERFACE_LINE_ATTRIBUTE2,5,8),RCTL.INTERFACE_LINE_ATTRIBUTE2)||'-'||RCTL.SALES_ORDER||'-'||RCT.ATTRIBUTE10 SHIP_NO,
SPA.HEADER_ID
from
RA_CUSTOMER_TRX_ALL RCT, RA_CUSTOMER_TRX_LINES_ALL RCTL,
RA_CUSTOMERS RC, RA_SITE_USES_ALL RSU, RA_ADDRESSES_ALL RA,
RA_TERMS RT, MTL_SYSTEM_ITEMS MSI, org_organization_definitions oof,
SO_PRICE_ADJUSTMENTS SPA, so_LINEs_all sla,
so_LINEs_all sla1, SO_HEADERS_ALL SHA
where
RCT.CUSTOMER_TRX_ID=RCTL.CUSTOMER_TRX_ID(+)
AND RCT.bill_TO_CUSTOMER_ID=RC.CUSTOMER_ID(+)
AND RCT.bill_TO_SITE_USE_ID=RSU.SITE_USE_ID(+)
AND RSU.ADDRESS_ID=RA.ADDRESS_ID(+)
AND RCT.TERM_ID=RT.TERM_ID(+)
AND rct.org_id=oof.organization_id(+)
AND RCTL.INTERFACE_LINE_ATTRIBUTE2 LIKE '%051'
AND RCTL.INTERFACE_LINE_ATTRIBUTE6=SPA.LINE_ID(+)
and RCTL.INTERFACE_LINE_ATTRIBUTE6=sla.line_id(+)
AND nvl(SLA.original_system_LINE_reference,sla.line_id)=SLA1.LINE_ID(+)
AND SLA1.HEADER_ID=SHA.HEADER_ID(+)
AND RCTL.INVENTORY_ITEM_ID=MSI.INVENTORY_ITEM_ID
AND RCT.ORG_ID=MSI.ORGANIZATION_ID
and rctl.line_type = 'LINE'
and rct.trx_number='10003789'
AND RCT.org_ID=5
and RC.CUSTOMER_NUMBER='H188'
and rctl.description not like '% %' and rctl.description not like '% %'--AND --NVL(RSU.ATTRIBUTE1,RCT.INVOICE_CURRENCY_CODE)=NVL(:P_CURRENCY_CODE,GSOB.CURRENCY_CODE)
GROUP BY
oof.organization_code||'HI'|| RCT.TRX_NUMBER,
ra.address_id,
rc.customer_id,
RCT.SET_OF_BOOKS_ID,
RSU.LOCATION,
SLA1.SELLING_PRICE*(SLA1.ordered_quantity-nvl(sla1.cancelled_quantity,0)),
RCT.INVOICE_CURRENCY_CODE,
RCT.CREATION_DATE,
RCT.CUSTOMER_TRX_ID,
RC.CUSTOMER_NAME,
RC.CUSTOMER_NUMBER,
RCT.WAYBILL_NUMBER,
RCT.PURCHASE_ORDER,
RA.ADDRESS1,ra.ADDRESS2,ra.ADDRESS3,ra.ADDRESS4 ,
RCT.TRX_NUMBER,
RCT.TRX_DATE,
decode(rt.name,' ',rt.name, RT.NAME||' '),
RCTL.SALES_ORDER,
MSI.ATTRIBUTE4,
rctl.attribute_category,
RCTL.ATTRIBUTE1,
RCTL.ATTRIBUTE2,
RCTL.ATTRIBUTE3,
rctl.attribute4||' ',
rctl.attribute4,
(SLA1.ordered_quantity-nvl(sla1.cancelled_quantity,0))/1000,
sla1.LIST_PRICE*1000,
SLA1.SELLING_PRICE*1000,
RCTL.EXTENDED_AMOUNT,
RCTL.LINE_NUMBER,
sla.original_system_line_reference,
SHA.ORDER_NUMBER,
DECODE(LENGTH(RCTL.INTERFACE_LINE_ATTRIBUTE2),4,SUBSTR(RCTL.INTERFACE_LINE_ATTRIBUTE2,1,4),7,SUBSTR(RCTL.INTERFACE_LINE_ATTRIBUTE2,4,7),8,SUBSTR(RCTL.INTERFACE_LINE_ATTRIBUTE2,5,8),RCTL.INTERFACE_LINE_ATTRIBUTE2)||'-'||RCTL.SALES_ORDER||'-'||RCT.ATTRIBUTE10,
SPA.HEADER_ID
ORDER BY RCT.TRX_NUMBER



PLAN (for STAR)
SELECT STATEMENT, GOAL = RULE
SORT GROUP BY
HASH JOIN OUTER
HASH JOIN OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
HASH JOIN
TABLE ACCESS BY INDEX ROWID MTL_SYSTEM_ITEMS
INDEX RANGE SCAN MTL_SYSTEM_ITEMS_N1
HASH OUTER JOIN
NESTED LOOPS OUTER
HASH JOIN
TABLE ACCESS FULL RA_CUSTOMER_TRX_LINES_ALL
NESTED LOOPS OUTER
NESTED LOOPS
TABLE ACCESS BY INDEX ROWID RA_CUSTOMERS
INDEX RANGE SCAN RA_CUSTOMERS_U3
TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_ALL
INDEX RANGE SCAN RA_CUSTOMER_TRX_N11
VIEW ORG_ORGANIZATION_DEFINITIONS
HASH JOIN
TABLE ACCESS FULL GL_SETS_OF_BOOKS
MERGE JOIN CARTESIAN
HASH JOIN
HASH JOIN
TABLE ACCESS FULL HR_ORGANIZATION_INFORMATION
NESTED LOOP
TABLE ACCESS FULL MTL_PARAMETERS
INDEX UNIQUE SCAN HR_ORGANIZATION_UNITS_PK
TABLE ACCESS FULL HR_ORGANIZATION_INFORMATION
SORT JOIN
TABLE ACCESS FULL FND_PRODUCT_GROUPS
TABLE ACCESS BY ROWID RA_SITE_USES_ALL
INDEX UNIQUE SCAN RA_SITE_USES_U1
TABLE ACCESS FULL RA_TERMS
TABLE ACCESS BY INDEX ROWID RA_ADDRESSES_ALL
INDEX UNIQUE SCAN RA_ADDRESSES_U1
TABLE ACCESS BY INDEX ROWID SO_LINES_ALL
INDEX UNIQUE SCAN SO_LINES_U1
TABLE ACCESS BY INDEX ROWID SO_LINES_ALL
INDEX UNIQUE SCAN SO_LINES_U1
TABLE ACCESS FULL SO_HEADERS_ALL
TABLE ACCESS FULL SO_PRICE_ADJUSTMENTS


Execution time (using --+ORDERED): 91.795 secs*
Execution time (using --+STAR): 59.137 secs*

Modified
All we added was a hint to force the optimizer to use nested loops (which made a large difference) and hints for a few indexes (which made slight differences):
select --+USE_NL INDEX (sha,so_headers_u1) INDEX (rctl,ra_customer_trx_lines_n2)
etc...


PLAN
SELECT STATEMENT, GOAL = RULE
SORT GROUP BY
MERGE JOIN OUTER
SORT JOIN
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
FILTER
NESTED LOOPS OUTER
NESTED LOOPS
FILTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_ALL
INDEX RANGE SCAN RA_CUSTOMER_TRX_N1
TABLE ACCESS BY INDEX ROWID RA_TERMS
INDEX UNIQUE SCAN RA_TERMS_U1
TABLE ACCESS BY INDEX ROWID RA_SITE_USES_ALL
INDEX UNIQUE SCAN RA_SITE_USES_U1
TABLE ACCESS BY INDEX ROWID RA_ADDRESSES_ALL
INDEX UNIQUE SCAN RA_ADDRESSES_U1
TABLE ACCESS BY INDEX ROWID RA_CUSTOMERS
INDEX UNIQUE SCAN RA_CUSTOMERS_U1
TABLE ACCESS BY INDEX ROWID MTL_SYSTEM_ITEMS
INDEX RANGE SCAN MTL_SYSTEM_ITEMS_N4
TABLE ACCESS BY INDEX ROWID RA_CUSTOMER_TRX_LINES_ALL
INDEX RANGE SCAN RA_CUSTOMER_TRX_LINES_N2
TABLE ACCESS BY INDEX ROWID SO_LINES_ALL
INDEX UNIQUE SCAN SO_LINES_U1
TABLE ACCESS BY INDEX ROWID SO_LINES_ALL
INDEX UNIQUE SCAN SO_LINES_U1
TABLE ACCESS BY INDEX ROWID SO_HEADERS_ALL
INDEX UNIQUE SCAN SO_HEADERS_U1
TABLE ACCESS BY INDEX ROWID SO_PRICE_ADJUSTMENTS
INDEX RANGE SCAN SO_PRICE_ADJUSTMENTS_N2
SORT JOIN
VIEW
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
MERGE JOIN
SORT JOIN
NESTED LOOPS
TABLE ACCESS FULL FND_PRODUCT_GROUPS
TABLE ACCESS FULL GL_SETS_OF_BOOKS
SORT JOIN
TABLE ACCESS FULL HR_ORGANIZATION_INFORMATION
INDEX UNIQUE SCAN HR_ORGANIZATION_UNITS_PK
TABLE ACCESS BY INDEX ROWID MTL_PARAMETERS
INDEX UNIQUE SCAN MTL_PARAMETERS_U1
TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION
INDEX RANGE SCAN HR_ORGANIZATION_INFORMATION_FK2


Execution time : 4.958 secs*

Final notes

The skill of optimization is not just knowledge of how it is done; this document can only describe what can be done. The real skill of optimization is experience: knowing when to use which technique to achieve the desired goal.
All optimizations depend on the data in the tables and the indexes available. As you have seen above, sometimes what seems like a less efficient method is actually better, e.g. using a full table scan instead of an index, or splitting a statement with UNION ALL to force the optimizer to do different things for different parameters. In the end it is up to the developer, not the optimizer, to produce the best method for achieving the desired goal.


