Sunday, September 7, 2025

Oracle CBO decisions for join order with multiple tables

 

Have you ever wondered how Oracle handles queries with multiple joins?

How does Oracle find the best order of tables in a large SQL statement?

Does the order in which tables are written in a query with multiple joins make a difference?


Note:

In this topic, I focus on the inner workings of Oracle when producing an execution plan for queries containing a large number of joins.

If you have a particular query for which the optimizer has produced a poor plan, you will need to test and examine many factors that are beyond the scope of this topic.


 Join order in Oracle Database

The order in which multiple tables are joined together.

For example, for each row in the employees table, the database can read each row in the departments table. In an alternative join order, for each row in the departments table, the database reads each row in the employees table.

To execute a statement that joins more than two tables, Oracle Database joins two of the tables and then joins the resulting row source to the next table. This process continues until all tables are joined into the result.
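As a quick illustration (assuming the standard HR sample schema), you can see which join order the optimizer actually chose by displaying the plan:

```sql
EXPLAIN PLAN FOR
SELECT e.last_name, d.department_name, l.city
FROM   employees e
JOIN   departments d ON e.department_id = d.department_id
JOIN   locations   l ON d.location_id   = l.location_id;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

The order of the row sources in the plan output is the join order the CBO selected, regardless of how the tables are written in the FROM clause.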

 

🔹 1. Join Order Permutations

  • When Oracle needs to join multiple tables, there are many possible join orders (permutations).
  • For example, joining 3 tables (A, B, C) has 6 possible orders (A→B→C, A→C→B, etc.).
  • As the number of tables grows, permutations explode factorially:
    • 3 tables → 6 permutations
    • 4 tables → 24
    • 5 tables → 120
    • 10+ tables → millions of possible join orders.

Clearly, testing all permutations would be computationally expensive.
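If you want to verify this factorial growth yourself, a quick recursive query (Oracle 11gR2+ syntax) computes it:

```sql
-- n! = number of possible join orders for n tables
WITH perms (n, join_orders) AS (
  SELECT 1, 1 FROM dual
  UNION ALL
  SELECT n + 1, join_orders * (n + 1)
  FROM   perms
  WHERE  n < 10
)
SELECT n, join_orders FROM perms;
```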


🔹 2. How Oracle Reduces the Search Space

Oracle does not test every possible join order blindly. Instead, it applies heuristics and dynamic programming techniques:

  • Query block transformation: Oracle may reorder tables for efficiency unless ORDERED or LEADING hints force a sequence.
  • Join commutativity: Inner joins are commutative, so Oracle can flip them.
    Example: A JOIN B ≡ B JOIN A.
  • Join associativity: Oracle can regroup joins (A JOIN B) JOIN C vs. A JOIN (B JOIN C) if they are associative.
  • Pruning bad plans early: Oracle uses heuristics to discard join orders that are unlikely to be optimal.
  • Star transformation & bushy joins: For star schemas, Oracle considers special transformations (fact + dimensions) and sometimes bushy join trees.

 

🔹 3. Cost-Based Decision

  • Oracle’s CBO assigns a cost to each candidate join order.
  • It estimates:
    • Cardinality (row counts after filtering)
    • Selectivity (filter conditions, histograms, bind peeking)
    • Join method (nested loops, hash join, sort merge join)
    • I/O, CPU, memory, parallelism
  • Then it picks the lowest-cost plan.
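These estimates are visible directly in the plan output; a sketch (table and column names are illustrative):

```sql
EXPLAIN PLAN FOR
SELECT o.order_id, c.cust_name
FROM   orders o JOIN customers c ON o.cust_id = c.cust_id
WHERE  o.order_date > SYSDATE - 30;

-- +ROWS shows the estimated cardinality, +COST the CBO cost of each step
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY(format => 'BASIC +ROWS +COST'));
```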

 

🔹 4. Controlling Join Order

You can influence join order in Oracle:

  • ORDERED hint → Enforces the join order in the SQL text.
  • LEADING hint → Specifies which table(s) should appear first in the join order.
  • JOIN hints (USE_NL, USE_HASH, USE_MERGE) → Influence join method but also indirectly affect join order.
  • Optimizer parameters like optimizer_search_limit (limits permutations in older versions).
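A sketch of these hints in use (table names are illustrative):

```sql
-- ORDERED: join exactly in the order written in the FROM clause
SELECT /*+ ORDERED */ d.dim_name, f.amount
FROM   small_dim d, big_fact f
WHERE  f.dim_id = d.dim_id;

-- LEADING + USE_HASH: start with DEPT, then hash-join EMP to it
SELECT /*+ LEADING(d e) USE_HASH(e) */ e.ename, d.dname
FROM   emp e, dept d
WHERE  e.deptno = d.deptno;
```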

 

In short:
Oracle does not brute-force all join order permutations. Instead, it uses CBO-driven pruning, heuristics, and transformations to evaluate only promising join orders and then selects the lowest-cost plan.

 

Effect of Number of Tables on Join Order Permutations

The CBO will evaluate n! different join orders for a particular query where n is the number of tables in the query. In reality the actual number may be 
less than this because the optimization mechanism avoids repeating work and will not continue down paths that it knows are more expensive than the best
it has already found. There are also numerous other cutoffs that try to prevent the optimizer from spending too long evaluating a plan. One of the better known
of these is the cutoff at 80,000 permutations. With large numbers of tables, 80,000 permutations may be an insignificant proportion of the possible number.
This can mean that the best plan has not been found before the optimizer stops evaluation. See below for figures and percentages.

Unfortunately, with large numbers of tables, the time spent evaluating a single permutation can be significantly greater than with fewer tables.

This means that 50,000 permutations with 15 tables can take significantly longer than with 8 tables.

 

The following table shows the total number of join-order permutations for n tables and what percentage of that total 80,000 permutations represents. This gives an idea of how complete the optimizer's evaluation of a particular plan may be.

Number of tables | Total permutations (n!) | 80,000 as % of n! (80,000 / n! * 100)
---------------- | ----------------------- | -------------------------------------
 1               | 1                       | Not relevant
 2               | 2                       | Not relevant
 3               | 6                       | Not relevant
 4               | 24                      | Not relevant
 5               | 120                     | Not relevant
 6               | 720                     | Not relevant
 7               | 5,040                   | Not relevant
 8               | 40,320                  | Not relevant
 9               | 362,880                 | 22%
10               | 3,628,800               | 2.2%
11               | 39,916,800              | 0.2%
12               | 479,001,600             | 0.016%
13               | 6,227,020,800           | 0.001285%
14               | 87,178,291,200          | 0.000092%
15               | 1,307,674,368,000       | 0.000006%

 

Clearly, an 80,000-permutation sample is not sufficient for a 15-table join.


Note that a join of 10 or more tables is highly unlikely to ever have a different starting table in the join order, simply because there are more than 80,000 permutations

for the 9 remaining tables alone. So, if the first table of the best plan is not the first table in the chosen join order, you cannot achieve the best plan.

To avoid these problems, use techniques as follows:

·         Determine the best access path by testing and then use hints to force the join order (ORDERED hint).

·         Use the OPTIMIZER_MAX_PERMUTATIONS parameter to change the maximum number of permutations considered from 80,000 to something else.

             Increasing the parameter means that more permutations will be examined.

            Lowering this parameter value changes the internal algorithm to vary the starting table in the join order.

Important Considerations:

  • Oracle 10g and later:

The parameter is obsolete starting with Oracle 10g. If upgrading from an older version where it was set, you may need to explicitly set OPTIMIZER_FEATURES_ENABLE to an older version to ensure the same behavior. 

  • Parse Time vs. Plan Quality:

The main tradeoff is between parse time and the quality of the execution plan. 

  • Recommended Values:

Setting it to a value less than 1000 will usually result in parse times of a few seconds or less, but finding an optimal value requires comprehensive testing.

  • Not dynamic:

OPTIMIZER_MAX_PERMUTATIONS is not a dynamic parameter, meaning it can only be changed by modifying the initialization parameter file and restarting the instance.

  • Obsolete in later versions:

While still present in older versions, it's not a primary control for optimization in newer releases. Other parameters like optimizer_features_enable and hints are more relevant in those versions. 

 

  • In older releases (like Oracle 8i, 9i), this parameter limited the maximum number of join order permutations that the optimizer would consider when evaluating queries.
  • Its purpose was to avoid excessive optimization time when joining many tables.
  • Starting from Oracle 10g onward, Oracle removed it and replaced it with smarter join-ordering algorithms (dynamic pruning, join heuristics, and query transformation).

In Oracle Database 19c (and also 12c and 18c):

  • OPTIMIZER_MAX_PERMUTATIONS is obsolete (you won’t even find it in v$parameter).
  • The optimizer automatically controls join order exploration with internal limits.

 

What to use instead

If you need to control join order in 19c:

  • Use join order hints:
    • ORDERED (forces the order written in the query)
    • LEADING (specifies the join leading table)
  • Use join method hints:
    • USE_NL, USE_HASH, USE_MERGE
  • Restructure queries with query blocks (WITH clause, subqueries).
  • For very complex joins, consider SQL Plan Baselines or SQL Profiles, SPM to stabilize the optimizer’s choice.
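For example, once a good plan is in the cursor cache, it can be captured into a SQL Plan Baseline (the sql_id below is hypothetical):

```sql
DECLARE
  n PLS_INTEGER;
BEGIN
  -- Capture the cached plan of a known-good statement into SPM
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '7ztv2z24kw0s0');
  DBMS_OUTPUT.PUT_LINE(n || ' plan(s) loaded');
END;
/
```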

 

OPTIMIZER_SEARCH_LIMIT:
This parameter specifies the upper limit above which the CBO will not automatically evaluate cartesian products where equality joins are not specified
in the base query. The reason is to prevent the optimizer from spending an inordinate amount of time on join orderings that are likely to be suboptimal.

 

In Oracle, two parameters historically controlled how many join permutations the optimizer considers:

  • OPTIMIZER_MAX_PERMUTATIONS → capped the total number of join-order permutations evaluated (Oracle 8i/9i).
  • OPTIMIZER_SEARCH_LIMIT → set the number of tables above which the CBO would not automatically evaluate Cartesian products (Oracle 8i/9i).

OPTIMIZER_SEARCH_LIMIT and Oracle 19c

  • Like OPTIMIZER_MAX_PERMUTATIONS, this parameter is obsolete from Oracle 10g onward; in 19c the optimizer controls join-order exploration with internal limits.
  • Purpose (in the releases that supported it): limit the number of join permutations the optimizer considers during cost-based join order optimization.

⚠️ In those older releases, setting it too high could severely impact parsing performance for SQL with many joined tables.

Example: setting the parameter (only in releases where it is still supported)

ALTER SYSTEM SET OPTIMIZER_SEARCH_LIMIT = 2000 SCOPE=BOTH;

-- Or at session level:

ALTER SESSION SET OPTIMIZER_SEARCH_LIMIT = 2000;


Recommendation for newer versions such as 19c

Oracle usually manages join order exploration very well internally, so changing this parameter is rarely recommended unless you’re troubleshooting a specific query with too many joins and poor optimizer decisions.

👉 In practice, better approaches include:

  • Using SQL Plan Baselines (SPM) and other new facilities in 23ai
  • Adding appropriate join order hints (LEADING, ORDERED)
  • Ensuring good schema design, statistics, histograms, cardinality, selectivity, correct indexing, partitioning, …

 

•  If Oracle handles join orders automatically, do we need to pay attention to table order in SQL statements with multiple joins?

•  Must these orders be managed manually by DBAs/developers, or does Oracle find the best execution plan automatically?

 

🔹 Oracle’s Default Behavior

  • Oracle joins two tables first, then adds the next, step by step, until all are joined.
  • But which tables it starts with, and in what order, is chosen by the Cost-Based Optimizer (CBO), unless you explicitly force it.
  • The CBO:
    • Estimates cardinality (rows after filters).
    • Chooses join order + join method (NL/Hash/Merge).
    • Picks the lowest-cost plan based on statistics, histograms, system resources, etc.

So, by default: Oracle does not simply follow your SQL text’s table order. It reorders joins automatically into what it estimates is most efficient, and CBO parameters in the database can lead to different behavior in practice.

 

🔹 Do DBAs/Devs Need to Worry About Join Order?

👉 Usually, no — if:

  • You have accurate statistics (DBMS_STATS gathered, histograms where needed).
  • The SQL is well-formed and written with clear join conditions.
  • Data distribution is not heavily skewed (or you’ve provided histograms).

In those cases, Oracle will find the best join order automatically.
That’s why in modern Oracle (10g onward), the old OPTIMIZER_MAX_PERMUTATIONS was removed — because the CBO does the heavy lifting.

 

🔹 When You Should Care About Join Order

A DBA/dev needs to manually influence join order when:

  1. Bad stats: Optimizer misestimates cardinality (e.g., skewed data, missing histograms).
  2. Complex queries: 8–10+ tables with multiple join paths — CBO might prune too aggressively.
  3. Specific performance issues: You see a bad plan (e.g., large table joined too early).
  4. Star schemas/warehouses: Sometimes CBO misses the optimal dimension join order.
  5. Optimizer bugs/edge cases: Rare but real.
  6. Bad CBO parameter settings (check Adaptive Query Plans, existing SQL Profiles, ASH reports, schema design, …)

 

🔹 Tools to Influence Join Order

If you find Oracle chose badly, you can:

  • LEADING (table1 table2 …) hint → explicitly control starting join order.
  • ORDERED hint → forces join order as written in SQL.
  • USE_NL / USE_HASH / USE_MERGE hints → force join methods, indirectly affecting order.
  • Query rewrite / subquery factoring (WITH clause) → isolate blocks to control execution.
  • ASH reports, SPM, SQL Profiles, schema redesign, and a check of CBO parameters such as the adaptive features.

 

Summary

  • Oracle 19c CBO: Automatically finds join order → DBA usually does not need to manually set it.
  • Focus: Ensure statistics are accurate and representative of the data.
  • Exceptions: Very complex queries, skewed data, or specific performance issues → then DBAs/devs should influence join order manually with hints or rewrites.

 In practice:
90% of the time → let Oracle decide.
10% of the time (skewed data, huge joins, critical queries, bad indexing, missing partitioning on huge tables, bad histograms, chained/migrated rows, CBO parameters,
Adaptive Query Plans, and many other variables) → step in and guide it.


See this essay for more info:

https://vldb.org/cidrdb/papers/2021/cidr2021_paper01.pdf

Tuesday, September 2, 2025

Oracle storage parameters in tables, with techniques for tuning and optimizing heavy workloads


Paying attention to storage-level settings on tables and indexes is an important part of reducing the workload on tables in Oracle. Understanding these parameters, adjusting them precisely, and combining them with other techniques can help greatly in tuning the database. In this post, I explain the key concepts, techniques, and conditions required to tune these settings.


Introduction: 


PCTFREE, PCTUSED, and High Water Mark in Oracle


In Oracle Database, efficient space management inside data blocks is controlled by storage parameters such as PCTFREE and PCTUSED, which directly affect how rows are inserted, updated, and reused. 


These parameters, in turn, influence the High Water Mark (HWM) of a table or segment, which represents the boundary between used and unused space.


1. PCTFREE (Percent Free)

Defines the minimum percentage of a block reserved as free space for future row expansions (updates that increase row size).

Ensures that updates do not cause excessive row migration or chaining.


Example: If PCTFREE = 20, then 20% of each block is reserved for row growth, and inserts stop once only 20% space remains free.


Restriction on the PCTFREE Clause

When altering an index, you can specify this parameter only in the modify_index_default_attrs clause and the split_index_partition clause.


PCTFREE has the same function in the statements that create and alter tables, partitions, clusters, indexes, materialized views, materialized view logs, and zone maps. The combination of PCTFREE and PCTUSED determines whether new rows will be inserted into existing data blocks or into new blocks.


2. PCTUSED (Percent Used)

Defines the threshold below which a block becomes available again for new inserts.

Once a block is filled up to the PCTFREE limit, it will not accept new rows until existing rows are deleted or shrink enough that the used space drops below PCTUSED.

Example: If PCTUSED = 40, a block will re-enter the freelist once less than 40% of its space is used.


Restrictions on the PCTUSED Clause

The PCTUSED parameter is subject to the following restrictions:

  • You cannot specify this parameter for an index or for the index segment of an index-organized table. 
  • This parameter is not useful and is ignored for objects with automatic segment-space management.


How PCTFREE and PCTUSED Work Together

In a newly allocated data block, the space available for inserts is the block size minus the sum of the block overhead and free space (PCTFREE). Updates to existing data can use any available space in the block. Therefore, updates can reduce the available space of a block to less than PCTFREE.

After a data block is filled to the limit determined by PCTFREE, Oracle Database considers the block unavailable for the insertion of new rows until the percentage of that block falls beneath the parameter PCTUSED. Until this value is achieved, Oracle Database uses the free space of the data block only for updates to rows already contained in the data block. A block becomes a candidate for row insertion when its used space falls below PCTUSED.
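Both parameters belong to the segment's physical attributes; a sketch (PCTUSED is honored only in MSSM tablespaces):

```sql
CREATE TABLE orders_hist
(
   order_id  NUMBER,
   status    VARCHAR2(20),
   note      VARCHAR2(400)
)
PCTFREE 20    -- keep 20% of each block free for row growth
PCTUSED 40;   -- block rejoins the freelist when usage drops below 40%
```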


3. INITRANS

Specify the initial number of concurrent transaction entries allocated within each data block allocated to the database object. This value can range from 1 to 255 and defaults to 1, with the following exceptions:

  • The default INITRANS value for a cluster is 2 or the default INITRANS value of the tablespace in which the cluster resides, whichever is greater. 
  • The default value for an index is 2.

In general, you should not change the INITRANS value from its default.

Each transaction that updates a block requires a transaction entry in the block. This parameter ensures that a minimum number of concurrent transactions can update the block and helps avoid the overhead of dynamically allocating a transaction entry. 

The INITRANS parameter serves the same purpose in the statements that create and alter tables, partitions, clusters, indexes, materialized views, and materialized view logs.
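A sketch of setting INITRANS (object names are illustrative). Note that changing it on an existing segment affects only newly formatted blocks, so a move/rebuild is needed to apply it to existing ones:

```sql
CREATE TABLE hot_table (id NUMBER, val VARCHAR2(100)) INITRANS 4;

ALTER TABLE hot_table INITRANS 8;             -- applies to new blocks only
ALTER TABLE hot_table MOVE;                   -- reformats existing blocks
ALTER INDEX hot_table_pk REBUILD INITRANS 8;  -- indexes must be rebuilt after a move
```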


4. Relation to the High Water Mark (HWM)

The HWM marks the boundary between blocks that have been formatted for use and those never touched by data.

Inserts can push the HWM upward as new blocks are used.

PCTFREE and PCTUSED control space reuse below the HWM, while the HWM itself keeps growing upward when existing blocks are full.

Deletes do not lower the HWM, meaning once it is raised, it stays there until segment shrink, table move, or reorganization is performed.
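Because deletes never lower the HWM, it must be reset explicitly; the common options (SHRINK requires ASSM; names illustrative):

```sql
ALTER TABLE big_log ENABLE ROW MOVEMENT;
ALTER TABLE big_log SHRINK SPACE;   -- online shrink: lowers the HWM (ASSM only)

ALTER TABLE big_log MOVE;           -- rebuilds the segment (rebuild indexes after)

TRUNCATE TABLE big_log;             -- also resets the HWM, but removes all rows
```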


Feature  | PCTFREE                          | PCTUSED                                     | HWM
-------- | -------------------------------- | ------------------------------------------- | ---------------------------------------------------
Purpose  | Reserves space for row expansion | Controls when blocks can accept new rows    | Defines boundary between used and never-used blocks
Trigger  | Affects when inserts stop        | Affects when deleted blocks rejoin freelist | Raised only by inserts (not lowered by deletes)
Focus    | Row growth management            | Space reuse management                      | Segment space boundary
Impact   | Prevents row migration           | Improves block reuse efficiency             | Controls full table scan cost & storage usage


Therefore,

PCTFREE = reservation for row growth.

PCTFREE protects update space.

PCTUSED = threshold for reusing blocks.

PCTUSED enables block reuse.

HWM = the highest point ever used in a segment, independent of free space inside blocks.


Must PCTFREE + PCTUSED equal 100?!


This is one of the common misunderstandings about PCTFREE and PCTUSED in Oracle.

They do not have to sum to 100, but their sum must not exceed 100.


1. Formula for PCTFREE


PCTFREE determines the percentage of a block reserved for future updates (row expansion).


Usable space for inserts in a block = 

Block size - Overhead - (Block size * (PCTFREE/100))


So, if PCTFREE = 20 in an 8 KB block (assuming ~100 bytes overhead):


8192 - 100 - (8192 * 0.20) ≈ 6454 bytes available for inserts


2. Formula for PCTUSED


PCTUSED defines the lower limit of used space before a block is considered reusable.


Reusable threshold = 

Block size * (PCTUSED /100)


If PCTUSED = 40 in an 8 KB block:


8192 * 0.40 = 3276.8 bytes


Once used space in a block falls below ~3277 bytes, it is added back to the freelist for inserts.


So, must PCTFREE + PCTUSED = 100?


No. They are independent values, and their sum does not have to equal 100.


In fact, Oracle requires that:


PCTFREE + PCTUSED <= 100


Otherwise, there would be no usable range for inserts/deletes.


Example:

PCTFREE = 20, PCTUSED = 40 → Valid (20 + 40 = 60 ≤ 100)


PCTFREE = 60, PCTUSED = 50 → Invalid (110 > 100)


Example Setup:


Block size = 8 KB (8192 bytes)

Block overhead = ~192 bytes (metadata inside block)

Usable space = 8192 - 192 = 8000 bytes


Storage parameters:


PCTFREE = 20 → Reserve 20% for updates = 8000 × 0.20 = 1600 bytes


PCTUSED = 40 → Threshold = 8000 × 0.40 = 3200 bytes


Step 1: Initial Inserts


Oracle allows inserts until used space reaches 8000 - 1600 = 6400 bytes.


After this point, the block is considered “full” (even though 1600 bytes remain free).


So if each row = 800 bytes:


Maximum inserts = 6400 ÷ 800 = 8 rows.


At this point, free space = 1600 bytes reserved (PCTFREE).


The block is removed from freelist → no more new rows inserted here.


Step 2: Updates (Row Expansion)


Suppose one row expands from 800 → 1200 bytes:


Space used = 6400 + 400 = 6800 bytes.


Still fine, because reserved PCTFREE (1600) covered row growth.


This prevents row migration.


Step 3: Deletes


Now, delete 6 rows (4800 bytes freed).


New used space = 6800 - 4800 = 2000 bytes.

Threshold for reuse = PCTUSED = 3200 bytes.

Since 2000 < 3200, block goes back into freelist → eligible for new inserts.


Step 4: Reuse of Block


When inserts happen again, they can reuse this block until used space grows back to 6400 bytes(PCTFREE limit).


Visual Timeline


  1. Fresh block → available for inserts.
  2. Insert rows until 6400 bytes used → block “full,” no new rows allowed.
  3. Updates grow rows → free reserved space (1600 bytes) is used.
  4. Deletes reduce space below 3200 (PCTUSED) → block re-enters freelist.
  5. New inserts reuse block until limit reached again.


This cycle shows how:


PCTFREE controls when inserts stop (reserve for updates).

PCTUSED controls when inserts can start again (after deletes).

They work together to avoid wasted space and row migration.


What does PCTFREE = 0 or PCTFREE = 100 mean?

When do we set these values?
When we set these values?


1. What PCTFREE Does


PCTFREE = % of space in each block reserved for future row expansion (updates that increase row size).

Oracle stops inserting rows when only that % of free space is left.


2. Extreme Cases


Case A: PCTFREE = 0


Means no space is reserved for updates.

Oracle fills the block completely with rows until there is only block overhead left.


Good for:

Static / insert-only tables (no updates).

Data warehouse fact tables, append-only logs.


Risk:

If rows are later updated and expand, row migration occurs (row moves to another block, leaving a pointer behind).


This can hurt query performance.


CREATE TABLE sales_data

(

   sale_id     NUMBER,

   customer_id NUMBER,

   amount      NUMBER,

   sale_date   DATE

)

PCTFREE 0;


Since this is append-only (no updates), Oracle can fully pack rows per block = maximum storage efficiency.


Case B: PCTFREE = 100

Means 100% of block is reserved for updates.

Oracle will insert at most one row per block, then stop using that block for further inserts.

This is almost never practical, except for very special cases (like experimental testing).

Effectively wastes space, but guarantees maximum space for row expansion.


CREATE TABLE archive_data

(

   doc_id    NUMBER,

   doc_text  CLOB

)

PCTFREE 100;


This would result in one row per block — highly inefficient in most scenarios.


3. Best Practice for Tables with No Updates

Use PCTFREE = 0 → maximize rows per block, minimize wasted space.

Example workloads:

Audit tables (insert-only).

Data warehouse staging/fact tables (bulk-loaded, no updates).

History/archive tables (immutable).


Summary:

PCTFREE = 0 → best for insert-only/static tables, maximum efficiency.

PCTFREE = 100 → wastes space (one row per block), only useful in rare cases.

For normal OLTP tables with updates, choose something in between (e.g., 10–20%).


On ASSM and MSSM env:

As a DBA, when do we need to adjust PCTFREE or PCTUSED?


1. Manual Segment Space Management (MSSM)


In MSSM, PCTFREE and PCTUSED directly control block availability:


PCTFREE reserves free space in each block for row updates.

PCTUSED determines when a block goes back to the freelist.

DBAs sometimes tuned these values for OLTP vs. DSS workloads.

But selecting MSSM is normally not recommended unless you face special issues.


2. Automatic Segment Space Management (ASSM)


In ASSM (default for locally managed tablespaces), Oracle does not use PCTUSED.

Instead, Oracle internally manages block availability with bitmaps that track free space categories (0–25%, 25–50%, 50–75%, 75–100%).

This removes freelist management overhead.


PCTFREE still matters:

It is still honored, because Oracle must know how much space to leave in each block for row expansion.

Example: If PCTFREE = 20, Oracle will stop inserting into a block when 20% free space remains, even in ASSM.


3. So, when should you change them in ASSM?

Change PCTFREE if:

Your rows are likely to expand after inserts (updates to VARCHAR2, CLOB, etc.) → increase PCTFREE.

Your rows are mostly fixed-size, or inserts dominate with no growth → decrease PCTFREE (more rows per block).

Do not change PCTUSED in ASSM:

It’s ignored by ASSM.

Oracle decides block reuse automatically via bitmaps.
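Changing PCTFREE on an existing table is straightforward, but the new value applies only to blocks formatted afterwards (table name illustrative):

```sql
ALTER TABLE customer_notes PCTFREE 30;  -- future blocks keep 30% free

-- To re-pack existing blocks under the new setting:
ALTER TABLE customer_notes MOVE;        -- rebuild its indexes afterwards
```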


Key takeaway:

In ASSM, you only tune PCTFREE (for row growth).

PCTUSED is irrelevant because ASSM manages space reuse internally.


As a DBA, can adjusting PCTFREE help minimize block contention during heavy inserts?


1. What PCTFREE really does


PCTFREE controls reserved space per block for future row expansion, not a direct “rows per block” setting.

But increasing PCTFREE reduces the usable space per block → fewer rows fit per block → more blocks are used overall → contention on individual blocks may reduce.


So yes:

Higher PCTFREE = fewer rows per block = inserts spread across more blocks.


Lower PCTFREE = more rows per block = fewer blocks used, but higher risk of buffer busy waits or ITL contention.


2. When this helps:


In high-concurrency OLTP inserts where many sessions insert into the same table, contention often happens because:


Too many sessions are inserting into the same hot block (especially at the segment’s “end”).


ITL (Interested Transaction List) slots inside blocks are exhausted.


Raising PCTFREE forces Oracle to move on to new blocks earlier, spreading inserts and lowering hot-block contention.


3. Alternatives (often better than just raising PCTFREE):


Increase INITRANS on the table/index → ensures more ITL slots for concurrent inserts in each block.


Use ASSM (Automatic Segment Space Management) → already spreads inserts more evenly across blocks via bitmaps, reducing freelist/block contention.


Partitioning (hash/range) → spreads inserts across multiple segments.


Reverse key indexes (for sequences) → avoids “hot index leaf block” contention.


Sequence caching (CACHE + NOORDER or Scalable Sequences) → reduces contention on sequence numbers.
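A few of these alternatives sketched in DDL (names illustrative; scalable sequences require 18c+):

```sql
-- More ITL slots per block for concurrent DML
ALTER TABLE orders INITRANS 8;

-- Reverse key index: scatters sequential key values across leaf blocks
CREATE INDEX orders_id_rix ON orders (order_id) REVERSE;

-- Cached, unordered sequence reduces contention on sequence allocation;
-- SCALE (18c+) also prefixes values to spread index inserts
CREATE SEQUENCE order_seq CACHE 1000 NOORDER;
CREATE SEQUENCE order_seq_scal SCALE;
```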


4. Tradeoff of high PCTFREE

You’re wasting block space → more blocks needed → larger table size.

Can increase buffer cache usage and I/O footprint.

It’s a workaround, not a root fix, for contention.


Conclusion :

Yes, you can raise PCTFREE to reduce the number of rows per block and spread inserts across more blocks, which can help minimize block contention.

But usually, the better approach is to tune INITRANS, ASSM, partitioning, or sequence/index design, because PCTFREE is mainly meant for row expansion, not concurrency control.



Written by:

Alireza Kamrani
