Locked Out

I know many of you enjoy my stories about mistakes I’ve made and how I’ve solved them, so here’s another log for the fire.

I’ll start by talking about an event that occurred about a month ago that made me a bit concerned at the time, but not enough for me to take action on it. Little did I know.

Recently, during a development cycle, there was a request to refresh the Integrated Test database using the Development database as a source – this is the database after Development in the lifecycle, but before Quality Assurance. Normally I prefer not to refresh a database that’s further upstream in the cycle – I prefer to refresh backwards from Production, but actively deploy incremental changes forwards from Development. However, since this location doesn’t have a very robust database SCM process, they don’t have a good record of each change in development as it occurs – and unfortunately they don’t want to turn on DDL auditing, as it might interfere with their security and auditing processes. Since they use ETL to populate the data, the decision was made to simply blow away the Integrated Test schema and re-create it using Export / Import (the Data Pump versions) – most likely just the DDL (METADATA_ONLY).
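As a rough sketch, that kind of DDL-only refresh looks something like the following. The schema name, directory object, and file names here are all made up for illustration – this isn’t the exact command that was run:

```shell
# Hypothetical example -- directory object, schema, and file names are invented.
# Export only the DDL (no row data) from Development:
expdp system schemas=APPDATA content=METADATA_ONLY \
      directory=DP_DIR dumpfile=appdata_ddl.dmp logfile=appdata_exp.log

# After dropping and re-creating the schema in Integrated Test,
# import the DDL there:
impdp system schemas=APPDATA content=METADATA_ONLY \
      directory=DP_DIR dumpfile=appdata_ddl.dmp logfile=appdata_imp.log
```

Keep that CONTENT=METADATA_ONLY parameter in mind – it matters later.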

I didn’t like it – I wanted to generate a change script and apply the changes.  However, I didn’t have the time or energy to argue it.

That was about 1 month ago.

Now, in the Integrated Test database we’ve been running weekly 1-hour load tests – the load test lead is competent – which has provided us with some nice baselines and comparative reports.  Since I was out of town last week at OOW, I sat down today to look at last week’s test.

It was a mess – database servers at 100% CPU throughout the test.  Weird queries showing up in the ADDM report.  Emails talking about bouncing the database to “clear it”.

What the heck happened?

I took one of the queries reported by ADDM and started tearing it apart and was flabbergasted at what the optimizer was telling me about the execution plan.  Merge Joins, Cartesian Products, and a Cost that looked like the US Federal Deficit after 2 years of liberal spending.

That’s when the little voice in my head spoke up.  Remember 2 weeks ago when we were looking at statistics and one of the developers said that the stats appeared out of date? Remember how you said it wasn’t a problem because the auto stats gather job runs every day and brought them up-to-date, that the table in question probably didn’t have any material changes to it?  Maybe you should re-examine your assumptions, buddy!

Ok, ok.

Double-check the tables using ALL_TABLES and notice that several of the tables haven’t been analyzed in a while (over 2 weeks). That in-and-of-itself doesn’t appear bad – maybe they haven’t changed. What was the name of the “view” that showed how much tables have changed since the last analyze? Oh yeah, DBA_TAB_MODIFICATIONS – I remember when we had to set MONITORING manually and would check that view to see when we should kick off a manual statistics gathering job. So, check DBA_TAB_MODIFICATIONS – and see that lots of tables have tons of inserts, updates and deletes showing. That’s not right – especially if the auto stats gather job is running nightly.
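For the record, those checks look something like this (the schema name APPDATA is hypothetical; also note that DBA_TAB_MODIFICATIONS is only current after the in-memory monitoring counters are flushed):

```sql
-- Flush the in-memory DML monitoring counters so DBA_TAB_MODIFICATIONS is current:
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

-- When were the tables last analyzed?
SELECT table_name, last_analyzed
  FROM all_tables
 WHERE owner = 'APPDATA'
 ORDER BY last_analyzed NULLS FIRST;

-- How much have they changed since then?
SELECT table_name, inserts, updates, deletes, truncated
  FROM dba_tab_modifications
 WHERE table_owner = 'APPDATA'
 ORDER BY inserts + updates + deletes DESC;
```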

So, let’s check on the auto stats gather job – maybe it’s not running, maybe it’s got an error, maybe something else…

Where is that job? Oh yeah, the new DBMS_SCHEDULER. Now I’m looking through the DBA_SCHEDULER_* views and I don’t see it. Did the DBAs drop the job? Disable it? Why on earth would they do that? No, that doesn’t make sense – no one asked for them to do that, and I’m sure they wouldn’t do it on their own.
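In 10g the job showed up as GATHER_STATS_JOB in DBA_SCHEDULER_JOBS, which is roughly what I was hunting for:

```sql
-- Where the 10g-style stats job would have appeared.
-- On 11g this comes back empty, which is what had me confused.
SELECT job_name, enabled, state, last_start_date
  FROM dba_scheduler_jobs
 WHERE job_name LIKE '%STATS%';
```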

Ah, the auto stats gather job isn’t a SCHEDULEd job, it’s an (wait for it) AUTOTASK (we’re on 11gR2). (I’m sure I could have saved time by looking at Metalink Note 779596.1 – “How Can We Find Out Status of Task ‘Auto Optimizer Stats Collection’” – but what’s the fun in that?)

So, we check and it’s enabled. We check and it’s been run (DBA_AUTOTASK_CLIENT_HISTORY and DBA_AUTOTASK_JOB_HISTORY) – heck, it ran last night! Oddly it ran in only 4 seconds – with a smiling “SUCCEEDED” next to it. How could the job succeed and not update the stats?
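The autotask checks, roughly:

```sql
-- Is the stats collection autotask enabled?
SELECT client_name, status
  FROM dba_autotask_client
 WHERE client_name = 'auto optimizer stats collection';

-- When did it last run, and for how long?
SELECT job_start_time, job_duration, job_status
  FROM dba_autotask_job_history
 WHERE client_name = 'auto optimizer stats collection'
 ORDER BY job_start_time DESC;
```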

Alright – something else must be wrong – I’m not convinced that the database is lying or has a bug – there must be something else. Maybe something about the tables? I remember that back in the day, if a table didn’t have any stats on it in the first place, updating them would fail – which is why I used to always do a quick analyze on tables immediately after creating them. That doesn’t appear to be the case here, but maybe something else about the tables…

Hmmmm… Where to check? How about Enterprise Manager? There must be something there.

Ah, Manage Optimizer Statistics. Hmm… the Object Status link – search for the schema, and run the “report”.

All the tables show their statistics as Stale, and Locked.
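You don’t actually need Enterprise Manager for this, by the way – DBA_TAB_STATISTICS shows both conditions directly (schema name hypothetical):

```sql
-- STALE_STATS = 'YES' plus STATTYPE_LOCKED = 'ALL' tells the whole story.
SELECT table_name, last_analyzed, stale_stats, stattype_locked
  FROM dba_tab_statistics
 WHERE owner = 'APPDATA'
 ORDER BY table_name;
```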


How the heck did they get Locked?

Again, I don’t think the DBAs went and locked them – no request was submitted to do so. And I know they wouldn’t have locked them on their own initiative. So they must have been locked all along.

How do table statistics get locked on creation?

Maybe the Import Data Pump locked them?  Maybe a default parameter during the import?

Search Google for “impdp / imp lock statistics” and there it was:



CONTENT

Enables you to filter what Export unloads: data only, metadata only, or both.

Syntax and Description

CONTENT={ALL | DATA_ONLY | METADATA_ONLY}

ALL unloads both data and metadata. This is the default.

DATA_ONLY unloads only table row data; no database object definitions are unloaded.

METADATA_ONLY unloads only database object definitions; no table row data is unloaded. Be aware that if you specify CONTENT=METADATA_ONLY, then when the dump file is subsequently imported, any index or table statistics imported from the dump file will be locked after the import.
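Mystery solved. The fix is to unlock the statistics and gather them – something along these lines (schema name hypothetical):

```sql
-- Unlock the statistics on every table in the schema in one shot:
EXEC DBMS_STATS.UNLOCK_SCHEMA_STATS('APPDATA');

-- Then gather fresh statistics so the optimizer has something current to work with:
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('APPDATA');
```

After that, the nightly autotask can keep them up to date on its own.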


2 Responses to “Locked Out”

  1. chet Says:

    Love these kinds of posts…I have been reading The Daily WTF for years now, and I have probably learned more (what not to do) there than anywhere.

    With the DP stuff too, most people don’t use it all that often and between uses, stuff is forgotten. Nice job figuring it out though, and thanks for sharing your “misery”.


  2. David Aldridge Says:

    Hmmm, that’s interesting.

    In my experience ETL/ELT and statistics don’t mix too well because table contents vary so widely (empty, 1,000 rows, 1,000,000 rows) during a single load cycle. I tend to delete the statistics and lock them, and let dynamic sampling take over. The sort of queries that ELT usually uses do quite well with that approach.

    Takes a bit of DBA education though — they tend to leap in and “proactively” gather statistics where they see none, and that can bring down the data load.
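For what it’s worth, the delete-and-lock approach David describes would look something like this for a single table (owner and table names are hypothetical):

```sql
-- Remove whatever statistics exist, then lock them so the auto job
-- (and well-meaning DBAs) can't put new ones back:
EXEC DBMS_STATS.DELETE_TABLE_STATS('APPDATA', 'STAGING_ORDERS');
EXEC DBMS_STATS.LOCK_TABLE_STATS('APPDATA', 'STAGING_ORDERS');

-- With no stats and the table locked, the optimizer falls back to
-- dynamic sampling (controlled by OPTIMIZER_DYNAMIC_SAMPLING) at parse time.
```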
