Talkin’ ‘Bout My Generation…

With apologies to The Who.

I’ve written before about my obsession with sql.bsq :-)

The genesis of which was a stint at a customer site where I was simply providing on-call support without any specified tasks.  This site made heavy use of Pro*C, and a common problem was making sure that local variables were declared with the same precision as their database column counterparts.

I couldn’t understand why the developers were declaring their variables by hand — didn’t the data dictionary have all of the information to generate this stuff?  And didn’t SQL*Plus have the ability to spool output to a flat file?  How hard could it be to just ask for the table definition and turn it into a .h file?

And so, my first code generation script:

rem SCCS
rem File Name : %W%
rem Check-in Date :
rem SCCS Version : %I%
rem Path : %F%
rem NAME
rem procvars.sql
rem Select script that will create Pro*C variable declarations for
rem a given table. All variables except NUMBER are declared as
rem VARCHARs (NUMBERS are declared as long or double, depending on
rem their data scale).
rem This script spools its output to a file with the name
rem [table_name]_vars.h
rem Who    When       Reason
rem ----   --------   ---------------------------------------------
rem Dom    1/10/94    Creation
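The body of the script isn’t reproduced here, but a minimal sketch of the technique the header describes — spooling generated declarations to [table_name]_vars.h — might look something like this.  (The &1 substitution argument, the use of user_tab_columns, and the exact NUMBER-to-long/double mapping are my assumptions for illustration, not the original code.)

```sql
set heading off feedback off verify off pagesize 0
spool &1._vars.h

select decode(data_type,
              'NUMBER', decode(nvl(data_scale, 0),
                               0, 'long   ',   -- scale 0: integer, use long
                                  'double '),  -- fractional scale: use double
              'VARCHAR ')                      -- everything else as VARCHAR
       || lower(column_name)
       || decode(data_type,
                 'NUMBER', ';',
                 '[' || to_char(data_length + 1) || '];')  -- +1 for terminator
  from user_tab_columns
 where table_name = upper('&1')
 order by column_id
/

spool off
```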

The interesting thing about this script (I liked source control, even back then), was that it started me down the path of exploring the data dictionary.  I was fascinated by the data dictionary in Oracle 6 — it had EVERYTHING in it.  I’d worked with other databases before (dBase III & IV, SQL/DS), but none of them had anything as detailed as the Oracle data dictionary.  I loved the fact that it was also self-describing — selecting from dba_views gave the names of every view.  To this day, I don’t understand people who have those huge posters of the data dictionary — why would you put up something which would become obsolete so quickly when you could just query the system for the info?
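That self-description is easy to demonstrate — the dictionary will happily enumerate itself:

```sql
-- Every data dictionary view, straight from the dictionary itself
select view_name from dba_views where owner = 'SYS' order by view_name;

-- Or ask the DICTIONARY view, which even carries descriptive comments
select table_name, comments from dictionary order by table_name;
```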

Anyway, the script referenced all_tab_columns, and I wondered why it was so slow — and that’s when I started digging into tab$, col$ and sql.bsq.  I liked going after the base tables instead of the views — and it sped things up considerably.
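For the curious, the difference looks roughly like this — treat it as a sketch, since the base-table column names vary across versions and querying the SYS-owned tables requires the appropriate privileges:

```sql
-- The supported route: the documented view (a multi-way join under the covers)
select column_name, data_type, data_precision, data_scale
  from all_tab_columns
 where table_name = upper('&1');

-- The shortcut: straight at the base tables defined in sql.bsq
select c.name, c.type#, c.scale
  from sys.obj$ o, sys.col$ c
 where o.name = upper('&1')
   and c.obj# = o.obj#
 order by c.col#;
```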

After that, I began writing scripts (using the same framework) to generate DDL for all kinds of things — since at the time I wasn’t familiar enough with the Unix strings command to pull DDL from Export dump files.

I still have those scripts on my hard drive — they’ve followed me everywhere I go.  Scripts with ALL CAPS names limited to 8 characters.


When I first was confronted with the need to maintain source code controlled database scripts, these old scripts became the basis for how I would attack the problem…

Why code stuff by hand when you could generate it?  And not out of thin air either, but based on the actual definition in the system.

Repo Man

A long, long time ago, I used to be known as Mr. Replication for Oracle.  I got the opportunity, as part of consulting, to work alongside the developers of the original replication option for Oracle 7 (7.1.6).

This topic came up the other day when I overheard one of my colleagues working with MySQL replication and asking questions that were causing me to have flashbacks (not the query kind).

Sure enough, I checked my bookshelf and found 2 yellowed copies of Oracle magazine from 1995 (Volume IX, numbers 5 & 6) which contained detailed articles on strategies and techniques for replication.  And in the “everything old is new again” department, much of the content was relevant to the problems my friend was working on with MySQL.

I handed the magazines to my friend, recommending them as having been written by a real expert in replication.  He commented on the age of the materials, and chuckled when he saw my name as the author.

I’ve placed the original paper over on the right in the content section.  Maybe some other MySQL folks will find it useful.


(By the way, this is not a knock on MySQL — I personally think that every Oracle DBA should be conversant in competing databases like MySQL, SQL Server and/or DB2.  Knowledge about when each one is suitable for particular uses never hurts).

Oddly, I didn’t get top billing in either magazine — instead there was this other article about something called “Optimal Flexible Architecture (OFA) for Oracle 7”… :-)

Change Management

I know, I’m staring at a box with the word Pandora written all over it.

Over the years, I’ve worked on many ways to let database schema evolution and software change management co-exist peacefully.

During the early years, I was happy if the init.ora file was under SCCS control.  I re-read the original OFA document and couldn’t find any reference to pfile standards, but I recall that we thought it would be neat if changes to the init.ora file were tracked and kept under version control with history. 

James Morle has an excellent section in his book, Scaling Oracle8i, which discusses the init.ora file and mentions SCCS here.

This is one of the reasons I don’t use binary spfiles nowadays….

Anyway, back then we didn’t worry so much about schema evolution and change management.  Who would’ve thought that these systems would last long enough to have such a requirement? :-)

I was introduced more formally to database source code control by John Beresniewicz, whom I met during my time as CTO at Savant Corporation.  John was the primary developer of an application (Savant Q) which had a repository and several robust PL/SQL packages.  It was one of the first retail applications I encountered which made heavy use of PL/SQL.  As such, John had everything under source code control using MS SourceSafe.  Since the company was really small, I ended up chipping in on the installation code.

As with many retail applications, the focus of the installation code was on “clean” installations instead of upgrades.  So, our source code control reflected that with heavy control on the creation commands, and specialized files for delta upgrades.  Packaging usually meant making an image of the entire source tree for burning to CDs.

Source code control for packaged software differs somewhat significantly from that for custom internal corporate applications.  Heck, source code control at many corporations can be a hit-and-miss affair for all kinds of code, let alone database schema evolution.

My current company has a very strong source code control ethic, which has led me to work on ways to manage schema evolution in line with that sense of responsibility.

The goals differ somewhat from the ones we used on packaged software.  “Releases” occur very frequently, often every 2 months.  The emphasis is on “upgrades”, not installations from scratch.  (Every once in a while, I get asked about whether or not we could create the corporate databases using only the scripts from source code control.  While I’d like to support that, the real need for such a capability is low.  The idea that I’d take 5-7 year old scripts to create an empty database and re-import 2-3 TB of data seems silly to me when it would be much, much more practical to practice doing backups and restorations).

I intended this topic to be the subject of my paper at Hotsos 2007, but I ended up not being able to attend the conference, and with that the paper never got written. :-)

2 recent blog posts (here and here) by Dominic Brooks got me to thinking about this topic again, so I’ll be posting on it over the next few days.  Perhaps with your feedback, I’ll eventually coalesce these posts into a decent paper on the subject.