Superscript
May 7th, 2007 — ddelmoli
…back to Change Management.
In my earlier post, I talked about the strong change management ethic at my current company. It was interesting to see how my predecessor here adapted database schema evolution to that ethic.
I should add here that I find “ancient” software development organizational ideas like “chief programmer” or “surgical” teams fascinating, in particular because each model describes “super” programmers who end up being responsible for 80-90% of the code anyway through sheer talent and a dominant personality. Almost everywhere I’ve worked has had a local expert who seemed to qualify for that kind of title. The relevance of the ideas from Mills and Brooks is commented on here.
The biggest thing I’ve taken away from those ideas of “super” programmers is that, as a manager, I need to help every member of my staff become as close to “super” as possible. That means removing as many non-productive, non-programming, non-learning tasks as possible.
Another way to put this: find ways to reduce the effort and work required for non-programming tasks.
This is where the DDL generation scripts came into the picture.
When I arrived, every programmer hand-coded the scripts that were checked into the source code control system. Everyone had to remember to include the source code control tags, and script names weren’t obvious. It also wasn’t always clear that every changed object and/or procedure was being captured with each release.
To address those issues, I:
- Required that all code checked into the source code control system had to be generated via the DDL generation scripts. We modified the scripts to include the source code control tags automatically. We also did things like include primary key indexes, grants and initial stats analysis in table creation scripts. We also injected source code control tags into the stored procedure and package code so that they could be seen when browsing the database.
- Modified the scripts to include the object type, schema name and object name in the generated file name. So, a script that created the SCOTT.EMP table ended up being called mk-tbl-scott-emp.sql (we used mk for create, rm for drop and ch for alter).
- Turned on DDL auditing for all objects in the development database. This allowed us to double-check and make sure we captured and verified which objects were changed leading up to a release.
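To make those bullets concrete, here is a hypothetical sketch of what a generated mk-tbl-scott-emp.sql might have contained. The column list, the app_role grantee, and the choice of the CVS $Header$ keyword are all illustrative assumptions, but they show the pattern: the primary key, grants, initial stats collection, and source control tags all bundled into one generated script.

```sql
-- $Header$
-- (keyword injected by the generation script; expands on check-in)

CREATE TABLE scott.emp (
  empno  NUMBER(4)    CONSTRAINT pk_emp PRIMARY KEY,
  ename  VARCHAR2(10),
  deptno NUMBER(2)
);

-- Grants generated alongside the table definition
GRANT SELECT ON scott.emp TO app_role;

-- Initial stats analysis included in the creation script
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'EMP');
END;
/
```

Because every script carried its own tags and grants, checking one in and running it were the same operation — no separate bookkeeping step for the programmer.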
A note on DDL auditing — I like to have it turned on in every database I work with: development, test, QA, production, you name it. It’s always useful to have a log of structure changes to the database — and why code up a special trigger when the database will track it automatically? The only objection I’ve heard is space, and the space required is ridiculously small. Sometimes people complain about performance, and I say that if you have a system that issues thousands of DDL statements per day, you’ve got a system that needs work. Who does DDL as part of an OLTP transaction?
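For reference, native statement auditing in Oracle looks roughly like this. The particular set of audit options and the review query are assumptions about one reasonable setup, not necessarily our exact configuration:

```sql
-- Auditing must be enabled first (audit_trail = db in the init
-- parameters, which requires a restart on older releases).

-- Statement-level audit options covering the common DDL:
AUDIT TABLE;       -- CREATE / DROP / TRUNCATE TABLE
AUDIT PROCEDURE;   -- CREATE / DROP procedures, functions, packages
AUDIT INDEX;       -- CREATE / ALTER / DROP INDEX
AUDIT TRIGGER;
AUDIT SEQUENCE;

-- Before a release, review what actually changed:
SELECT username, action_name, owner, obj_name, timestamp
  FROM dba_audit_trail
 WHERE action_name LIKE 'CREATE%'
    OR action_name LIKE 'ALTER%'
    OR action_name LIKE 'DROP%'
 ORDER BY timestamp;
```

A query like the last one is what makes the double-checking step cheap: the release list can be compared directly against the audit trail.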
Our biggest remaining challenge with this approach was how to create the “master” installation “super” scripts. I’ll cover that in a later post…
(One other concern is the sheer number of files this approach can generate. If Oracle did this for the SYS schema, you’d have over 4,000 individual files. On the other hand, it works well when you only need to re-create a single object without searching through 4-5 cat scripts to find its definition. Other systems use object definition files that are read by specialized programs to generate installation scripts — that seems like a lot of work for an internal corporate application.)