RMOUG TD 2013

Last week I attended the Rocky Mountain Oracle Users Group Training Days Conference in Denver, Colorado. RMOUG TD is one of the largest independent Oracle user groups for Oracle database technology, and it draws a large number of Oracle Aces and Ace Directors, Oracle OakTable members, and senior Oracle corporate technology leaders.

On Monday I attended a full-day session by Cary Millsap of Method R Corporation. Cary talked about how to use Oracle’s trace files to accurately diagnose performance problems related to specific business transactions by profiling database activity at the detailed call level. Using a variety of tools, Cary sliced and diced Oracle trace files to pinpoint problems related to disk, network, and CPU performance for a variety of business transactions. Cary’s company, Method R, develops tools that make it easy to analyze trace files both from the command line and from within Oracle SQL Developer.

Monday evening I attended a VIP reception for Oracle Aces (having received my Oracle Ace designation in January), where I spent time talking to Kyle Hailey, a fellow OakTable member and Cornell alum who has been a Director at Oracle and Director of Performance Products at Embarcadero, and is currently a Performance Architect and Technical Evangelist at Delphix. Delphix specializes in creating virtual copies of databases in order to provide massive storage savings while still giving organizations full-copy databases. It’s estimated that organizations make between 5 and 8 copies of their production databases; Delphix provides software that reduces that number to 2 while enabling many “virtual” copies of the database to be used. A single physical copy serves as the basis for many virtual databases, each of which maintains only its changes against the base copy. Delphix’s technical team is top-notch, with former members of Oracle’s ASM storage team, the RAC team, and the Sun ZFS file-system development team. Delphix also provides a nice “timeflow-like” user interface influenced by Apple.

I believe the Delphix technology has significant benefits in providing full copies of production data to QA, Test, and Dev environments without requiring huge amounts of physical storage.

One concept I’ve been playing with is an 8-rack Exadata solution in which all of the storage cells are allocated to the production servers within the 8-rack configuration (for example, assume 5 racks are used for production, 2 racks for QA, and 1 rack for Development). Delphix provides a virtual dNFS-like mount point that the QA and Dev database servers could use to get a full copy of production data, while all of the expensive storage cells remain fully utilized by Production.

Kyle told me their biggest barrier to adoption is status-quo thinking at organizations that assume you can’t have full copies of production in Dev and QA because it’s too hard today – Delphix makes that easy.

On Tuesday, I attended Karen Morton’s presentation on basic tuning for developers. I like to attend some of these introductory sessions to see what questions are asked and what information is being conveyed to new developers. As always, Karen does a nice job simplifying complex topics and making them clear in an introductory manner.

Tuesday lunch was “Lunch with an Oracle Ace” day, where conference attendees were given an opportunity to sit at tables with specific Oracle Aces. I had a table with my name on it full of database developers and had an engaging lunch answering questions.

After lunch I was invited to be a panelist on the Oracle Women in Technology panel (along with Karen Morton from Enkitec, Maria Colgan from Oracle, and Debra Lilley from Fujitsu). I was the male representative on the panel and was asked to talk about the value of diversity and my experiences in the Oracle field (where I’ve worked for over 23 years now).

Wednesday morning I attended two sessions. The first was by Tyler Muth from Oracle Corporation on Exadata tuning; Tyler did an excellent presentation on graphing Exadata performance results using R. After Tyler’s session I attended a presentation on using GoldenGate with Exadata from the folks at Enkitec.

In the afternoon I gave my two presentations – one on Fuzzy String Matching Algorithms (based on phonetic matching) and the other on Recursive SQL Constructs – and both were well received.

Of Wine and Fish

In my last post touching on my case for Data Engineers, my friend Greg Rahn provided a humorous quote about data from Andy Todd:

“Data matures like wine, applications like fish”

Which, near as I can tell, came from the Open Source Developers’ Conference in Brisbane in 2009, at which Andy talked about, of all things, “Change in Database Schemas and Source Code”.

I’ve dropped Andy an email to see if his presentation is online anywhere, since it touches on a topic that is near and dear to my heart.

In this post, though, I’d like to address some of the humor behind the quote — the implication that data gets better as it ages, while applications get worse (and start to smell like stinky fish).

Having worked with old data and old applications, I’m not sure I agree with the sentiment. Imagine the following:

“Mr. Hawkins, I need you to do some analysis on this music file I have — I want to know how many times C immediately follows A in this song.”

“No problem, Mr. Silver, I’ll get right on it — where’s the data and how do I read it?”

“Here’s the data file, Mr. Hawkins.” Mr. Silver hands Jim a grooved red plastic disk with a small hole in the middle of it, the faded words “78 RPM” written on the attached paper label. “And here’s the application code that reads the data file.” Mr. Silver bends over and grunts to lift an oddly shaped box with a huge bell and crank attached to it.

“Good luck, Mr. Hawkins! Let me know when you’ve finished that analysis!”

In my little story, both the data and application code have become ancient and almost unusable, leading me to another quote, this time from Kurt Bollacker:

“Data that is loved tends to survive”

It’s that aspect of loving your data (nod to my friends at Pythian and the estimable @datachick Karen Lopez) that keeps me interested in the efforts to make data transformation and evolution agile and easier.

There’s a balance to be struck in keeping data fresh and usable — it doesn’t just get better with age, but rather needs to be continually assessed against use cases in order to keep it useful. Applications need the same attention too, lest they start to smell like last week’s catch.

The trick is to minimize the effort in keeping both fresh — some of you may recognize this as minimizing technical debt. Really good software and data engineers (and perhaps this should be the responsibility of the software and data architects) constantly assess data and code against current and future use cases. If they’re smart, they invest in changes which reduce current and probable future technical debt on an ongoing, even agile, basis. The extra challenge for the data engineer is to balance this need not only for individual applications and use cases, but to discern ways to leverage data as is while not placing too much burden on the applications.

Hate the player, not the game — or my case for Data Engineers

Has “database” become a dirty word within your organization lately? If you’re someone who has been a data technologist for the better part of your career, you may be wondering why the technologies you work with every day seem to be acquiring such a bad rap. From NoSQL to No DB, the current influx of brogrammers seems to take extreme pride in describing how they’re able to write code while avoiding any kind of database technology whatsoever.

The impetus for this post actually came from something I read on the ‘Net the other day about Command Query Responsibility Segregation (CQRS), a concept I was initially excited about.

Martin Fowler has a nice, gentle introduction to the topic here.

Before I get into the post, however, I think it’s useful for me to describe some of my attitudes toward data management. What’s really odd is that while I rather strongly disagree with the tone of Uncle Bob Martin’s Rant, I actually strongly agree with his assertion about the high value of use-case driven development.

I’ve had gentle debates with several people about the meaning of “data as truth”, and about the age-old question of whether data is more “important” than application code. Generally I’ve found that such debates end up as religious arguments instead of getting at the value of acting on data (or data in action). Because in the end it’s hard for data to have value unless it’s acted on by a set of processing directives (applications), and while it’s possible to have valuable applications that don’t require knowledge about the past (basic rule-engine codifications), in general they need each other.

Why I call myself a data engineer

I’ve been impressed with EMC’s attempt to define a Data Science curriculum. In particular, I like how they describe the different skills and roles necessary for a successful data science team, including the hot new title of data scientist. The data science team often includes a data architect, a data engineer, and a database administrator. So, what is a data engineer? Steve Todd, Director of EMC’s Global Research and Innovation Portfolio, offers the following characterizations in a blog post:

The “Database Administrator” provisions and configures the database environment to support the analytical needs of the working team. The “Data Engineer” tends to have deep technical skills to assist with tuning SQL queries for data management and extraction. They also support data ingest to the analytic sandbox. These people can be one in the same, but many times the data engineer is an expert on queries and data manipulation (and not necessarily analytics as such). The DBA may be good at this too, but many times they may simply be someone who is primarily skilled at setting up and deploying a large database schema, or product, or stack.

Many, many DBAs wear both hats, but I don’t think that’s a good idea — in general, DBA is to data engineer as system administrator is to software engineer, but the lack of data engineers has forced DBAs into dual roles for which they are often not well suited. While I have basic DBA skills, I’m much better at the skills listed under the data engineer — and I enjoy working with the data scientists or application developers who have questions about the data and/or how they’d like it structured to support their use cases.

This is one of the reasons I agree with Uncle Bob’s rant, in which he also rails against frameworks in addition to the database — I just wish frameworks had received equal billing in the rant and title, but I’m guessing the No DB vitriol resonated more strongly with readers. In general I like making sure data is organized in such a way as to support as many use cases as possible. That includes being performant for each use case — which may mean taking advantage of techniques to denormalize, duplicate and synchronize, cache and distribute data.

I suppose I could write a similar rant on No Data Frameworks, but then I’d probably step into the ORM battle, which really isn’t the focus of this post. But just to follow on to Uncle Bob’s rant — the reason I dislike many ORM data frameworks is that they tightly bind the application to a particular physical implementation of a data layout, which constrains my ability to adapt the data layout for new use cases and leads to “persistence avoidance” in application code.

True story — on a recent Agile project, I was providing guidance on the data layer when I noticed that a bit of information for a new use case wasn’t being captured. I suggested to the team that it would be easy to extend the data layer in order to retain the additional information and I was met with groans: “But that means touching the persistence framework — that’s a big change!” — I was flabbergasted. Isn’t the data layer usually blamed for being inflexible? Are you telling me that it’s actually the framework causing the inflexibility?

Again I point back to Uncle Bob on Clean and Screaming Architecture.

If you’re still reading this, I’m sure you’re wondering how this ties in to CQRS and the original blog title.

When I first read about CQRS in Martin Fowler’s post, I became really interested — the idea that you would use different models for commands (“change” the data) and queries (“read” the data) made me think that frameworks that directly map models into applications could be retired in favor of messages related to use cases instead of model objects. To me, this means a data service API or a set of virtual data layers that provide interfaces to data for applications, regardless of how the data is physically stored or organized. Huzzah! This would free me as a data engineer to ensure that I organized the data in ways that efficiently supported use cases. Since I tend to work in full-featured RDBMS systems, that meant I could wrap data in a data service API built from whatever works, including stored procedures or RESTful web APIs using something like Oracle’s APEX Listener.
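
To make that concrete with a deliberately simple sketch (the table and column names here are hypothetical, not from any real schema), the “query” side of a use case can be wrapped in a view whose shape follows the question being asked rather than the physical tables underneath:

/* hypothetical normalized tables: customers, orders, order_lines */
create or replace view customer_order_history_v as
select c.customer_name,
       o.order_number,
       o.order_date,
       sum(l.quantity * l.unit_price) as order_total
from customers c
join orders o on o.customer_id = c.customer_id
join order_lines l on l.order_id = o.order_id
group by c.customer_name, o.order_number, o.order_date;

The application asks its use-case question against the view (or against a stored procedure or RESTful resource layered over it), and I stay free to reorganize the underlying tables without breaking the callers.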

So imagine my dismay when reading about CQRS and coming upon a whole series of blog posts about implementing CQRS expressly to “get rid of the database”. I intently read through the entire series trying to figure out what was wrong with the database that necessitated “getting rid of it” in order to implement CQRS. All to no avail. I’ve left a comment asking for that information, because I’m genuinely curious about it, but I have a guess.

It’s not about technology — it’s about the organization and its associated personalities that foster such an attitude.

Really now. In an organization with responsive data engineers there shouldn’t be a need to “get rid of the database”. One of the best reasons to have a database is that it provides so many ways to build the different kinds of models and transform the data between them with minimal need for additional frameworks or mountains of custom code.
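
As one small, hedged illustration (again with made-up object names), a materialized view lets the database itself maintain a denormalized query model derived from the normalized “command-side” tables, with no extra framework or hand-rolled synchronization code:

/* hypothetical command-side table: sales(sale_date, product_id, quantity, unit_price) */
create materialized view daily_sales_mv
  build immediate
  refresh complete on demand /* refresh via dbms_mview.refresh on whatever schedule fits the use case */
as
select trunc(sale_date) as sales_day,
       product_id,
       sum(quantity * unit_price) as sales_amount,
       count(*) as sale_count
from sales
group by trunc(sale_date), product_id;

Reads get the summary shape they need, writes keep going to the normalized tables, and the transformation between the two models lives inside the database where it can be tuned and evolved.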

In the end, I’m guessing that after years of hearing “No, we can’t do that” from the DBAs-designated-as-data-engineers, the application team had come to equate the people with the technology. The implication is that the technology is the constraint instead of the people responsible for it.

So, what’s a way out? If your existing technology is the game, make sure you get the best players for every role and responsibility — don’t make your DBAs “play out of position” or else they’ll become hated representations of barriers to progress. If your organizational structure is the game, hate the game, change the game and advocate for skilled data engineers who can make your data more responsive to your business’s use cases. If you believe in “data as truth”, then invest in people who can make that data as useful as possible to as many use cases as you have.

Pascal matrix SQL puzzle solution

I’ve been impressed with the solutions to my little problem of generating a symmetric Pascal matrix using SQL. Charles Hooper in particular has provided some very nice commentary on the problem, complete with diagrams and two alternative solutions.

I thought I’d walk through my solution in order to explain my thought process and see if it resonates with anyone.

Usually when I think about generating rows with SQL I think about the Oracle “trick” of using the CONNECT BY clause to generate rows:

select level from dual connect by level <= k

You’ll see a lot of constructs like this in the solutions provided by commenters.

However, lately I’ve been tending more toward the ANSI SQL Recursive construct instead, which works in Oracle and many other SQL databases:

with f (n) as (
select 1 from dual
union all
select n+1 from f where n < k
)
select n from f;

I particularly like the way this statement works, even if it’s technically not recursive: it defines rows in terms of prior rows, which opens up all kinds of possibilities for constructing new rows.

I think the symmetric Pascal matrix lends itself especially well to this kind of technique because it involves factorials. Charles (and others) have used the Wikipedia definition of the Pascal matrix for each cell entry, which is Aij = (i+j-2)! / ((i-1)!(j-1)!). And most of the solutions have used a clever way to calculate the necessary factorials:

select exp(sum(ln(level))) from dual connect by level <=n 
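
As a quick sanity check of that cell formula, take the cell at row 2, column 4: A24 = (2+4-2)! / ((2-1)!(4-1)!) = 4! / (1! * 3!) = 24 / 6 = 4, which matches the 2,4,4 line in the final output.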

However, the neat thing about factorials is that they are easily defined in recursive terms:

f(0) = 1
f(n+1) = (n+1) * f(n)

And this definition maps really nicely to the ANSI SQL Recursive construct:

with f(n,nf) as (
select 0,1 from dual
union all
select n+1, (n+1) * nf from f where n < k
)
select n,nf from f;

Given this idea of using the Recursive construct, I set about seeing if I could use it to solve my challenge.

The first thing I did was to construct the row (i) and column (j) indices, doing all of the rows for each column — to do that I started with a simple row generator:

with m(i) as (
select 1 from dual
union all
select i+1 from m where i < s
)
select i from m;

With s = 4 this generates a simple list from 1 to 4:

1
2
3
4

Next, I wanted to generate the columns, repeating the rows for each column as I moved from column to column. I did this by “re-setting” the row counter and incrementing the column counter after finishing each column:

with m(s,i,j) as (
select 4,1,1 from dual
union all
select 
s,
case when i < s then i+1 else 1 end,
case when i < s then j else j+1 end
from m where i < s or j < s
)
select i,j from m;

In this statement I included the size (s) in the initial query and “passed it down” each row in order to easily refer to it. This query produces:

1,1
2,1
3,1
4,1
1,2
2,2
...
4,4

It stops when both i and j reach 4.

Now that I had the cell indices, I started to think about how to calculate the information necessary for the values. Based on the cell definition from Wikipedia, I need 3 values:

(i+j-2)!, (i-1)!, and (j-1)!

What’s interesting about the required values is that they look like they could be defined via information from the prior row, since they refer to things like i-1 and factorials — both of which should be pretty easy to generate. Let’s try (i-1)! (which I’ll refer to as im1f, or “i minus 1 factorial”):

with m(s,i,j,im1f) as (
select 4,1,1,1 from dual /* This is cell 1,1. (i-1)! is (1-1)! is 0! which equals 1 */
union all
select 
s,
case when i < s then i+1 else 1 end,
case when i < s then j else j+1 end,
case when i < s then im1f*i else 1 end
from m where i < s or j < s
)
select i,j,im1f from m;

Basically, this just multiplies the prior (i-1)! times i to get the new (i-1)!, and “resets” to 0! whenever we move to the next column. This produces:

1,1,1
2,1,1
3,1,2
4,1,6
1,2,1
2,2,1
3,2,2
4,2,6
...
4,4,6

We can get (j-1)! in a similar fashion:

with m(s,i,j,im1f,jm1f) as (
select 4,1,1,1,1 from dual
union all
select 
s,
case when i < s then i+1 else 1 end,
case when i < s then j else j+1 end,
case when i < s then im1f*i else 1 end,
case when i < s then jm1f else jm1f*j end
from m where i < s or j < s
)
select i,j,im1f,jm1f from m;

So far, so good — I’m liking this approach, as each cell takes advantage of the prior cell’s information to get the necessary values. Now, how do I get (i+j-2)! as well?

I started out seeing if I could simply get (i+j-2) or ipjm2:

with m(s,i,j,im1f,jm1f,ipjm2) as (
select 4,1,1,1,1,0 from dual
union all
select 
s,
case when i < s then i+1 else 1 end,
case when i < s then j else j+1 end,
case when i < s then im1f*i else 1 end,
case when i < s then jm1f else jm1f*j end,
case when i < s then ipjm2+1 else j end
from m where i < s or j < s
)
select i,j,im1f,jm1f,ipjm2 from m;

What struck me about this was that when i=1, i+j-2 = j-1, and I’m already calculating (j-1)!, so it looks like I can use it when I “reset” back to i=1 when switching columns. Let’s see what the pattern looks like for the factorial as we go “down” the column row by row:

(j-1)! for the first row
previous row value * (i+j-1), for subsequent rows

Which is easy for us to write into our query now as ipjm2f:

with m(s,i,j,im1f,jm1f,ipjm2f) as (
select 4,1,1,1,1,1 from dual
union all
select 
s,
case when i < s then i+1 else 1 end,
case when i < s then j else j+1 end,
case when i < s then im1f*i else 1 end,
case when i < s then jm1f else jm1f*j end,
case when i < s then ipjm2f*(i+j-1) else j*jm1f end
from m where i < s or j < s
)
select i,j,im1f,jm1f,ipjm2f from m;

Which now produces:

1,1,1,1,1
2,1,1,1,1
3,1,2,1,2
4,1,6,1,6
1,2,1,1,1
2,2,1,1,2
3,2,2,1,6
4,2,6,1,24
1,3,1,2,2
2,3,1,2,6
3,3,2,2,24
4,3,6,2,120
1,4,1,6,6
2,4,1,6,24
3,4,2,6,120
4,4,6,6,720

At this point, it’s simple to just put together the whole formula and display it as the value for each cell, noting that the initial value for every column (row 1) is just 1.

with m(s,i,j,im1f,jm1f,ipjm2f,v) as (
select 4,1,1,1,1,1,1 from dual
union all
select
s,
case when i<s then i+1 else 1 end,
case when i<s then j else j+1 end,
case when i<s then im1f*i else 1 end,
case when i<s then jm1f else jm1f*j end,
case when i<s then ipjm2f*(i+j-1) else j*jm1f end,
case when i<s then (ipjm2f*(i+j-1)) / (i*im1f*jm1f) else 1 end
from m where i<s or j<s
)
select i,j,v
from m;

Which works in Oracle 11g Release 2 without any Oracle-specific functions or syntax (aside from selecting from dual) to produce:

1,1,1
2,1,1
3,1,1
4,1,1
1,2,1
2,2,2
3,2,3
4,2,4
1,3,1
2,3,3
3,3,6
4,3,10
1,4,1
2,4,4
3,4,10
4,4,20

And it also works in PostgreSQL 9.1 with the addition of the “recursive” keyword (and with “from dual” removed, since PostgreSQL has no dual table and allows a SELECT without a FROM clause):

with recursive m(s,i,j,im1f,jm1f,ipjm2f,v) as (
select 4,1,1,1,1,1,1
union all
select
s,
case when i<s then i+1 else 1 end,
case when i<s then j else j+1 end,
case when i<s then im1f*i else 1 end,
case when i<s then jm1f else jm1f*j end,
case when i<s then ipjm2f*(i+j-1) else j*jm1f end,
case when i<s then (ipjm2f*(i+j-1)) / (i*im1f*jm1f) else 1 end
from m where i<s or j<s
)
select i,j,v
from m;

And there you have it — an ANSI SQL version.

Now, while I think the code is pretty elegant, it does have a drawback — each cell value requires the prior cell’s value in order to be calculated. The cell calculations are NOT independent, which makes this a serial algorithm. Many of the other solutions in the comments compute the cell values independently of each other, which facilitates parallelizing the computation if that’s necessary, and that should be a consideration when looking at the various techniques.
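
For comparison, here is a rough sketch (mine, not taken from the comments) of that independent, cell-at-a-time style: a cross join of two index ranges against a small factorial lookup, so each cell’s value depends only on the lookup and not on the previous cell.

with f (n, nf) as (
select 0, 1 from dual
union all
select n+1, (n+1)*nf from f where n < 6 /* factorials 0! through 6! cover a 4x4 matrix */
)
select i.n as i,
       j.n as j,
       fij.nf / (fi.nf * fj.nf) as v
from f i
cross join f j
join f fij on fij.n = i.n + j.n - 2
join f fi on fi.n = i.n - 1
join f fj on fj.n = j.n - 1
where i.n between 1 and 4
and j.n between 1 and 4
order by j.n, i.n;

Each output row can be computed in isolation once the (tiny) factorial lookup exists, which is what makes this shape friendlier to parallel execution.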

I hope you’ve enjoyed this rather long post describing my process for creating the query, and that such a process might be useful to you as you construct similar queries.

New SQL Puzzle

Sorry for the drought — to keep everyone’s mind fresh, how about a little puzzle?

Use SQL to create a symmetric Pascal matrix, with the output being (i,j,v), so that a Pascal matrix of size 4 would end up as:

1,1,1
2,1,1
3,1,1
4,1,1
1,2,1
2,2,2
3,2,3
4,2,4
1,3,1
2,3,3
3,3,6
4,3,10
1,4,1
2,4,4
3,4,10
4,4,20

Try to stay with ANSI SQL 2008 if possible — bailing out to other languages and functions is discouraged :-)

Efficiency is an interesting idea here, as it’s pretty easy to do this by brute force calculations of factorials for every entry — but where’s the fun in that?

Extra credit for solutions which can take in the matrix size as some sort of parameter.