I was invited to post a guest editorial on Ryan Naraine’s Zero Day blog over on ZDNet on the topic of database patching, which you are welcome to read.

Anticipating some responses to that post, I’d like to distill what I intended to convey. From my exposure to database operations at enterprises large and small, the one issue that keeps haunting me is database patching, which I’ve posted about in the past. Some enterprises do a good job of it, but they are the minority. In most cases, the problem seems so insurmountable that, instead of doing at least selective patching, companies have a “deer in the headlights” reaction and choose stagnation.

This is asking for trouble. I’ve said it before and I’ll say it again – forget all the sophisticated zero-day hacks. Most database attackers will use the easy way in – a published exploit of a known vulnerability. You can’t protect yourself against everything, but you shouldn’t knowingly leave your DBMS wide open. Imagine that you read in the paper that burglars had a master key that could open all locks of a certain brand – wouldn’t you check whether your door lock was of that make, and have it changed if it were?

So patching is important, but we know it’s also difficult, which is why I proposed a pragmatic approach. It basically says: minimize what you have to patch, prioritize what’s important to patch against the trade-offs in business interruption and cost, then patch according to those priorities. Go into this with your eyes wide open rather than just gambling on not being next… Doing something is better than doing nothing.
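To make the prioritization step concrete, here is a rough sketch in Python of how you might score an inventory of database hosts by risk versus patching cost and sort them into a patch order. Every name, field, and weight below is hypothetical and meant only to illustrate the trade-off, not to prescribe a formula.

# Hypothetical sketch of the "prioritize, then patch" idea: score each
# database by exposure and criticality, then sort so the riskiest systems
# get the next maintenance window. All names and weights are illustrative.

from dataclasses import dataclass

@dataclass
class DatabaseHost:
    name: str
    missing_critical_fixes: int  # count of unapplied critical fixes
    internet_facing: bool        # reachable from outside the network
    business_criticality: int    # 1 (low) to 5 (high), set by the business
    downtime_cost: int           # 1 (cheap to patch) to 5 (very disruptive)

def patch_priority(db: DatabaseHost) -> float:
    """Higher score means patch sooner: a crude risk-vs-cost trade-off."""
    exposure = db.missing_critical_fixes * (2.0 if db.internet_facing else 1.0)
    return (exposure * db.business_criticality) / db.downtime_cost

inventory = [
    DatabaseHost("crm-prod", 3, True, 5, 4),
    DatabaseHost("hr-reporting", 1, False, 2, 1),
    DatabaseHost("legacy-inventory", 6, False, 3, 5),
]

for db in sorted(inventory, key=patch_priority, reverse=True):
    print(f"{db.name}: priority score {patch_priority(db):.1f}")

Even a crude score like this forces the right conversation: which systems genuinely deserve the next patch window, and which can reasonably wait.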

When it’s logistically difficult to patch regularly, or when you want an extra layer of security, virtual patching for databases provides a low-cost, low-overhead alternative with minimal interruption to daily operations.
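For readers unfamiliar with the term, the general idea behind virtual patching is to put a filter in front of the DBMS that rejects statements matching signatures of published exploits, so a known vulnerability is harder to reach even before the real patch goes in. The sketch below is purely conceptual; the signatures and function names are made up for illustration and don’t reflect any particular product’s rule set.

# Conceptual sketch of signature-based filtering in front of a DBMS.
# The signatures are hypothetical examples, not a vendor rule set.

import re

# Illustrative signatures for published exploits of known vulnerabilities.
EXPLOIT_SIGNATURES = [
    re.compile(r"DBMS_EXPORT_EXTENSION", re.IGNORECASE),
    re.compile(r"UNION\s+SELECT\s+.*\bpassword\b", re.IGNORECASE),
]

def allow_statement(sql: str) -> bool:
    """Return False if the statement matches a known-exploit signature."""
    return not any(sig.search(sql) for sig in EXPLOIT_SIGNATURES)

incoming = [
    "SELECT name, total FROM orders WHERE id = 42",
    "SELECT 1 FROM dual UNION SELECT username, password FROM app_users",
]

for stmt in incoming:
    verdict = "pass" if allow_statement(stmt) else "block"
    print(f"{verdict}: {stmt}")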