
Paul Wright published an interesting post about how you can find traces of Java privilege escalation attacks in the database. Great stuff!

Of course, Hedgehog already protects against these published attacks, as Paul showed earlier here. Hedgehog comes with built-in vPatch protections that cover the DBMS_JVM_EXP_PERMS and DBMS_JAVA attacks.

Recently, I read a very interesting paper by Alexandr Polyakov about how an unprivileged user can gain OS access to the database machine by stealing NTLM challenge-response authentication strings.
I really liked the way it was written, and the fact that it uses automated Metasploit plug-ins that try to evade detection through obfuscation techniques.

Since the paper mentioned Hedgehog, I took it as a challenge to protect against such an attack :-). One obvious solution is to monitor CREATE INDEX statements with an INDEXTYPE of ctxsys.context. Evasion techniques like base64 encoding, translate(), etc. are not effective against the way Hedgehog monitors transactional information, because we read the command directly from memory as it is being parsed.

The rule I’ve created is: “cmdtype = ‘create index’ and statement contains ‘ctxsys.context’”. Now, although this is a somewhat simplistic version of the rule, I believe it will still be effective. Another option is to catch ‘create index’ where the accessed objects include the ODCI interfaces. Next, I’m going to try this with Metasploit.
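To make the rule concrete, here is a minimal sketch in Python of the matching logic, not Hedgehog’s actual implementation. It assumes the sensor hands us each statement already normalized (read from memory at parse time), so base64/translate-style obfuscation has already been undone before the rule runs; the function name and the sample statement are my own invention.

```python
def matches_ctxsys_rule(cmdtype: str, statement: str) -> bool:
    """Return True when a CREATE INDEX statement references ctxsys.context.

    Hypothetical sketch of the rule described above:
    cmdtype = 'create index' and statement contains 'ctxsys.context'.
    """
    return (cmdtype.lower() == "create index"
            and "ctxsys.context" in statement.lower())

# A cleartext version of the attack would trip the rule:
attack = ("CREATE INDEX pwn_idx ON scott.t1(c1) "
          "INDEXTYPE IS ctxsys.context")
print(matches_ctxsys_rule("CREATE INDEX", attack))  # True

# An ordinary index does not:
benign = "CREATE INDEX emp_idx ON scott.emp(ename)"
print(matches_ctxsys_rule("CREATE INDEX", benign))  # False
```

Because the match runs on the parsed statement rather than on network traffic, a simple case-insensitive substring check is enough here; a network-level filter would have to contend with the obfuscation tricks the paper demonstrates.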

Here is the screenshot of the rule:

Hedgehog CTXSYS rule

Running the clear text version of the attack produces the following:

Alert on ctxsys index

Any other ideas out there?

Fern Halper, an analyst with Hurwitz &amp; Associates, wrote in her blog “Data makes the world go ’round” about database activity monitoring (as well as highlighting some of what my company Sentrigo does).

In the summary of her post she raises an important issue – that most DBAs are reactive rather than proactive when it comes to monitoring their databases. I’ll take this even further… it’s not just DBAs (and I’m not going to get into the whole issue of who owns database activity monitoring…) but companies in general are too reactive when it comes to database security.

Yes, I know that security doesn’t generate revenues, and it doesn’t even reduce costs – at least not in any consistent, measurable way. Security is all about reducing risk and the cost associated with that risk. The problem is that by being reactive, companies are addressing yesterday’s risks, not today’s or tomorrow’s. There are several biases in how the IT security budget is allocated, and one of the biggest is the visibility bias: the tendency to invest in protecting against visible threats, even if they are small. Spam is a good example. It doesn’t do much harm, but it’s visible every day in everyone’s inbox (I’m not talking about malware, just the “classic” commercial spam which is 99.9% of spam). Companies are investing more today in reducing the marginal spam to the n-th degree, with diminishing returns, than they are in database security. Far more.

Risk, on the other hand, is not just a question of visibility or sheer quantity. It’s also a question of the potential damage of even a single attack, and the probability of such an attack succeeding. The risk posed by inadequate database security is currently greater than the risk posed by spam, given the counter-measures already in place for the latter. Yet many enterprises, by force of habit and inertia, continue to invest in protecting against threats for which they already have good enough solutions, whereas other areas remain barren. Some companies do, of course, shift their attention to new threats every once in a while, but I wonder how many enterprises do an annual (or more frequent) risk assessment that includes a “green field” threat analysis and gap analysis?

Back after a short and much needed hiatus, I came across this piece by security analyst Eric Ogren on Computerworld’s website. He discusses how DBAs have become public enemy number one because of compliance mandates to exercise segregation of duties, and how this has been blown out of proportion to other, greater risks.

A few days pass, and the story about the Fidelity database breach comes to light (incidentally I chose this article from Computerworld as well). A senior DBA sold 2.3 million records, including bank account and credit card details, to a data broker.

So are DBAs “dangerous” or not?

Unfortunately, there is no denying the risk element. If risk is the arithmetical product of the probability of an incident happening and the potential damage that incident could cause, then, due to the latter factor, DBAs, as well as other highly skilled insiders with access privileges, pose a significant risk.
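A toy calculation makes the point; the probabilities and damage figures below are invented purely for illustration and don’t come from any real assessment.

```python
def risk(probability: float, damage: float) -> float:
    """Risk as the arithmetical product of probability and potential damage."""
    return probability * damage

# An outside attacker: more likely to try, but limited reach per incident
# (illustrative numbers only).
outsider = risk(probability=0.05, damage=100_000)

# A rogue DBA: far less likely, but privileged access multiplies the
# potential damage (illustrative numbers only).
insider = risk(probability=0.001, damage=10_000_000)

print(outsider)  # 5000.0
print(insider)   # 10000.0
```

Even with a fifty-times-lower probability, the insider scenario comes out as the larger risk, which is the asymmetry the paragraph above describes.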

This does not mean, however, that there is a high probability of DBAs becoming malicious insiders. Obviously, the vast majority of DBAs pose no threat to their employers or clients, but the old adage of one rotten apple applies nonetheless. While there is a much higher probability that someone who is not a DBA would try and breach the database, the DBA is in a much better position to succeed should he or she really want to do that.

An external hacker would find it very difficult to achieve this kind of scale (millions of records) without insider cooperation. It is difficult to determine what direct damages this will bring to Fidelity and its customers, but the bad publicity is already quite significant: Running a news search on Google for fidelity data breach yielded 529 results at the time of writing.

Clearly, there is a problem here which cannot be ignored, but on the other hand, Eric’s conclusion was absolutely correct – DBAs are a part of the solution, and I would even stress that they are an essential part of the solution. The fact is, DBAs know more about database security than anyone else. They know more about database vulnerabilities, exploits and hacks, and more about how to address them than anyone else. Trying to implement a database security solution by circumventing or ignoring DBAs would be futile.

It is important, for security as much as for regulatory compliance reasons, to monitor and audit DBA activity. In fact, this should be done for all users who access the database. DBAs are first to understand this. If you work in a bank vault, you know there are CCTV cameras on you. You want those cameras on you. DBAs are in a similar situation and they understand this requirement completely.

What DBAs should not accept are two kinds of solutions that one sometimes comes across (sometimes it isn’t the tool but the implementation process):

  • Solutions that hinder or interfere with the DBA’s daily tasks – DBAs are primarily concerned with running databases efficiently. Any solution that jeopardizes this primary objective is counter-productive, and doomed to fail anyway because DBAs and other staff will find ways to circumvent it.
  • Solutions that ignore DBA input – As I suggested, DBAs are not as opposed to the notion of monitoring their own activities as some people think, so there is no real reason not to involve them. More importantly, I believe it is simply impossible to implement a solid database security solution without DBA cooperation. Any solution that ignores the specific data structures, user profiles, schemas and views simply cannot be doing a good job. Those are all managed by DBAs.

Finally, there is the question of priorities. Obviously my company sells database security monitoring products, so my view is not objective, but I’ll say this: Databases are still the most neglected parts of the enterprise IT infrastructure security-wise, especially when taking the magnitude of the threat into account. The Fidelity incident is just the latest in a long string of examples demonstrating this.

This is a personal as well as a commercial posting for me… Tomorrow is a special day in the short history of my company – after long months of R&D, we are finally releasing our product, named Hedgehog (there’s already some coverage in Dark Reading). These are very exciting times both for me personally and for the entire team at Sentrigo, who’ve made this possible through a lot of hard work and well applied knowledge – I feel very lucky to have such a great team working with me.

Hedgehog is database security monitoring software that monitors DB transactions in real-time, and generates alerts based on a highly flexible set of policy rules. A light-weight sensor is installed on the database machine and monitors the shared memory. It doesn’t use redo logs or DBMS APIs – those would be too slow… The trick is to do it so that it doesn’t use up CPU power.

Hedgehog can be downloaded from Sentrigo’s website, and while it supports only Oracle for the moment, in the coming months we will release versions for MS SQL, DB2 and other major DBMSs. There are basically two versions – Hedgehog Standard, which is totally free to use, and Hedgehog Enterprise, which is not free but available for free evaluation. The differences are explained in some detail on the website, but basically it boils down to prevention capabilities and enterprise scalability and integration.

Hedgehog Standard

My sense is that we’re bringing something new to this space, and I’m anxious to see how this will be received. Feedback is of course welcome. Give it a try!

What better way to start a blog about database security than to discuss what is possibly the biggest data breach ever?

It now seems that several banks are suing TJX over claimed losses of tens of millions of dollars – so negligence in data protection carries a cash penalty, not just nebulous damage to reputation. Gross negligence, in fact – this is not some one-off lapse in judgment such as a laptop with sensitive information forgotten on a bus, or a CD lost in the post.

The details recently published about the ongoing investigation provide insight into what possibly happened:

  1. The breach lasted 17 months: For 17 months someone (or more than one person) was systematically stealing data. I can only infer from this that security measures and procedures at TJX were grossly inadequate. It also means the breach was not accidental – it may have been opportunistic at first, but certainly malicious after that. More likely it was malicious from the start.
  2. Insider(s) were involved: It seems that some encrypted credit card data was decrypted using keys, which only an insider with privileged access would have. Whether such an insider was knowingly complicit or duped into divulging such information is unknown, but it shows us all what the sophisticated criminals already know – why bother sweating and hacking your way through firewalls and IDS when it’s so much simpler to use an insider?
  3. Utter lack of visibility: Most astonishing of all, more than 50 experts TJX put on the case have reached no conclusions. Besides not knowing how many thieves were involved, TJX isn’t sure whether there was one continuing intrusion or multiple separate break-ins, according to a March 28 regulatory filing.”
    In other words, the thieves either did a great job of covering their tracks (and they certainly had ample time to do that!), or worse, they didn’t have to do it because their actions were invisible to begin with…

It is clear that even a rudimentary audit could have prevented the breach from going undiscovered for so long. It is also evident that encryption alone wasn’t enough to protect the data, and that perimeter defenses such as firewalls are useless against inside jobs like this one.

But ultimately, the entire thing could have been prevented with real-time monitoring and intrusion prevention at the database level.