Entries tagged with “database security”.

I was interviewed for a nice article about database security on Dark Reading. The interesting question, I think, is not whether to invest in DB security. To me, it’s a given that you have to do it (even though some customers still don’t agree). The question is – how will the threat landscape change if everyone goes ahead and deploys DB security protection – activity monitoring, vulnerability assessment, encryption where possible, etc.?

If you were a hacker, what would you do?

I have to say that I don’t believe in silver bullets and perfect tools so whatever the enterprise deploys, it will have holes. But, as a hacker, knowing that there is constant monitoring and prevention on every access to the database, I’d probably be very careful and maybe take a different route to the data (file servers, end-point machines, …).

What do you think?

Well, this was bound to happen at one point or another. Chris Gates is going to present at Black Hat some of the work he and others have been doing as part of the Metasploit framework. The Metasploit framework now contains some auxiliary modules for doing nasty things to Oracle.

The modules include detection, version finding, SID enumeration, password brute-force attacks, privilege escalation, OS escaping and IDS evasion. All of the goodies in one single place. Talk about leveling the playing field!

With this, pen testers and even smaller companies can test their Oracle installations for vulnerabilities. Of course, the black hats out there can also abuse these modules to attack Oracle databases in a structured, methodical way. All a hacker has to do now is load a USB key with a nice Linux distro of his choice, pre-configured with Metasploit, and hack away. Even if, right now, the modules only include known, public vulnerabilities, it’s fairly easy to add new attacks to the arsenal.

The interesting thing about these modules (as well as some other frameworks like Inguma) is the use of evasion techniques like randomizing strings (package names, variable names, etc.) and encoding the attacks (base64, translate, etc.). This has always been the Achilles’ heel of tools that try to analyze network traffic to identify attacks on the database. If the attack does not match a known pattern and is obfuscated – how can they tell that it is indeed an attack?
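To make the evasion point concrete, here is a small sketch of my own (in Python, not taken from the Metasploit modules – the payload string is a harmless stand-in, not a real exploit) showing why signature matching on the wire fails against even trivial encoding:

```python
import base64

# A harmless statement standing in for an attack payload. Network-based
# signature matching looks for telltale substrings like "grant dba".
payload = "begin execute immediate 'grant dba to scott'; end;"

# The attacker ships only the encoded form plus a tiny decode-and-execute
# stub, so the pattern a signature matcher hunts for never crosses the
# wire in cleartext.
encoded = base64.b64encode(payload.encode()).decode()
assert "grant dba" not in encoded  # nothing for the pattern matcher to catch

# The database itself, of course, parses the fully decoded statement.
decoded = base64.b64decode(encoded).decode()
assert decoded == payload
```

Swap base64 for translate, string concatenation or randomized identifiers and the same logic holds: the wire traffic looks like noise, while the statement the database actually executes is unchanged.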

I believe that the only true way to protect the database is by viewing the attack from the database point of view. If you see the parsed statements as they happen in memory and see the actual accessed objects from the execution plan, you are not affected by these evasion techniques.

For example – what does the following do?

l_stmt VARCHAR2(32000);
l_stmt := utl_encode.text_decode('
KCd8fGxfY3J8fCcpfHwnJycsJ3gnKTsKZW5kOw==', 'WE8ISO8859P1', utl_encode.base64);

Hmmm… I leave it up to the reader to find out what this attack does.

Anybody using Oracle databases, or anyone concerned about vulnerability assessment, should be familiar with Repscan – the best scanner for Oracle databases, developed by Alexander Kornbrust’s Red-Database-Security.

The scanner, built upon Alex’s extensive experience in doing thousands of pen tests and database reviews, has some unique features and tests. At Sentrigo, I always considered Repscan extremely useful, flexible and easy to use, and this is why I’m happy to announce that we’ve integrated it with Hedgehog to provide an even stronger database security solution.

One of the unique features that I like is that everything is available from the command line on Linux, Mac and Windows, so you can use your favorite scheduling system to run the tests. I know that most users prefer a GUI (which is available as well) but I’m a command line type of guy 🙂

You can easily download Repscan from the Sentrigo website, where you can get the limited trial version at no charge. This is a great way to test the waters, and then move on to the fully-functional product once you’ve tested.

Here are some of the highlights – check it out for yourself, and let me know what you think!

Repscan’s Product Highlights

  • Detects insecure PL/SQL code
  • Shows the patch level of all your databases in one click
  • Finds security problems such as SQL Injections, hardcoded passwords, deprecated functions
  • Detects weak or default passwords
  • More than 115 Oracle tables checked for password information
  • Provides penetration testing reports
  • Detects changed database objects including root kits
  • Detects altered data (including modifications of privilege and user tables)
  • Discovers forensic traces from common security and hacker tools
  • Complements and integrates with Sentrigo’s Hedgehog family of database activity monitoring software

I was invited to post a guest editorial on Ryan Naraine’s Zero Day blog over on ZDNet on the topic of database patching, which you are welcome to read.

In anticipating some responses to that post, I’d like to distill further what I intended to convey. From my exposure to database operations of enterprises large and small, the one issue that keeps haunting me is the database patching issue, about which I’ve posted in the past. Some enterprises do a good job of it, but they are the minority. In most cases, the patching issue seems so insurmountable that instead of doing at least selective patching, companies have a “deer in the headlights” reaction and choose stagnation.

This is asking for trouble. I’ve said it once and I’ll say it again – forget all the sophisticated zero-day hacks. Most database attackers will use the easy way in – a published exploit of a known vulnerability. You can’t protect yourself against everything, but you shouldn’t knowingly leave your DBMS wide open. Imagine that you read in the paper that burglars had a master key that can open all locks of a certain brand – wouldn’t you check to see if your door lock was of that make, and have it changed if it were?

So this is why patching is important, but we know it’s also difficult, which is why I proposed this pragmatic approach. It basically says – minimize what you have to patch, prioritize what’s important to patch and the trade-offs with business interruption and cost, then patch according to your priorities. Go into this with your eyes wide open rather than just gambling on not being next… Doing something is better than doing nothing.

When it’s logistically difficult to patch regularly, or to gain an extra layer of security, virtual patching for databases provides a low-cost, low-overhead solution with minimal interruption to daily operations.

Well, finally I’m writing the third part of the blog. The thing that pushed me to finish this was a talk I had with Tim Hall of Oracle-base fame after his Unconference presentation at Oracle OpenWorld. Tim told me that his Java developers claim that adding user context information to an already existing (Swing) application is a non-trivial task. You know, I’ve been hearing this from a lot of our customers, and while I agree it is not trivial, I will try to outline a method of doing so without changing application code. In this day and age, when there are advanced tools such as AspectJ and the Spring framework, adding cross-cutting concerns to an application should not be an insurmountable task.

So, without further ado, I will detail an AspectJ aspect that wraps around an Oracle connection and adds user context information to every statement. This aspect can be used with existing programs, and it can also be adapted and extended to catch login information in a Swing-based application. I will build on the previous examples for the necessary infrastructure of domain and DAO classes.
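The actual implementation here is a Java/AspectJ aspect, but the cross-cutting idea itself is language-neutral, so here is a minimal sketch of it in Python (all names – ContextConnection, ContextCursor, the get_user callable, the app_user comment tag – are my own illustration, not the aspect’s API): a transparent proxy intercepts cursor creation and stamps every statement with the application user’s identity, without the application code changing at all.

```python
class ContextConnection:
    """Wraps any DB-API connection; untouched attributes pass through."""
    def __init__(self, conn, get_user):
        self._conn = conn
        self._get_user = get_user  # callable returning the current app user
    def cursor(self, *args, **kwargs):
        return ContextCursor(self._conn.cursor(*args, **kwargs), self._get_user)
    def __getattr__(self, name):
        return getattr(self._conn, name)

class ContextCursor:
    """Prepends a context comment that a monitor can pick up server-side."""
    def __init__(self, cur, get_user):
        self._cur = cur
        self._get_user = get_user
    def execute(self, sql, *args, **kwargs):
        tagged = f"/* app_user={self._get_user()} */ {sql}"
        return self._cur.execute(tagged, *args, **kwargs)
    def __getattr__(self, name):
        return getattr(self._cur, name)

# Usage with the standard-library sqlite3 driver, just to show the flow:
import sqlite3
conn = ContextConnection(sqlite3.connect(":memory:"), lambda: "alice")
cur = conn.cursor()
cur.execute("select 1")  # actually runs: /* app_user=alice */ select 1
```

An around-advice in AspectJ plays exactly the role of the proxy above: it intercepts `Connection.prepareStatement(..)` and rewrites or annotates the SQL before the driver sees it, which is why no application code needs to change.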


I promised to blog a bit about my traveling, so here I go:

I was visiting customers in India and the US and giving presentations to Oracle user groups in the US. Amazingly, the state of US airports is just getting worse every month. Flying from Israel to India and from India to NY went without any problems. However, not only did a 35-minute flight from NY to Boston take 3 hours, but they also managed to lose my suitcase in the process. Every flight I had in the US in the previous week was late.

Enough moaning and back to Oracle security… I would like to share with you some insights I had while giving presentations. First, it looks as if database security is getting more and more attention from DBAs and IT managers alike. By show of hands at the presentations, I could see that at least some of the DBAs are handling security issues as part of their day-to-day job. But still, DBAs are not hearing the following from their managers – “last year you met your MBOs because no database breach occurred. Here is your bonus…” – though many have heard the bonus speech for HA or performance MBO achievements.

Second, almost no one had deployed the July 2007 Critical Patch Update from Oracle. From a crowd of about 50, only 2 raised their hands.

Third and most startling of all, only about a third of the DBAs have ever deployed an Oracle CPU. Let me repeat that – more than two thirds of the DBAs in this small but significant sample have never deployed an Oracle CPU. Ever.

So this got me thinking – do we care about Oracle CPUs at all? Oracle was getting a lot of heat from security researchers for not providing security patches, or providing them at irregular intervals. Finally, Oracle is stepping up to the plate with the patches. They provide them on a regular basis, and they announce the patch before issuing it so organizations can prepare. They are improving coding techniques and code vulnerability scanning tools. And after all that, customers are still not protected. The reason is that the database is an extremely complicated piece of software and is the life-line of the organization. An enterprise needs to test the CPU thoroughly before deploying it, and testing takes a lot of time (months). This is further complicated by the fact that many organizations have applications running on top of Oracle databases, and those applications are not “forward compatible” – they are not certified by their vendors to run on future Oracle versions.

Ironically, from a security standpoint the situation after a CPU is announced is actually worse than before it is announced: The hackers get a road-map of all the vulnerabilities while most organizations have not yet plugged those holes. This is a similar notion to hacking IPS software in order to retrieve vulnerabilities (see this black hat presentation).

I’m not saying that Oracle should stop providing CPUs. Quite the contrary, I’m saying that organizations must deploy CPUs as quickly as possible to keep this sensitive period short. Even considering the objective difficulties in applying patches, it seems that enterprises are not taking database vulnerability seriously enough. Also, organizations must have other solutions to mitigate the threat in post-CPU release period. Those solutions must not change the Oracle software at all or else they will fall into the same trap of interdependency, stability issues and so forth. They must provide virtual patches to externally test for attacks and plug the security holes from the outside.

I am curious to know other people’s experiences and views on this topic – so fire away…

This is a personal as well as a commercial posting for me… Tomorrow is a special day in the short history of my company – after long months of R&D, we are finally releasing our product, named Hedgehog (there’s already some coverage in Dark Reading). These are very exciting times both for me personally and for the entire team at Sentrigo, who’ve made this possible through a lot of hard work and well applied knowledge – I feel very lucky to have such a great team working with me.

Hedgehog is database security monitoring software that monitors DB transactions in real-time, and generates alerts based on a highly flexible set of policy rules. A light-weight sensor is installed on the database machine and monitors the shared memory. It doesn’t use redo logs or DBMS APIs – those would be too slow… The trick is to do it so that it doesn’t use up CPU power.

Hedgehog can be downloaded from Sentrigo’s website, and while it supports only Oracle for the moment, in the coming months we will release versions for MS SQL, DB2 and other major DBMSs. There are basically two versions – Hedgehog Standard, which is totally free to use, and Hedgehog Enterprise, which is not free but available for free evaluation. The differences are explained in some detail on the website, but basically it boils down to prevention capabilities and enterprise scalability and integration.


My sense is that we’re bringing something new to this space, and I’m anxious to see how this will be received. Feedback is of course welcome. Give it a try!

What better way to start a blog about database security than to discuss what is possibly the biggest data breach ever?

It now seems that several banks are suing TJX over claimed losses of tens of millions of dollars – so negligence in data protection carries a cash penalty, not just nebulous damage to reputation. Gross negligence, in fact – this is not some one-off lapse in judgment such as a laptop with sensitive information forgotten on a bus, or a CD lost in the post.

The details recently published about the ongoing investigation provide insight into what possibly happened:

  1. The breach lasted 17 months: For 17 months someone (or more than one person) was systematically stealing data. I can only infer from this that security measures and procedures at TJX were grossly inadequate. It also means the breach was not accidental – it may have been opportunistic at first, but certainly malicious after that. More likely it was malicious from the start.
  2. Insider(s) were involved: It seems that some encrypted credit card data was decrypted using keys, which only an insider with privileged access would have. Whether such an insider was knowingly complicit or duped into divulging such information is unknown, but it shows us all what the sophisticated criminals already know – why bother sweating and hacking your way through firewalls and IDS when it’s so much simpler to use an insider?
  3. Utter lack of visibility: Most astonishing of all, the more than 50 experts TJX put on the case have reached no conclusions. “Besides not knowing how many thieves were involved, TJX isn’t sure whether there was one continuing intrusion or multiple separate break-ins, according to a March 28 regulatory filing.”
    In other words, the thieves either did a great job of covering their tracks (and they certainly had ample time to do that!), or worse, they didn’t have to do it because their actions were invisible to begin with…

It is clear that even a rudimentary audit could have prevented the breach from going undiscovered for so long. It is also evident that encryption alone wasn’t enough to protect the data, and that perimeter defenses such as firewalls are useless against inside jobs like this one.

But ultimately, the entire thing could have been prevented with real-time monitoring and intrusion prevention at the database level.