insider threat


Just published a blog entry on my official McAfee blog. It talks about some of the database security trends we are seeing in the global McAfee Threat Report.

Just today I reviewed Verizon’s Intellectual Property Theft report, which has a large section about databases, privileged users and compromised assets.

The one figure that caught my eye is this:

[Figure: Compromised assets, by percent of breaches involving Intellectual Property theft]

As another year comes to a close, it’s time for both new year’s resolutions and predictions.

On the resolutions front, I hope to be much more active on my blog next year.  As we grow as a company, I seem to have less time for my musings, as I spend more time with customers and those we hope will become customers.  Overall, it’s a good problem to have…

As far as predictions go, this is always dangerous ground.  A year from now, someone will undoubtedly come back and point out that I really missed some major new trend, or called one that never came to be.  But, these are simply best guesses based on what I’m seeing out there, and I’d be happy to hear from those who have additional trends of their own.

Hackers are getting better tools

This one will increase the frequency of attacks, based on several factors:

  • Automation will let good hackers move faster
  • Less skilled hackers will now be able to use more sophisticated attacks
  • Lesser-known sites will see more “random” attacks, as the tools scan for vulnerabilities rather than hackers targeting specific companies and finding a way in

More attacks will be based on outsiders turned insiders

As perimeter defenses become better, most companies have continued to neglect the risk of the privileged insider.  So the easy money may lie in alternative approaches to gaining insider access.  Bribery and even extortion come to mind, but so does getting hired as a consultant or even an employee, mainly to get at the data.

I also put drive-by malware attacks in this category, as the unsuspecting user simply browsing a site lets malware in that attacks from the inside.

Organizations will focus on minimizing the attack surface

The less content you have, the easier it is to lock it down.  Just as the e-Discovery era brought about email retention policies, we’re beginning to see people deleting sensitive records as soon as they are no longer needed, reducing the information at risk.  At the same time, tools like tokenization will limit the number of databases with actual information to just one, while apps only store pointers.  By securing the one live repository (I’d recommend Sentrigo for this of course!), you’re now protected.
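
To make the tokenization idea concrete, here is a minimal sketch in generic SQL. The table and column names are mine, not from any particular product: real card numbers live only in a single, locked-down vault database, while every application database stores a meaningless token that points back to it.

-- Hypothetical sketch of tokenization; names are illustrative only.
-- In the single, locked-down "vault" database:
CREATE TABLE token_vault (
    token        CHAR(36)    PRIMARY KEY,  -- random surrogate, e.g. a GUID
    card_number  VARCHAR(19) NOT NULL      -- the only copy of the real value
);

-- In every application database, only the token is stored:
CREATE TABLE orders (
    order_id    INT      PRIMARY KEY,
    card_token  CHAR(36) NOT NULL          -- useless to a thief without access to the vault
);

A breach of any application database then yields only tokens; the one repository that matters can be monitored and locked down far more tightly.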

Databases finally make it to the cloud

There’s been much noise about the cloud, but so far I haven’t seen many customers putting business critical apps, with sensitive data, in the cloud.  One reason has certainly been concern about data security (and compliance).  With solutions like Hedgehog, you can deploy a small sensor that gets installed whenever and wherever the cloud provider puts your database, and it is just as secure as in your own datacenter.  And you can monitor the admins at the provider as well.  As companies get comfortable with these technologies, critical databases will move to the cloud.

Compliance will remain a “bare minimum” effort

Not so much a new trend, but I expect that in the continuing difficult economy we will still see most companies investing the least amount possible to comply with regulations, rather than taking an approach based on what they consider best practices for securing data.  Thus, we’ll still see breaches of “compliant” companies, and as a result there will be pressure on auditors to enforce more strictly, and pressure on regulators to update standards to fill commonly exploited gaps.  Staying on top of this will require flexibility.

So, here they are. I’d love to hear your thoughts…

As I wrote in a previous post, truncating tables or scrambling content might not remove the actual data from the datafiles. The examples I gave in that post were Oracle related and now I’ll show the same using MS SQL Server 2005. I’d like to thank Dmitriy Geyzerskiy for providing the actual working example.

create database Test
go

use Test
go

-- Create a dummy table
create table aaa (a varchar(100));
go

BEGIN TRANSACTION
-- Populate with dummy data (object names)
insert into aaa
select name from sys.all_objects;
COMMIT;

-- Make sure the data is flushed to the disk
CHECKPOINT;

-- Get the file and page offsets
SELECT
    CONVERT (VARCHAR (6),
        CONVERT (INT,
            SUBSTRING (sa.first_page, 6, 1) +
            SUBSTRING (sa.first_page, 5, 1))) AS [File offset],
    CONVERT (VARCHAR (20),
        CONVERT (INT,
            SUBSTRING (sa.first_page, 4, 1) +
            SUBSTRING (sa.first_page, 3, 1) +
            SUBSTRING (sa.first_page, 2, 1) +
            SUBSTRING (sa.first_page, 1, 1))) AS [First page]
FROM
    sys.system_internals_allocation_units AS sa,
    sys.partitions AS sp
WHERE
    sa.container_id = sp.partition_id
    AND sp.object_id = OBJECT_ID('aaa');

-- Allow DBCC output in the user window
DBCC TRACEON(3604);

-- Truncate the table
TRUNCATE TABLE aaa;

-- Examine the contents of the page (all the objects from the truncated table are still there)
DBCC PAGE ('Test', -- database name
            1,     -- [File offset] from previous query
            73,    -- [First page] from previous query
            3);    -- extended output option
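
If you actually want to scrub that residue, one option (not part of Dmitriy’s example, just a hedged suggestion assuming SQL Server 2005 SP2 or later) is to ask the engine to clean residual data from free pages and then re-check the page:

-- Hedged sketch, assuming SQL Server 2005 SP2 or later: remove residual information
-- left on free/deallocated pages, then re-examine the same page to verify.
EXEC sp_clean_db_free_space @dbname = N'Test';

DBCC PAGE ('Test', 1, 73, 3); -- same file and page as above; your offsets may differ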

I had an interesting conversation with Alexander Kornbrust yesterday about cloning databases. Most DBAs I know copy database files from production to create staging, integration and test environments. Those environments contain a lot of sensitive information (PII, credit card numbers, etc.) which is usually either deleted, scrambled or truncated. The problem with these approaches is that most DBAs forget that the database performs logical deletes, not physical deletes. This can be easily demonstrated on Oracle with the following simple steps, which create a table, populate it with dummy data, truncate it, and then show the data from the dump file:

  • create table test(t varchar2(30));
  • insert into test select object_name from user_objects where rownum < 1000;
  • commit;
  • select dbms_rowid.rowid_relative_fno(rowid), dbms_rowid.rowid_block_number(rowid) from test where rownum < 2;
  • truncate table test;
  • For the following step, replace ‘x’ and ‘y’ with the results from the previous select
  • alter system dump datafile x block y;
  • show parameter user_dump_dest
  • Check out the new file in the user_dump_dest directory. The file will contain the truncated data in the block.

Of course, this is just an example, but it is worth thinking about. It is also worth considering Transparent Data Encryption (TDE) to protect the datafiles from being read directly.
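
For completeness, here is a hedged sketch of what column-level TDE looks like with Oracle 10gR2/11g syntax; it assumes an encryption wallet location is already configured in sqlnet.ora, and the password is a placeholder.

-- Hedged sketch of column-level TDE (Oracle 10gR2/11g syntax); assumes an
-- ENCRYPTION_WALLET_LOCATION is configured in sqlnet.ora. Password is a placeholder.
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_password";  -- create/open the wallet and master key

CREATE TABLE test_tde (t VARCHAR2(30) ENCRYPT);                   -- column values are encrypted on disk

Repeating the truncate-and-dump exercise against a table like this should show ciphertext in the block dump rather than readable object names.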

DBAs out there – what do you do to remove sensitive information from your non-production environments?

You know that data breaches have become part of big business reality when the Harvard Business Review publishes a hypothetical case study entitled “Boss, I Think Someone Stole Our Customer Data”. The case study does a very good job of illustrating the initial confusion and many gray areas that enterprises face when confronted with a possible breach.

When the first signs of a possible breach appear, there is often uncertainty regarding the nature of the breach, its extent, and whether there has been a breach at all. Insider breaches are especially tough, because insiders have a better shot at covering their tracks than intruders from the outside, and have more of the attack surface visible to them to begin with (this is one place where database monitoring can help).

Once it is established that a breach has occurred – and this does not have to be with 100% certainty, it’s enough to establish that a breach is likely – there are many things an enterprise needs to do, and do quickly.

Finding the culprit(s) (the “who done it”) would be many people’s instinct, but it should actually be quite low on anyone’s list, and it usually takes a long time anyway. The top 3 immediate steps that I would take are as follows:


Back after a short and much-needed hiatus, I came across this piece by security analyst Eric Ogren on Computerworld’s website. He discusses how DBAs have become public enemy number one because of compliance mandates to enforce segregation of duties, and how this has been blown out of proportion relative to other, greater risks.

A few days pass, and the story about the Fidelity database breach comes to light (incidentally I chose this article from Computerworld as well). A senior DBA sold 2.3 million records, including bank account and credit card details, to a data broker.

So are DBAs “dangerous” or not?

Unfortunately, there is no denying the risk element. If risk is the product of the probability of an incident happening and the potential damage that incident could cause, then, due to the latter factor, DBAs, as well as other highly skilled insiders with access privileges, pose a significant risk.

This does not mean, however, that there is a high probability of DBAs becoming malicious insiders. Obviously, the vast majority of DBAs pose no threat to their employers or clients, but the old adage of one rotten apple applies nonetheless. While there is a much higher probability that someone who is not a DBA would try and breach the database, the DBA is in a much better position to succeed should he or she really want to do that.

An external hacker would find it very difficult to achieve this kind of scale (millions of records) without insider cooperation. It is difficult to determine what direct damages this will bring to Fidelity and its customers, but the bad publicity is already quite significant: running a news search on Google for “fidelity data breach” yielded 529 results at the time of writing.

Clearly, there is a problem here which cannot be ignored, but on the other hand, Eric’s conclusion was absolutely correct – DBAs are a part of the solution, and I would even stress that they are an essential part of the solution. The fact is, DBAs know more about database security than anyone else. They know more about database vulnerabilities, exploits and hacks, and more about how to address them than anyone else. Trying to implement a database security solution by circumventing or ignoring DBAs would be futile.

It is important, for security as much as for regulatory compliance reasons, to monitor and audit DBA activity. In fact, this should be done for all users who access the database. DBAs are the first to understand this. If you work in a bank vault, you know there are CCTV cameras on you. You want those cameras on you. DBAs are in a similar situation and they understand this requirement completely.
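
As a baseline (and leaving our own product aside), even Oracle’s native auditing can put some of those “cameras” in place. A hedged sketch, with hr.employees standing in for any sensitive table:

-- Hedged sketch using Oracle's built-in auditing; both parameters are static,
-- so they only take effect after a restart.
ALTER SYSTEM SET audit_trail = db SCOPE = SPFILE;            -- write the audit trail to the database
ALTER SYSTEM SET audit_sys_operations = TRUE SCOPE = SPFILE; -- also record what SYS/SYSDBA does

-- Audit access to a sensitive table by all users, DBAs included:
AUDIT SELECT, INSERT, UPDATE, DELETE ON hr.employees BY ACCESS;

This is only a baseline – dedicated monitoring adds real-time alerting and keeps the trail out of reach of the very users it watches – but it is far better than nothing.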

What DBAs should not accept are two kinds of solutions that one sometimes comes across (sometimes it isn’t the tool but the implementation process):

  • Solutions that hinder or interfere with the DBA’s daily tasks – DBAs are primarily concerned with running databases efficiently. Any solution that jeopardizes this primary objective is counter-productive, and doomed to fail anyway because DBAs and other staff will find ways to circumvent it.
  • Solutions that ignore DBA input – As I suggested, DBAs are not as opposed to the notion of monitoring their own activities as some people think, so there is no real reason not to involve them. More importantly, I believe it is simply impossible to implement a solid database security solution without DBA cooperation. Any solution that ignores the specific data structures, user profiles, schemas and views simply cannot be doing a good job. Those are all managed by DBAs.

Finally, there is the question of priorities. Obviously my company sells database security monitoring products, so my view is not objective, but I’ll say this: Databases are still the most neglected parts of the enterprise IT infrastructure security-wise, especially when taking the magnitude of the threat into account. The Fidelity incident is just the latest in a long string of examples demonstrating this.

While it’s not headline news yet (and may never achieve such lofty status), a recent database breach at UWF was exposed and later reported in local news. What exactly happened and how many records were compromised is, as usual in such cases, unknown.

This made me think: We hear of breaches at universities all too frequently. Privacy Rights Clearinghouse, a website that documents data breaches, lists over 140 breaches in universities since January 2005. That’s more than one per week on average. Ouch.

Why is that?

The crucial factor here is that universities have very large populations of “insiders”. Students are like employees: They are authorized users. They have logins and passwords. They are also young and rebellious, and many are tech savvy – e.g., computer science students, to state the painfully obvious. Some are “hackers”, looking to prove they can hack, or influenced by some anarchist/Marxist/New Age book they browsed in the library, and others may be more traditionally motivated by money, criminal intent or a deep desire to change their grades…

This is also a transient population, and very hard to control. Every 3-4 years the population changes almost completely. Unlike employees, they do not stay long enough to develop any kind of loyalty, and of course they don’t get paid – quite the contrary, they’re the ones paying.

What about the data itself? Naturally grades are very important to students, but they are of little value to anyone else. Other student data is a lot more interesting, including Social Security numbers, bank account details and other personally identifiable information – the bread and butter of identity thieves. At least gone are the days when SSNs were used as student numbers – although many of those still lurk in alumni databases around the US, which highlights another point: Although the population is transient, the data is not. It stays. A large-ish university will have hundreds of thousands of former student records. Quite the honeypot.

Universities mostly lack the IT resources that Fortune 500 companies have, but the challenge they face in securing their data is no less daunting. I think that one simple, non-technical solution would be not to collect unnecessary data in the first place, and if it must be collected for current students, to dispose of it once the student graduates. As an alumnus, why would I possibly need my alma mater to retain my Social Security number?
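
As a purely illustrative sketch (the table and column names are hypothetical), the disposal part could be as simple as a scheduled job along these lines:

-- Hypothetical schema: scrub PII once a student has graduated.
UPDATE students
   SET ssn          = NULL,
       bank_account = NULL
 WHERE status = 'GRADUATED';

-- Note: a plain UPDATE or DELETE is a logical operation; the old values may
-- still linger in the datafiles until they are overwritten.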

Technically, there are many things universities can do, but I don’t want to sound tedious already on my second post (hint: If you don’t monitor database activity, you won’t know if the DB was breached, when, how, by whom and how badly – but enough of the hard sell).

What better way to start a blog about database security than to discuss what is possibly the biggest data breach ever?

It now seems that several banks are suing TJX over claimed losses of tens of millions of dollars – so negligence in data protection carries a cash penalty, not just nebulous damage to reputation. Gross negligence, in fact – this is not some one-off lapse in judgment such as a laptop with sensitive information forgotten on a bus, or a CD lost in the post.

The details recently published about the ongoing investigation provide insight into what possibly happened:

  1. The breach lasted 17 months: For 17 months someone (or more than one person) was systematically stealing data. I can only infer from this that security measures and procedures at TJX were grossly inadequate. It also means the breach was not accidental – it may have been opportunistic at first, but certainly malicious after that. More likely it was malicious from the start.
  2. Insider(s) were involved: It seems that some encrypted credit card data was decrypted using keys, which only an insider with privileged access would have. Whether such an insider was knowingly complicit or duped into divulging such information is unknown, but it shows us all what the sophisticated criminals already know – why bother sweating and hacking your way through firewalls and IDS when it’s so much simpler to use an insider?
  3. Utter lack of visibility: Most astonishing of all, more than 50 experts TJX put on the case have reached no conclusions. “Besides not knowing how many thieves were involved, TJX isn’t sure whether there was one continuing intrusion or multiple separate break-ins, according to a March 28 regulatory filing.”
    In other words, the thieves either did a great job of covering their tracks (and they certainly had ample time to do that!), or worse, they didn’t have to do it because their actions were invisible to begin with…

It is clear that even a rudimentary audit could have prevented the breach from going undiscovered for so long. It is also evident that encryption alone wasn’t enough to protect the data, and that perimeter defenses such as firewalls are useless against inside jobs like this one.
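
To make the “rudimentary audit” point concrete, here is a hedged sketch using Oracle fine-grained auditing (10g+ syntax; the schema and table names are invented) to record every read of a payment-card table:

-- Hedged sketch: fine-grained auditing of reads against a hypothetical card table.
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'STORE',
    object_name     => 'PAYMENT_CARDS',
    policy_name     => 'AUDIT_CARD_READS',
    statement_types => 'SELECT');
END;
/

-- Review who has been reading card data, and what they ran:
SELECT db_user, timestamp, sql_text
  FROM dba_fga_audit_trail
 WHERE object_name = 'PAYMENT_CARDS';

Seventeen months of unexplained reads would be hard to miss in a trail like that.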

But ultimately, the entire thing could have been prevented with real-time monitoring and intrusion prevention at the database level.