Happy New Year everyone!

As promised, in this blog post I will deal with the PL/SQL fuzzer I’ve created in my spare time and during flights. The goal was to provide an easy tool for DBAs to test PL/SQL code inside the database, whether that code was developed internally or by a 3rd party. Before describing the architecture of the fuzzer and showing examples, I would like to make the following clarifications / warnings:

  • Fuzzing on production is a BIG no-no. Never run the fuzzer on any database you care about. Always use test copies, because running the fuzzer may crash / corrupt the database.
  • The fuzzer cannot guarantee that the code is not vulnerable; it can only try to find existing vulnerabilities. Running the fuzzer on a procedure and receiving a clean result does not mean that this procedure is free of vulnerabilities, because the fuzzer does not analyze the code and does not visit all the code paths.
  • The fuzzer is in no way, shape or form a finished product. It will blow up in your face. It will fail when running your code. It contains multiple bugs. USE RESPONSIBLY!!!

Now that the warnings part is over, let’s talk about the design.
I chose PL/SQL for the following reasons:

  • Easy to run SQL statements
  • Built into the database
  • Cross platform
  • Good enough for the task
  • DBAs already speak it fluently
  • Can be easily scheduled as a DB job from inside the database

The design is fairly simple and is based on the following requirements:

  • Must use database tables to track executions across invocations and to change various fuzzing parameters
  • Must try and find interesting (dynamic) code using discovery
  • Must easily generate reports on the fuzzing results
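To make the requirements concrete, here is a minimal, hypothetical sketch of the driver loop in Python with SQLite. The real fuzzer is pure PL/SQL running inside the database, and every name below (tables, the stand-in procedure, the payloads) is invented for illustration only:

```python
import sqlite3

# Table-driven fuzzing loop (illustrative sketch only).  Parameters and
# results live in database tables so runs can be tuned and resumed, and
# so reports are a simple query away.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fuzz_params (param TEXT)")
conn.execute("CREATE TABLE fuzz_results (target TEXT, param TEXT, outcome TEXT)")
conn.executemany("INSERT INTO fuzz_params VALUES (?)",
                 [("'",), ("' OR 1=1 --",), ("A" * 4000,), ("0",)])

def target_proc(arg):
    """Stand-in for a stored procedure under test."""
    if "'" in arg:
        raise ValueError("unterminated string literal")  # simulated ORA- error

for (param,) in conn.execute("SELECT param FROM fuzz_params"):
    try:
        target_proc(param)
        outcome = "clean"
    except Exception as exc:
        outcome = f"error: {exc}"  # an error here is an interesting lead
    conn.execute("INSERT INTO fuzz_results VALUES (?, ?, ?)",
                 ("target_proc", param, outcome))

# Reporting is just a query against the results table
report = conn.execute(
    "SELECT param, outcome FROM fuzz_results WHERE outcome != 'clean'").fetchall()
```

Keeping everything in tables is what lets the real tool run as a scheduled DB job and pick up where it left off between invocations.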


Ah, finally home after 10 days of travel. I attended the UKOUG event in Birmingham and did a database security presentation and participated in a security round table. I also attended very interesting presentations by Pete Finnigan and Paul Wright.

One noteworthy presentation was called Breaking Oracle which showed how to create scenarios where the Oracle database crashes or spins. I thought that some of the examples in the presentation were major security issues that allow users to crash or spin Oracle with very simple queries.
Please don’t try this on your database –
select 1 from dual where regexp_like(' ','^*[ ]*a');
Or this:
SQL> create table t2(col1 varchar2(60));
SQL> create table t1(c1 varchar2(60),
c2 varchar2(1),
c3 varchar2(60),
c4 varchar2(60));
SQL> explain plan for
select 1 from t1 a, t2 b, t1 c
where b.col1 = 'xxslc_department'
and a.c1 not between c.c3 and c.c4
start with a.c2='p'
connect by prior a.c1 between a.c3 and a.c4;

I thought long and hard about what I was going to present during this conference. I did not want to do the usual stuff of insecure/default passwords, securing the listener or applying patches. I wanted to present something that would give the attendees a real call-to-action they could take with them immediately after the conference. So, I decided to do something simple that would demonstrate SQL injection on a made-up function and show how you should protect this function. Also, I wanted to show how DBAs could find such vulnerable code in the database and fix it.
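The core of that demo, concatenated input versus bind variables, can be sketched in a few lines. This is not the PL/SQL function from the talk; it is a made-up Python/SQLite illustration where the table, functions and payload are all invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "a-secret"), ("bob", "b-secret")])

def lookup_vulnerable(name):
    # User input concatenated straight into the statement text: injectable.
    sql = "SELECT secret FROM users WHERE name = '%s'" % name
    return conn.execute(sql).fetchall()

def lookup_safe(name):
    # Bind variable: the input is treated as data, never as SQL text.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR 'a'='a"
leaked = lookup_vulnerable(payload)  # the injected OR returns every row
safe = lookup_safe(payload)          # no user is literally named that
```

The same fix applies in PL/SQL: pass inputs to dynamic SQL with a USING clause instead of concatenating them into the statement.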


It’s that time of the quarter again. Oracle just released another CPU, this time with 15 DB vulnerabilities compared with the 11 in the July CPU and 15 in April. There are also some interesting vulnerabilities for Oracle EBS and application server. Sentrigo is represented by Guy Pilosof and myself in the credits section.

The vulnerabilities contain the usual mix of SQL injections, buffer overflows and network attacks, but I’m missing some of the usual suspects (vulnerable components) like AQ (advanced queuing) while seeing some old friends like CDC and LT. It’s interesting to note that, again, one of the DB vulnerabilities is remotely exploitable without authentication. Of course, you should patch as soon as possible, but as we all know there are many factors that prevent you from doing so as frequently as you’d like.

I can bet on a good Sushi dinner that right now, security researchers and hackers around the world are busy reversing the CPU and understanding exactly what the vulnerabilities are. Expect POC scripts to pop up on various websites in the next few days. We at Sentrigo are already familiar with some of the vulnerabilities and have protections for them, while additional ones will be added over the next few days.

The usual advice applies:

  • Patch as soon as possible
  • Install only what you use, don’t install features you are not going to use and remove them if installed by default – many vulnerabilities are in rarely used components like Oracle Spatial, etc.
  • Use the least privilege principle – give the minimum permissions required for the task – every permission can be used to attack the database (create view, create procedure, etc.). Many packages can be used for an attack. Lock them down.
  • Check for default and weak passwords – there are many tools out there. Check after every patch, as there have been cases where a patch restored default accounts.
  • Secure the network – use firewalls, valid node checking, etc.
  • Use secure coding – bind variables, bind variables, bind variables.
  • If you can’t patch quickly and your databases remain vulnerable, try virtual patching as a stop-gap solution.
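On the secure-coding bullet: one rough way a DBA can hunt for injectable code is to scan procedure source for EXECUTE IMMEDIATE statements that glue variables in with the || operator. Here is a minimal sketch in Python; in a real database the source text would come from DBA_SOURCE, and both the snippets and the heuristic below are my own invention, so expect false positives and negatives:

```python
import re

# Heuristic: flag EXECUTE IMMEDIATE statements whose SQL text is built by
# concatenation (||), a classic injection smell.  Bind variables passed
# via a USING clause do not trip the pattern.
CONCAT_EXEC = re.compile(r"EXECUTE\s+IMMEDIATE\s+[^;]*\|\|", re.IGNORECASE)

# Made-up stand-ins for rows pulled from DBA_SOURCE
snippets = {
    "bad_proc": "EXECUTE IMMEDIATE 'DELETE FROM t WHERE id = ' || p_id;",
    "good_proc": "EXECUTE IMMEDIATE 'DELETE FROM t WHERE id = :1' USING p_id;",
}

flagged = [name for name, src in snippets.items() if CONCAT_EXEC.search(src)]
```

A crude scan like this won’t prove anything on its own, but it gives you a short list of procedures worth reviewing by hand.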

So, when are you going to patch?

I’ve just noticed that Microsoft has removed the DBCC BYTES command from DBCC.
On SQL Server 2005:

DBCC TRACEON(2588)
DBCC HELP ('?')
GO

activecursors
addextendedproc
addinstance
auditevent
autopilot
buffer
bytes
cacheprofile
cachestats
callfulltext
checkalloc
checkcatalog
checkconstraints
checkdb
checkfilegroup
checkident
checkprimaryfile
checktable
cleantable
clearspacecaches
collectstats
concurrencyviolation
cursorstats
dbrecover
dbreindex
dbreindexall
dbrepair
debugbreak
deleteinstance
detachdb
dropcleanbuffers
dropextendedproc
config
dbinfo
dbtable
lock
log
page
resource
dumptrigger
errorlog
extentinfo
fileheader
fixallocation
flush
flushprocindb
forceghostcleanup
free
freeproccache
freesessioncache
freesystemcache
freeze_io
help
icecapquery
incrementinstance
ind
indexdefrag
inputbuffer
invalidate_textptr
invalidate_textptr_objid
latch
loginfo
mapallocunit
memobjlist
memorymap
memorystatus
metadata
movepage
no_textptr
opentran
optimizer_whatif
outputbuffer
perfmon
persiststackhash
pintable
proccache
prtipage
readpage
renamecolumn
ruleoff
ruleon
semetadata
setcpuweight
setinstance
setioweight
show_statistics
showcontig
showdbaffinity
showfilestats
showoffrules
showonrules
showtableaffinity
showtext
showweights
shrinkdatabase
shrinkfile
sqlmgrstats
sqlperf
stackdump
tec
thaw_io
traceoff
traceon
tracestatus
unpintable
updateusage
useplan
useroptions
writepage
cleanpage
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Running the same thing on SQL Server 2008 shows that the list no longer contains DBCC BYTES.

I wonder what the reason for this change is (I’ve checked the binary and it does not contain DBCC BYTES, so it’s not just a help omission). I can think of several security reasons for removing this feature, such as preventing remote reads of interesting parts of memory. On the other hand, it can be totally unrelated to security. If anyone out there knows, please do share.

It’s been a long time since I’ve written anything here. I’ve been extremely busy with my family’s move to the Bay Area. I still can’t believe the amount of paperwork required – I’ve filled out what feels like hundreds of forms and it’s not over yet. But after a month here, I can say that we’ve finally settled down. The kids go to school, the house is almost fully organized, and my wife and I got our iPhone 3Gs 🙂


Anyway, back to the subject of this entry – weird statements you see coming from applications when monitoring databases.

  • I’m still amazed by the number of statements doing things like ‘where 1=1’, added out of the programmer’s sheer laziness to check whether the condition being appended to a dynamic query is the first or the second. It’s not that this really hurts performance on mature databases – the optimizer will strip such predicates away when evaluating the execution plan – but those statements can really throw off a security solution trying to alert on SQL injection. Seeing such statements from applications written by database vendors (you know who you are) can really get me frustrated!
  • Another oddity I mostly see on MS SQL Server databases is the tendency to dynamically create stored procedures on the fly, and then call them to do simple things like updates and inserts. Does anyone really think that this is more secure or provides better performance than simply running the statement?
  • An anti-design pattern I’ve seen many times is choosing the ID of the next row by selecting max(id) + 1 from the table. It really made me laugh when I saw this code in one particular instance responsible for adding rows to the audit table! In a highly transactional environment, two sessions can perform select max(id) + 1 at the same time and receive the same number. Trying to use this as a new id will succeed in one session and fail in the other, thus omitting records from the log.
  • Enough was written about the “When others then null” exception handling…
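The max(id) + 1 race from the third bullet is easy to reproduce. Here is a small sketch in Python with SQLite, simulating the two sessions sequentially (the table and column names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audit_log (id INTEGER PRIMARY KEY, msg TEXT)")
conn.execute("INSERT INTO audit_log VALUES (1, 'start')")

def next_id():
    # The anti-pattern: pick the next key by scanning for the current max.
    return conn.execute("SELECT MAX(id) + 1 FROM audit_log").fetchone()[0]

# Two concurrent sessions, simulated sequentially: both read before
# either writes, so both compute the same "next" id.
id_a = next_id()
id_b = next_id()

conn.execute("INSERT INTO audit_log VALUES (?, 'session A')", (id_a,))
lost = False
try:
    conn.execute("INSERT INTO audit_log VALUES (?, 'session B')", (id_b,))
except sqlite3.IntegrityError:
    lost = True  # session B's audit record is silently dropped
```

A sequence (or an identity column) exists precisely so the database hands out unique values without this read-then-write window.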

How about you guys out there? What is the weirdest statement you’ve seen applications perform?

As I wrote in a previous post, truncating tables or scrambling content might not remove the actual data from the datafiles. The examples I gave in that post were Oracle related and now I’ll show the same using MS SQL Server 2005. I’d like to thank Dmitriy Geyzerskiy for providing the actual working example.

create database Test

go

use Test

go

-- Create a dummy table
create table aaa (a varchar(100));

go

BEGIN TRANSACTION
-- Populate with dummy data (object names)
insert into aaa
select name from sys.all_objects;

COMMIT;

-- Make sure the data is flushed to the disk
CHECKPOINT;

-- Get the file and page offsets
SELECT
CONVERT (VARCHAR (6),
CONVERT (INT,
SUBSTRING (sa.first_page, 6, 1) +
SUBSTRING (sa.first_page, 5, 1))) as [File offset],
CONVERT (VARCHAR (20),
CONVERT (INT,
SUBSTRING (sa.first_page, 4, 1) +
SUBSTRING (sa.first_page, 3, 1) +
SUBSTRING (sa.first_page, 2, 1) +
SUBSTRING (sa.first_page, 1, 1))) AS [First page]
FROM
sys.system_internals_allocation_units AS sa,
sys.partitions AS sp
WHERE
sa.container_id = sp.partition_id
AND sp.object_id = OBJECT_ID('aaa');

-- Allow DBCC output in the user window
DBCC TRACEON(3604)

-- Truncate the table
TRUNCATE TABLE aaa

-- Examine the contents of the page (all the objects from the truncated table are there)
DBCC PAGE ('Test', -- database name
1, -- [File offset] from previous query
73, -- [First page] from previous query
3) -- extended output option

I had an interesting conversation with Alexander Kornbrust yesterday about cloning databases. Most DBAs I know copy database files from production to create staging, integration and test environments. Those environments contain a lot of sensitive information (PII, CC, etc.) which is usually either deleted, scrambled or truncated. The problem with these solutions is that most DBAs forget that the database performs logical deletes and not physical deletes. This can be easily demonstrated on Oracle by the following simple steps that create a table, populate it using dummy data, truncating it and showing the data from the dump file:

  • create table test(t varchar2(30));
  • insert into test select object_name from user_objects where rownum < 1000;
  • commit;
  • select dbms_rowid.rowid_relative_fno(rowid), dbms_rowid.rowid_block_number(rowid) from test where rownum < 2;
  • truncate table test;
  • For the following step, replace 'x' and 'y' with the results from the previous select
  • alter system dump datafile x block y;
  • show parameter user_dump_dest
  • Check out the new file in the user_dump_dest directory. The file will contain the truncated data in the block.

Of course, this is just an example but it is worth thinking about. It is also worth considering TDE to protect the data files from direct reading.
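The same logical-delete behavior is easy to show outside Oracle and SQL Server, too. Here is a sketch using Python and SQLite (the file name is invented, and I set SQLite’s secure_delete pragma off explicitly, which is its usual default, so that freed space is not zeroed):

```python
import os
import sqlite3
import tempfile

# Create an on-disk database, insert a "sensitive" value, then delete it.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA secure_delete=OFF")  # the common default
conn.execute("CREATE TABLE test (t TEXT)")
conn.execute("INSERT INTO test VALUES ('TOP-SECRET-VALUE')")
conn.commit()
conn.execute("DELETE FROM test")  # logical delete only
conn.commit()
conn.close()

# Read the raw data file: the deleted bytes are still sitting in the page.
with open(path, "rb") as f:
    raw = f.read()
still_there = b"TOP-SECRET-VALUE" in raw
```

Different engines, same lesson: a delete or truncate marks space reusable, it does not scrub the bytes from the datafile.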

DBAs out there – what do you do to remove sensitive information from your non-production environments?

A somewhat technical post on MS SQL Server encrypted triggers.

It turns out that MS SQL Server 2005 has an issue with encrypted triggers in the model database. We’ve created an encrypted database level trigger on DDL operations in all databases including the model database so that when a new database is created the trigger will be created in the new database as well. The problem we’ve encountered is that the encrypted triggers are not correctly copied to the new database.

For example, here is the code for creating the triggers:

exec sp_MSForeachdb
'use ?;
SET QUOTED_IDENTIFIER ON;
SET ANSI_NULLS ON;
IF EXISTS (SELECT * FROM sys.triggers
WHERE parent_class = 0 AND name = ''TEST_DDL_TRIGGER'')
DROP TRIGGER TEST_DDL_TRIGGER
ON DATABASE;',
'?',
'USE ?;
SET QUOTED_IDENTIFIER ON;
SET ANSI_NULLS ON;
EXECUTE(''CREATE TRIGGER TEST_DDL_TRIGGER
ON DATABASE WITH ENCRYPTION
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
-- Do something...
END;'')';
Now, let’s test this:
create database test;
GO
use test;
create table tt (id int);

And voila –

Msg 102, Level 15, State 1, Procedure TEST_DDL_TRIGGER, Line 1
Incorrect syntax near '0xfa86'.

So, is there any MS SQL Server expert out there who can shed some light on this behavior? It looks like a bug to me.

I promised to blog a bit about my traveling, so here I go:

I was visiting customers in India and the US and giving presentations to Oracle user groups in the US. Amazingly, the state of US airports is just getting worse every month. Flying from Israel to India and from India to NY went without any problems. However, not only did a 35-minute flight from NY to Boston take 3 hours, they also managed to lose my suitcase in the process. Every flight I took in the US in the previous week was late.

Enough moaning and back to Oracle security… I would like to share with you some insights I had while giving presentations. First, it looks as if database security is getting more and more attention from both DBAs and IT managers. By a show of hands at the presentations, I could see that at least some of the DBAs are handling security issues as part of their day-to-day job. But still, DBAs are not hearing the following from their managers – “last year you met your MBOs because no database breach occurred. Here is your bonus…” – though many have heard the bonus speech for HA or performance MBO achievements.

Second, almost no one had deployed the July 2007 Critical Patch Update from Oracle. From a crowd of about 50, only 2 raised their hands.

Third and most startling of all, only about a third of the DBAs have ever deployed an Oracle CPU. Let me repeat that again – more than two thirds of DBAs in this small but significant sample have never deployed an Oracle CPU. Ever.

So this got me thinking – do we care about Oracle CPUs at all? Oracle was getting a lot of heat from security researchers for not providing security patches, or for providing them at irregular intervals. Finally, Oracle is stepping up to the plate. They provide patches on a regular basis, and they announce each patch before issuing it so organizations can prepare. They are improving coding techniques and code vulnerability scanning tools. And after all that, customers are still not protected. The reason is that the database is an extremely complicated piece of software and is the life-line of the organization. An enterprise needs to test a CPU thoroughly before deploying it, and testing takes a lot of time (months). This is further complicated by the fact that many organizations have applications running on top of Oracle databases, and those applications are not “forward compatible” and certified by their vendors to run on future Oracle versions.

Ironically, from a security standpoint the situation after a CPU is announced is actually worse than before it is announced: The hackers get a road-map of all the vulnerabilities while most organizations have not yet plugged those holes. This is a similar notion to hacking IPS software in order to retrieve vulnerabilities (see this black hat presentation).

I’m not saying that Oracle should stop providing CPUs. Quite the contrary – I’m saying that organizations must deploy CPUs as quickly as possible to keep this sensitive period short. Even considering the objective difficulties in applying patches, it seems that enterprises are not taking database vulnerability seriously enough. Organizations must also have other solutions to mitigate the threat in the post-CPU-release period. Those solutions must not change the Oracle software at all, or else they will fall into the same trap of interdependency, stability issues and so forth. They must provide virtual patches that test for attacks and plug the security holes from the outside.

I am curious to know other people’s experiences and views on this topic – so fire away…

Back after a short and much needed hiatus, I came across this piece by security analyst Eric Ogren on Computerworld’s website. He discusses how DBAs have become public enemy number one because of compliance mandates to exercise segregation of duties, and how this has been blown out of proportion to other, greater risks.

A few days pass, and the story about the Fidelity database breach comes to light (incidentally I chose this article from Computerworld as well). A senior DBA sold 2.3 million records, including bank account and credit card details, to a data broker.

So are DBAs “dangerous” or not?

Unfortunately, there is no denying the risk element. If risk is the arithmetical product of the probability of an incident happening and the potential damage that incident could cause, then due to the latter factor DBAs as well as other highly skilled insiders with access privileges pose a significant risk.

This does not mean, however, that there is a high probability of DBAs becoming malicious insiders. Obviously, the vast majority of DBAs pose no threat to their employers or clients, but the old adage of one rotten apple applies nonetheless. While there is a much higher probability that someone who is not a DBA would try and breach the database, the DBA is in a much better position to succeed should he or she really want to do that.

An external hacker would find it very difficult to achieve this kind of scale (millions of records) without insider cooperation. It is difficult to determine what direct damages this will bring to Fidelity and its customers, but the bad publicity is already quite significant: Running a news search on Google for fidelity data breach yielded 529 results at the time of writing.

Clearly, there is a problem here which cannot be ignored, but on the other hand, Eric’s conclusion was absolutely correct – DBAs are a part of the solution, and I would even stress that they are an essential part of the solution. The fact is, DBAs know more about database security than anyone else. They know more about database vulnerabilities, exploits and hacks, and more about how to address them than anyone else. Trying to implement a database security solution by circumventing or ignoring DBAs would be futile.

It is important, for security as much as for regulatory compliance reasons, to monitor and audit DBA activity. In fact, this should be done for all users who access the database. DBAs are first to understand this. If you work in a bank vault, you know there are CCTV cameras on you. You want those cameras on you. DBAs are in a similar situation and they understand this requirement completely.

What DBAs should not accept are two kinds of solutions that one sometimes comes across (sometimes it isn’t the tool but the implementation process):

  • Solutions that hinder or interfere with the DBA’s daily tasks – DBAs are primarily concerned with running databases efficiently. Any solution that jeopardizes this primary objective is counter-productive, and doomed to fail anyway because DBAs and other staff will find ways to circumvent it.
  • Solutions that ignore DBA input – As I suggested, DBAs are not as opposed to the notion of monitoring their own activities as some people think, so there is no real reason not to involve them. More importantly, I believe it is simply impossible to implement a solid database security solution without DBA cooperation. Any solution that ignores the specific data structures, user profiles, schemas and views simply cannot be doing a good job. Those are all managed by DBAs.

Finally, there is the question of priorities. Obviously my company sells database security monitoring products, so my view is not objective, but I’ll say this: Databases are still the most neglected parts of the enterprise IT infrastructure security-wise, especially when taking the magnitude of the threat into account. The Fidelity incident is just the latest in a long string of examples demonstrating this.
