Wednesday, July 6, 2011

Does what we do matter? Explanations from Lenny and Alan

Yesterday, I read a great post by Lenny Zeltser on four reasons why security assessment reports go ignored or unread.  The reasons he put forth are spot on, and as a DoD auditor, I see them first hand.  More often than not, what I see is a combination or blend of several of his reasons.

If you have not read his post, allow me to direct you there... it is not long.

I'm not one to rant (much), but some of these reasons really fly in the face of how the DoD is supposed to operate.  The Department of Defense has specific guidelines and regulations that must be followed in securing its IT infrastructure; these are called out in various instructions, such as DoDI 8500.2.  I do not have a problem going on site and finding controls that are not compliant; there may be a justifiable reason, or plain ignorance may leave a control or two in a non-compliant state.  The issue I have with some units/networks/enclaves/bases is when we audit a site multiple times and find the same non-compliant controls, or sometimes more non-compliant controls than in the previous audit.  That is when I know there is a real problem.

Point three (from Lenny's post) is how I used to explain the non-compliance we found; I assumed that an over-worked, under-staffed IT/IA department simply had too many fires to put out.  The commanding officer cannot get his email.  Or reach Facebook.  Or a router has gone down.  Or the SQL database is down, and the main application used by the unit is unusable.  I get it.

However, there are units that we have audited more than three times (as the accreditation cycle revolves) that still have the same number of non-compliant controls, or, in one base's case, more; that base actually got worse between audits.  On these trips, I saw indifference from the administrators.  It was almost as if they did not care that we were there doing our job; they were nearly non-responsive when we asked for help.  We may have seen ignorance, but how can you not know that you have a SAN in the data center, or that half your servers are virtualized and therefore subject to the ESX checklist?  Over those four years, they received at least three DIACAP reports from us, including POA&Ms they could use to track open issues.

It was only when the Inspector General's office started sniffing around and threatened to pull the plug because there was no activity to remediate open findings that the unit sprang into action.

Alan Paller had a great editorial opinion in the December 17, 2010 issue of SANS NewsBites.  Because I don't have a link for the quote, I'll reproduce it below:
EDITORIAL: "Accredit and Forget It": How Some U.S. Government Agencies
  Fib On Cyber Security (Alan Paller)
First a few words about how the system works: Before a federal system is allowed to go online, it must be given "Approval To Operate" (ATO) status.  Only a Designated Accrediting Authority (DAA) is allowed to accredit a system and give it an ATO.  Any security weaknesses exposed to the DAA generally need to have a fix defined and scheduled for implementation and listed in an Information Technology (IT) Security Plan of Action and Milestones (POA&M).  If there is no plan to fix the weaknesses, the system is not supposed to be granted ATO status.  That's how the system works, but with one damaging addition.  A lot of the most important fixes are not made - ever.  They stay on the POA&M for so many months or years, without action, that the whole process has been given the nickname "accredit and forget it." Sometimes the agencies notice how long they have ignored an important action. When they do, they take it off the POA&M and put it back on, with a new start date. That way it doesn't look like it was ignored, even though it was.  Then last week we learned from a contractor that one of the large civilian agencies has automated the process of changing the date.  If an action has stayed on a POA&M for too long, the computers automatically change its start date so it appears to have been just added. That way it doesn't look like the agency is skimping on security.  If senior executives in the White House want to wake up the CIOs and show them security matters, they could make that "automated fibbing system" a very public career-ending mistake for the CIO of that agency.

I truly believe that this occurs more often than not, and the data from our auditing trips bears it out.  I would love for there to be some sort of check and balance for when you know the client (system/unit/enclave/base) is just paying you lip service in order to check off a box (see point 1 in Lenny's post).  I have seen a few of my (now ex-) coworkers leave this sector because the constant flouting of open findings, or the mismanagement behind it, drove them to the realization that our work does not matter.  And, sad to say, I cannot disagree with them.
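If I ever built such a check, it might be something as simple as diffing POA&M exports between audits and flagging items that are still open or whose start dates have quietly moved forward.  The sketch below is purely illustrative: the file names and column headings are hypothetical, and real POA&M exports will look different.

```python
# Hypothetical sketch: compare two POA&M exports (CSV) taken at different audits
# and flag findings that are still open, or whose "start date" moved forward
# even though the finding itself never changed. Column names are made up;
# real POA&M exports will differ.
import csv
from datetime import datetime

def load_poam(path):
    """Return {finding_id: (status, start_date)} from one CSV snapshot."""
    items = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            items[row["finding_id"]] = (
                row["status"].strip().lower(),
                datetime.strptime(row["start_date"], "%Y-%m-%d"),
            )
    return items

def flag_suspicious(previous, current):
    """Yield findings that stayed open across audits or had their start date reset."""
    for fid, (status, start) in current.items():
        if fid not in previous:
            continue  # genuinely new finding, nothing to compare against
        old_status, old_start = previous[fid]
        if status != "closed" and old_status != "closed":
            reason = "still open since last audit"
            if start > old_start:
                reason += " and start date was reset"
            yield fid, reason

if __name__ == "__main__":
    prev = load_poam("poam_2009_audit.csv")   # hypothetical snapshot from the last audit
    curr = load_poam("poam_2011_audit.csv")   # hypothetical snapshot from this audit
    for fid, reason in flag_suspicious(prev, curr):
        print(f"{fid}: {reason}")
```

Even something that crude would at least put numbers behind the feeling that a site is resetting dates rather than fixing findings.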

We are supposed to move to NIST controls.  We are supposed to start embracing SCAP tools.  I do not know if that will help, but I am hoping that some kind of change will bring about more remediation.
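One thing SCAP output should make easier is comparing scan results between audits instead of arguing over spreadsheets.  As a rough, hypothetical sketch (XCCDF namespaces vary by version, so this just matches tags by local name and assumes you have a results file from each scan):

```python
# Hypothetical sketch: tally outcomes from SCAP (XCCDF) results files so that
# two scans from consecutive audits can be compared at a glance.
# XCCDF namespaces differ between versions, so tags are matched by local name.
import sys
from collections import Counter
import xml.etree.ElementTree as ET

def local_name(tag):
    """Strip any XML namespace prefix from a tag name."""
    return tag.rsplit("}", 1)[-1]

def tally_results(path):
    """Count rule-result outcomes (pass, fail, notchecked, ...) in one results file."""
    counts = Counter()
    for elem in ET.parse(path).iter():
        if local_name(elem.tag) == "rule-result":
            for child in elem:
                if local_name(child.tag) == "result" and child.text:
                    counts[child.text.strip()] += 1
    return counts

if __name__ == "__main__":
    # e.g. python tally_xccdf.py last_audit_results.xml this_audit_results.xml
    for path in sys.argv[1:]:
        print(path, dict(tally_results(path)))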

As I said before, I do not like to rant; I would rather work on a solution to the problem.  But it is getting more and more frustrating as the problem becomes more pervasive.
