Application Security: Exactly What Can Your Users Access?
The vast majority of the clients I work with use either hosted solutions or vendor-supplied systems for their core processing. When conducting risk assessments or audits, I keep running into stock answers the moment I examine application security controls. The most popular are "Oh, that's covered by the SAS 70" and "We don't need to worry about that; it's the vendor's responsibility." Application security is treated as an exercise in granting and managing privileges rather than as an effort to determine what exactly those privileges allow the user to access.
But what exactly can the user access?
I have experience in software quality assurance testing, and I know firsthand how quirky application software can be. I have a fairly impressive list of stories about users who exploited software bugs as if they were features, doing things the application never allowed or intended. Most of the time they acted with the best of intentions. But what about the cases where deficiencies were exploited to circumvent controls or hide activity?
My point is that vigilance in software testing is never-ending. Financial institutions need to continually push their software vendors for evidence that they're doing what's necessary to ensure the integrity of their products. And now, with this bulletin stating that "bank management remains responsible for ensuring that the application meets the bank's security requirements at acquisition and thereafter," that vigilance is expected.
Take a look at your Information Security and Vendor Management programs and see how your institution stacks up against this new guidance. You may be surprised by what you find.
Or have you already been surprised?