
Using OpenAM as a Trusted File Authorization Engine

A common theme in the DevOps world, or any containerization-style infrastructure, is the need to verify which executables (or files in general) can be installed, run, updated or deleted within a particular environment, image or container.  There are numerous ways this could be done.  Consider a use case where exes, Android APKs or other 3rd-party compiled files need to be installed or used.

How do we know a file is not malicious, is of trusted origin and hasn't been tampered with?  A simple way is to have a whitelisting engine that contains a definition of the particular files that may be installed or run within an environment.  This is where OpenAM can come in.  The policy engine in AM is not just tied to HTTP-based URLs and verbs.  Any arbitrary resource set can be defined, with associated actions and the necessary policies.

But how can we describe a file in the context of a resource?  A common method is to take a hash of the file object using a decent, modern algorithm such as SHA-256.  Most macOS or Linux distros ship a SHA-256 sum tool (such as sha256sum or shasum) that can be used for this.  But first, we need to create a resource type in AM for our files.
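For scripting purposes the same hash is easy to compute programmatically.  A minimal Python sketch (the helper name sha256_of_file is just illustrative):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 8192) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks so
    large executables don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

The hex digest this produces matches the output of the command-line sum tools, so either can be used to build the resource patterns.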


So here we create a simple resource type called "exe", with a pattern that will consist of the SHA-256 sum of the file we want to check.  The actions for each file will consist of what the OS could do to the file...
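As a rough illustration, the resource type could be expressed as a JSON payload like the sketch below.  The field names (name, patterns, actions) follow the AM /json/resourcetypes schema as I understand it; verify them against the REST documentation for your AM version before relying on them:

```python
import json

# Illustrative sketch of an "exe" resource type, to be POSTed to
# ../openam/json/resourcetypes?_action=create (check your AM version's
# REST docs for the exact schema).
resource_type = {
    "name": "exe",
    "description": "Executable file identified by its SHA-256 sum",
    # The resource pattern is simply the file hash; "*" permits any
    # hash value to be matched by individual policies.
    "patterns": ["*"],
    # The actions an OS could perform against the file, with their
    # default values.
    "actions": {
        "install": True,
        "run": True,
        "update": True,
        "delete": True,
    },
}

payload = json.dumps(resource_type)
```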


The actions simply map to basic file tasks such as install, run, delete, etc.

Next we need to create a set of policies that map to each file we want to add to our whitelist.  The policies are wrapped by a policy set.



This will act as a container for each policy we need to create.  But what does a policy look like?  Each policy will contain the hash, some actions, and will also return the name of the file in the response attributes.




It's important to set the subject condition to "Not Never Match", as we are not performing a user test, so no user session will ever be passed for evaluation.
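A policy of this shape might look like the following sketch.  The field names follow the AM /json/policies schema as I understand it, and the "NOT"/"NONE" subject types are my reading of how "Not Never Match" serialises over REST; the policy-set name, file name and resource type UUID are all placeholders:

```python
import hashlib, json

# Placeholder file contents, purely for illustration.
file_hash = hashlib.sha256(b"example file contents").hexdigest()

# Illustrative whitelist policy definition -- verify the exact field
# names against your AM version's policies REST documentation.
policy = {
    "name": "whitelist-" + file_hash[:12],
    "active": True,
    "applicationName": "FileWhitelist",          # the policy set (container)
    "resourceTypeUuid": "<resource-type-uuid>",  # UUID of the "exe" resource type
    "resources": [file_hash],                    # the SHA-256 sum is the resource
    "actionValues": {"install": True, "run": True,
                     "update": False, "delete": False},
    # "Not Never Match": the subject condition always succeeds, since
    # no user session is passed for evaluation.
    "subject": {"type": "NOT", "subject": {"type": "NONE"}},
    # Return the original file name in the response attributes.
    "resourceAttributes": [
        {"type": "Static", "propertyName": "filename",
         "propertyValues": ["myapp.exe"]}
    ],
}

body = json.dumps(policy, indent=2)
```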


But how can we get all of these policies created and imported into AM?  Well, this just requires a little automation on a trusted machine you want to use as the whitelist source.  It's pretty simple to take a hash of a file and, in turn, create a JSON payload of the policy definition that can be sent to OpenAM's ../openam/json/policies?_action=create endpoint in a POST request.  It's pretty trivial to script this process and run it against hundreds of files at once.
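The automation could be sketched roughly as below.  The host name, policy-set name, resource type UUID and the iPlanetDirectoryPro session header are placeholders/assumptions for your own deployment; only the ?_action=create endpoint comes from the flow described above:

```python
import hashlib, json, os, urllib.request

AM_BASE = "https://openam.example.com/openam"  # placeholder AM base URL

def sha256_of_file(path):
    """Hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_policy(path, resource_type_uuid):
    """Build one whitelist policy payload for a file (field names per
    the AM policies REST schema -- verify against your version)."""
    file_hash = sha256_of_file(path)
    return {
        "name": "whitelist-" + file_hash[:12],
        "active": True,
        "applicationName": "FileWhitelist",
        "resourceTypeUuid": resource_type_uuid,
        "resources": [file_hash],
        "actionValues": {"install": True, "run": True},
        "subject": {"type": "NOT", "subject": {"type": "NONE"}},
        "resourceAttributes": [
            {"type": "Static", "propertyName": "filename",
             "propertyValues": [os.path.basename(path)]}
        ],
    }

def import_policies(directory, resource_type_uuid, sso_token):
    """POST one policy per file to ../json/policies?_action=create."""
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if not os.path.isfile(path):
            continue
        req = urllib.request.Request(
            AM_BASE + "/json/policies?_action=create",
            data=json.dumps(build_policy(path, resource_type_uuid)).encode(),
            headers={"Content-Type": "application/json",
                     # Admin session token header name is deployment-specific.
                     "iPlanetDirectoryPro": sso_token},
            method="POST",
        )
        urllib.request.urlopen(req)
```

Pointing import_policies at a directory of trusted binaries then creates one whitelist policy per file in a single pass.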

So, we now have a policy set with hundreds of file definitions.  Now what?  Well, we can call the policy evaluation endpoint at ../openam/json/policies/?_action=evaluate with any file hash we find and get a result back from AM that tells us what we can do with the file within the OS or image.
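The evaluate request body itself is small.  A sketch, where the application name and the hash value are placeholders (the hash would be whatever sum the OS computes for the file it is about to act on):

```python
import json

# Illustrative evaluate request, to be POSTed to
# ../openam/json/policies/?_action=evaluate.  The resource is simply
# the file's SHA-256 sum; "FileWhitelist" is the placeholder policy set.
evaluate_request = {
    "resources": [
        "b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9"
    ],
    "application": "FileWhitelist",
}

body = json.dumps(evaluate_request)
```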



The response back from AM is equally simple:



Here we get a list of actions we can perform against the file: we can install and run it, but not delete or update it.
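A consumer on the OS side only needs to read the actions map out of that decision.  The response shape below is an illustrative mock matching the decision just described (install and run permitted, delete and update denied); verify the exact structure against your AM version's evaluation docs:

```python
# Illustrative AM evaluation response: one entry per resource, with the
# permitted/denied actions and the response attributes carrying the
# original file name back.
sample_response = [{
    "resource": "<file-sha256>",
    "actions": {"install": True, "run": True,
                "delete": False, "update": False},
    "attributes": {"filename": ["myapp.exe"]},
    "advices": {},
}]

decision = sample_response[0]
allowed = sorted(action for action, ok in decision["actions"].items() if ok)
filename = decision["attributes"]["filename"][0]
```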

This is another basic example of how modern authorisation is moving away from just modelling HTTP pages and verbs, and towards REST-based policy decision points and evaluation for virtually any resource.
