Implementing Zero Trust & CARTA within AM 6.x

There is an increasing focus on perimeterless approaches to security design and on the buzzy "defensive security architectures".  This blog takes a brief look at implementing a contextual and continuous approach to access management that can help fulfil those design aspirations.

The main concept is simple: collect some contextual data at login time, collect it again at resource access time, and look for differences between the two.  But why is this remotely interesting?  Firstly, big walls don't necessarily mean safer houses.  The classic firewall approach to security - keeping the bad out and the good in - no longer works for the large modern enterprise.  The good and the bad are everywhere, so access control decisions should be based on data above and beyond that directly related to the user identity, with enforcement as close to the protected resource as possible.

With Intelligent AuthX, we can start to collect and store non-identity-related signals during login - saving those either in the user's profile store or within the session - so they can be used as a baseline for comparison later on.


The classic flow is something like the following:




Contextual data is captured during authentication, which at its most basic could be something like a hash of the device fingerprint, IP address or User-Agent.  That information is then stored against the user's profile (or, in truth, anywhere accessible, such as session properties).  At authorization time, a replay of that context is provided - in the form of a signed JWT, to prevent tampering or spoofing - which is then analysed by AM's PDP to look for any differences.  The concept is essentially the passing and analysing of a "context envelope" between each event stage.
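Purely to illustrate the shape of such an envelope, the following Python sketch builds a hypothetical set of claims; the claim names, hash algorithm and lifetime here are assumptions rather than anything mandated by AM:

import hashlib
import time

# Hypothetical values captured by the login tree (purely illustrative)
user_agent = "Mozilla/5.0 (X11; Linux x86_64)"
client_ip = "203.0.113.10"

def sha256_hex(value):
    # Store/compare a digest rather than the raw (potentially personal) value
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

now = int(time.time())
context_envelope = {
    "sub": "demo.user",                      # subject the envelope relates to (assumed claim name)
    "iat": now,                              # issued-at, checked for freshness at authorization time
    "exp": now + 300,                        # short lifetime keeps the replay window small
    "userAgentHash": sha256_hex(user_agent),
    "ipHash": sha256_hex(client_ip),
}
print(context_envelope)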

Zero Trust and CARTA

There are two models this approach fits nicely within.  Firstly, Forrester's Zero Trust model, and latterly Gartner's CARTA (Continuous Adaptive Risk & Trust Assessment) concept.  Any changes that have occurred in the time since login will be captured and, even if the session/cookie/access_token is still live and valid, altered context can mean access is denied, redirected, audited differently and so on.

Why is that important?  Numerous scenarios arise where token validity alone is not enough.  What about session hijacking, man-in-the-middle (MITM) attacks, replay attacks and so on?  Applying a layer of context provides a type of access binding, similar in concept to proof-of-possession, reducing the gaps that may exist between token issuance and token use.

The classic half-parabola sees assurance at its highest just after login time - perhaps the application of MFA has provided a nice high value.  But the seconds, minutes and hours after the initial login will see the level of associated trust degrade:



So by the time the user is accessing resources, long after authentication time, the actual assurance could be low.  To solve this, things like step-up or transaction-based authentication can be used to regain trust.

What we're looking to achieve, though, is "continuous" trust.  This takes the above and makes tiny interruptions in an attempt to re-calibrate the trust level upwards again.  The result is a much more consistent assurance level and the ability to provide fine-grained responses if any trust difference is found.  For example, there is often more benefit in allowing access for a particular risky event, but with caveats - such as greater auditing, access throttling or data redaction.  A black-and-white allow or deny can be replaced with a more "grey scale" response structure.  This reduces user interruptions, but also allows downstream systems, using session properties or PDP advice payloads, to dynamically personalise the access and content.




Capturing Context At Login Time

So let's create a basic authentication tree in ForgeRock AM that looks something like the following:

Basic tree capturing IP and User Agent


So we just group together the Save IP To Profile and Save User-Agent To Profile nodes after a successful DataStore authentication has taken place.

The settings for each capture node allow for the hashing of the captured value.  This is important for privacy preservation (it is also worth noting that consent would be needed and end-user notification given, explaining that this level of information is being stored…).


Optional hashing of context data


So a basic login, using already available fields, would look something like the following:


Example of context storage



Great.  So how can we use this stuff at authorization time?  Pretty simple really.  We just use a scripted authorization condition to check the inbound values against those stored on the user profile.

The newer 5.0 agents (https://backstage.forgerock.com/docs/openam-jee-policy-agents/5/java-agents-guide/#jee-agent-continuous-security), or simple REST calls via IG or native apps, can provide AM with any environmental attribute.  This could also be provided via a header.

The context data would ideally come across within a JWT.  Why?  Well, it helps to prevent tampering or spoofing.  The signing of the JWT can be done using HMAC for PoCs, with the symmetric signing key captured using standard collector and setter authentication nodes.  You could also use the same process to capture a public key.  The HMAC key (or public key) is then simply stored against the user's profile.

There are numerous tools that can be used for PoCs to create test JWTs for this purpose - jwtgen, for one.  A quick script to use jwtgen is available here.
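As another option for a PoC, a test context JWT can be minted with something like PyJWT.  A minimal Python sketch, assuming an HMAC key that has already been captured and stored against the user's profile (the key value and claim names below are illustrative):

import time
import jwt  # PyJWT: pip install pyjwt

hmac_key = "demo-shared-secret"   # the symmetric key also stored against the user's profile
now = int(time.time())

claims = {
    "sub": "demo.user",
    "iat": now,
    "exp": now + 300,
    # Replay of the context captured at login, hashed the same way it was stored
    "userAgentHash": "<sha256-of-user-agent>",
    "ipHash": "<sha256-of-client-ip>",
}

context_jwt = jwt.encode(claims, hmac_key, algorithm="HS256")
print(context_jwt)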

Integration Over REST


A simple request to the ../openam/json/policies?_action=evaluate endpoint would look something like the following:

{
    "resources": [
        "http://app.example.com:8080/main.html"
    ],
    "application": "ResponsiveAccess",
    "environment": {
        "context": ["<<JWT>>"]
    }
}
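For quick testing, the same call can be made from Python with the requests library.  The base URL, session header name and API version below are assumptions that would need aligning with the actual deployment, and the token used must be permitted to call the policy engine:

import requests

AM_BASE = "http://openam.example.com:8080/openam"    # assumed deployment URL
SSO_TOKEN = "<valid-session-token>"                  # token of a user permitted to evaluate policies
CONTEXT_JWT = "<signed-context-jwt>"

body = {
    "resources": ["http://app.example.com:8080/main.html"],
    "application": "ResponsiveAccess",
    "environment": {"context": [CONTEXT_JWT]},
}

response = requests.post(
    AM_BASE + "/json/policies?_action=evaluate",
    json=body,
    headers={
        "iPlanetDirectoryPro": SSO_TOKEN,      # default session cookie/header name; may differ per deployment
        "Accept-API-Version": "resource=2.0",  # pin the policies endpoint version (assumption)
    },
)
response.raise_for_status()

# Each decision carries the resource, permitted actions, any attributes and any advice
for decision in response.json():
    print(decision["resource"], decision.get("actions"), decision.get("advices"))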


The script verifies the JWT (exp and iat freshness, signature and claims comparison) and provides the necessary advice and response attributes.
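The condition itself runs as a scripted authorization condition inside AM, but the core checks it performs can be sketched in Python as follows (the claim names, freshness window and advice key are assumptions):

import time
import jwt  # PyJWT

MAX_AGE_SECONDS = 300   # how fresh the context envelope must be (assumption)

def evaluate_context(context_jwt, hmac_key, stored_ua_hash, stored_ip_hash):
    """Return (authorized, advice) after verifying and comparing the context envelope."""
    try:
        # Signature and exp are checked here; a bad signature or expired token fails outright
        claims = jwt.decode(context_jwt, hmac_key, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False, {"ZeroTrust": "invalid or expired context JWT"}

    # iat freshness: reject envelopes minted too long ago, even if exp is generous
    if time.time() - claims.get("iat", 0) > MAX_AGE_SECONDS:
        return False, {"ZeroTrust": "stale context"}

    # Compare the replayed context against the baseline stored at login time
    if claims.get("userAgentHash") != stored_ua_hash:
        return False, {"ZeroTrust": "user-agent changed since login"}
    if claims.get("ipHash") != stored_ip_hash:
        return False, {"ZeroTrust": "IP address changed since login"}

    return True, {}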


Integration Using Policy Agents


The newer 5.0 agents can provide access to continuous security header and cookie data when making a call into the AM policy decision point.  This is pretty trivial to set up.

Within the agent profile, specify what context data to capture and submit.



You would, though, need to get the context JWT added as an HTTP header on the request.  The agent can then pass that header into the scripted authorization condition.

That is all there is to it.  Within the protected resource policy, simply reference the Zero Trust script to compare the user-agents and IP addresses.



For the IG integration - where an API response is dynamically redacted - an example scripted handler is available here.  This script intercepts the GET response from an API and, dynamically at run time, redacts a particular value based on the advice returned by AM.
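The IG handler itself is a scripted (Groovy) handler, but the redaction decision it makes can be sketched in Python like this (the advice key and redacted field name are purely illustrative):

import json

def redact_response(api_response_json, advices):
    """Strip a sensitive field from an API response when AM advice flags a context mismatch."""
    payload = json.loads(api_response_json)

    # Rather than denying outright, degrade the response when Zero Trust advice is present
    if "ZeroTrust" in advices:
        payload.pop("accountNumber", None)   # hypothetical sensitive field to redact
        payload["redacted"] = True

    return json.dumps(payload)

# Example: advice returned alongside an "allow with caveats" decision
print(redact_response('{"name": "demo", "accountNumber": "12345678"}', {"ZeroTrust": ["IP changed"]}))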



To demo this, simply log in via the capture context tree to save the necessary user-agent or IP address.  Generate a context JWT with either the correct or an erroneous IP claim, and present that during the authorization request for the necessary use case.

NB - this blog was updated August 2nd 2018, with reference to Google and new screen shots.
NB - this blog was updated again, April 8 2019, with further scripted examples.
