
Wednesday, 5 December 2018

Notifications During Authentication Life Cycle

A quick blog post discussing some of the simpler ways of handling authentication and session life cycle notifications in ForgeRock Access Management.

Firstly, a few definitions.  Authentication - working out whether someone or something is who they claim to be, generally handled via a login flow.  Authentication life cycle?  Well, that login process needs a start and an end - and at the end of the login process there is typically a session life cycle too.  So what are notifications?  Pretty simply, messages sent to 3rd party systems that rely on either the authentication or session service, so that they can perform local actions - e.g. an application using a session token to allow access.

So why is this interesting?  A couple of example use cases: notifying a 3rd party when a user on a particular device has logged in - perhaps a honeypot system - or notifying a relying party that a session has ended, so it can terminate any local sessions within an application.

Webhooks

Let's start at the end first.  ForgeRock Access Management 6.0 introduced a feature called Treehooks, with a specific Treehook - the Logout Webhook - implemented out of the box.  This Webhook replaces some of the functionality that used to be performed by the post authentication plugin onLogout() method.

Webhooks sit within the Authentication config area and are pretty trivial to set up.






The configuration is basically the detail describing where the notification will go - namely an HTTP endpoint, delivered via a POST request.  So we simply enter the necessary headers, body and so on.  The body, by design, has access to several variables.  These variables are fully described in the product documentation, but basically contain information relating to the issued session object.
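As a rough sketch only (the endpoint below is made up for this demo, and the variable names should be checked against the documented list rather than taken from here), a webhook calling a hypothetical session event API might be configured along these lines:

    Url:     http://api.example.com:9090/session-events
    Header:  Content-Type: application/json
    Body:    { "user": "${username}", "event": "session-terminated" }

AM substitutes the ${...} variables into the body when the webhook fires at session termination.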

So how do we use this webhook?  Firstly, just create a basic intelligent authentication tree, and add in the Register Logout Webhook authentication node.  It only has one config item - a drop-down of the previously created hooks.  Choose the appropriate one.



Notify Request Node

In addition to the logout webhook, there is also a ForgeRock Marketplace HTTP Notify Request Node.  This is basically the same as the logout webhook, except it can be placed at any point in the authentication tree.  To configure, simply build it, add it to your deployment and drag it onto the intelligent auth tree canvas.  The configuration is similar to the logout webhook, in the sense that this is an HTTP POST request requiring the necessary body and headers.  The main difference, of course, is that as there is no session created yet, the variables are limited to ${username}.  You could easily extend this if more information from the auth tree shared state were needed.
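For illustration (again, the API endpoint is a made-up demo service), the node could be configured to post a login event with just that variable:

    Url:     http://api.example.com:9090/login-events
    Header:  Content-Type: application/json
    Body:    { "user": "${username}", "event": "login" }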


So we now have a final tree that looks something like the following:



This is a simple username and password tree (passwords are gonna live forever right??).  During login, a sample API will receive a message saying a user has logged in.  On termination of the session (via a logout), the API will also receive a message.  The session termination event type is also captured - this is subtly important, as the termination may have come about from a user logout, a session idle timeout, a session maximum lifetime timeout or even an administrative termination.


Friday, 26 October 2018

WebAuthn Authentication in AM 6.5


ForgeRock Access Management 6.5 will have out of the box integration for W3C WebAuthn. This modern “FIDO2” standard allows cryptographic passwordless authentication – integrating with a range of authenticators, from USB keys to the fingerprint and facial recognition systems found natively in many mobile and desktop operating systems.

Why is this so cool? Well, firstly we know passwords are insecure and deliver a poor user experience. But aren’t there loads of strong MFA solutions out there already? Well, there are, but many are proprietary, require complex integrations and SDKs and, ultimately, don’t provide the level of agility that many CISOs and application designers now require.

Rolling out a secure authentication system today will probably only result in further integration costs and headaches tomorrow, when the next “cool” login method emerges.

A standards-based approach allows for easier interoperability and a more agile platform for change.

AM 6.5 has introduced two components to allow this to happen: a WebAuthn registration node and a WebAuthn login node. Super simple drag-and-drop nodes that can augment existing authentication journeys or simply be used on their own.

To demonstrate, take a look at the following user experience flow. This is using a Samsung S8 Edge mobile with fingerprint authentication enabled, accessing AM 6.5 via a Chrome Canary browser. This flow is registering my “credential” against AM.





So what just happened? Well, basically AM triggered some client side JavaScript, which accessed the new Credential Management API available in Chrome 70 – you can also find it in the latest Microsoft Edge and Firefox browsers too. This API interacted with the Android OS to trigger a security key flow. The Android device then prompted for a fingerprint, which caused a new public/private key pair to be generated and mapped to the user and the relying party (AM) triggering the event. The public key was sent back to AM, whilst the private key was stored in the local device’s secure element.

On the AM side, this was simply using a standard intelligent authentication tree and the native callbacks.



So I’ve now registered some credentials. What can I do with that?

Well basically, we leverage that credential during login. Take a look at this video that shows the login journey.


Hardly rocket science, eh? So here, again, AM triggers some client side JavaScript, which initiates the native OS prompting me for a fingerprint to locally authenticate to the device. Off the back of that, the device can respond to the challenge-response flow that AM initiates – I basically prove ownership of the private key corresponding to the public key that AM holds against my user profile.




Once complete, I’m authenticated and redirected to my user profile as normal. Simples. And not a password in sight.

The powerful aspect is really the interchangeability. An app owner can easily change authenticators without having to change the backend server-side deployment. No complex code changes, SDK roll-outs or device integrations. Simple and secure, with excellent agility.



Tuesday, 2 October 2018

OAuth2 Key Rotation in AM 6.5

With OAuth2 being the de facto authorization model many of our customers use, we have made a few improvements to how AM handles secrets in v6.5, which is released later this year.  The nightly build features some neat improvements in the secrets management API.

The API has been overhauled to make it simpler to use, simpler to integrate and more secure - as you'd expect.  A neat focus was on simplifying the key rotation use case.  Rotation is an essential part of deployment models - either as a reaction to a breach (implementing the 3 R's paradigm) or simply as best practice.  Here I'll show a simple demo of rotating an RSA key used to sign OAuth2 stateless access tokens.

Firstly, a few intro bits regarding the new Secrets Management setup.  We now have a new global configuration option for Secret Stores.


Here we see two out of the box keystores configured: the basic Java keystore.jceks and a default-passwords-keystore used for bootstrapping access.



The default-keystore config is useful for testing and dev, where you can quickly access the local file system based Java keystore.  The config for this is pretty straightforward.  The entrypass and storepass settings are de-referenced in the default-passwords-store, where these encrypted values are read from the filesystem to bootstrap access.


In production, though, many customers are likely to want to integrate with PKCS#11 fronted Hardware Security Modules (HSMs) or even a cloud vault.

Within the keystore configuration, there is a new mappings tab.  This tab is the interesting aspect with respect to rotation.  Here we can add an active alias and multiple secondary aliases for specific purposes.  These purposes allow focused use of keys within the secrets API.  A general good practice is to have very focused use of key material, which is declared and can't be mis-configured.


Each secret id maps into the internal secrets API.  The active alias is the one currently in circulation.  Clicking into an id shows that other secondary aliases can be set up.  If, for example, verification of an access_token fails with the active key, the secondary ones are tried.  Depending on what token type is presented to AM - if it's a JWT - AM can quickly match the kid in the header and map it to the appropriate secondary key in the list.



The alias names are mapped into whatever is in the keystore.

So what does this mean in reality?  Well, let me set up my OAuth2 provider to use the detault.rsa.sig.key signing alias for my stateless access_tokens.  Within the realm level OAuth2 service, click on the new Secrets tab.  This shows the default mappings coming from my keystore.  On the Advanced tab, make sure to change the signing algorithm to RS256.



After switching my provider to use stateless JWTs rather than stateful server side tokens, I go through and issue myself a bearer token payload... with a nice access_token.
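For the demo, any standard OAuth2 grant will do.  As a sketch, assuming a test client and user created for the purpose (the names and credentials here are made up), a resource owner password credentials request against the usual AM token endpoint would look something like this:

    POST /openam/oauth2/access_token HTTP/1.1
    Host: openam.example.com:8080
    Content-Type: application/x-www-form-urlencoded

    grant_type=password&username=demo&password=changeit&client_id=MyTestClient&client_secret=password&scope=profile

The JSON response contains the (now rather long) stateless access_token, along with token_type, expires_in and scope.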



Lots of letters in that one.  Net-net, if I quickly decode the stateless access_token header, I can see the key id it was signed with:
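The header is just the base64url-encoded first segment of the JWT, so decoding it reveals something like the following (the kid value is a placeholder here - yours will be a hash identifying the active signing key):

    {
      "typ": "JWT",
      "kid": "<key id of the active alias>",
      "alg": "RS256"
    }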


If I now introspect the token against the ../oauth2/introspect endpoint, all is well and the token is good to go.
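Introspection follows RFC 7662 - the request is authenticated as the client and simply carries the token.  A sketch, using the same made-up client credentials as before:

    POST /openam/oauth2/introspect HTTP/1.1
    Host: openam.example.com:8080
    Authorization: Basic <base64 of client_id:client_secret>
    Content-Type: application/x-www-form-urlencoded

    token=<the access_token issued above>

An "active": true in the JSON response, along with the scope, client_id and exp values, shows the signature verified against the current active key.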



So now I want to introduce a new signing key.  There are numerous ways to generate keys, so do this whichever way you feel is best.  I created a simple script that creates a 2048 bit RSA private key, then imports it into my AM ../openam/keystore.jceks file.
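If you don't fancy scripting it, one single-step alternative (an assumption on my part, not the exact script used here) is to let keytool generate the key pair straight into the keystore:

    keytool -genkeypair -alias newrsasigningkey \
      -keyalg RSA -keysize 2048 -sigalg SHA256withRSA \
      -dname "CN=newrsasigningkey,O=Example" -validity 730 \
      -storetype JCEKS -keystore /path/to/openam/keystore.jceks \
      -storepass <storepass> -keypass <keypass>

Whatever route you take, keep the passwords consistent with the values AM already holds for this keystore (see the note at the end of this post).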


If I do a keytool -list on that keystore, I will see my new key, called newrsasigningkey:


So far so good.  I now simply update the mapping in my keystore config to use my new key, and also make the new key my active alias.


This basically means that any new access_tokens being issued will be signed with the new key, and any inbound token that needs verifying will be checked against the new key first, falling back to the original test key second - simply allowing existing tokens signed with the old key to still be validated correctly.

An access_token issued with the newrsasigningkey will have a new key id in the JWT header:




However, both the first and second access_tokens issued can be verified by the ../introspect endpoint.

Another neat by-product is that all of the configuration done via the UI above could be done using the native REST APIs, which are easily viewable in the API explorer.  Simply copy and paste the necessary code widget into your app.



NB - worth noting that when you create the new test key, make sure the storepass entered via keytool matches the entrypass value configured in AM.

Wednesday, 11 April 2018

Zero Trust at Authentication & Authorization Time

With the current focus on zero trust architectures (see Google's BeyondCorp approach), it can be quite useful to start looking for subtle differences between the context at login time and the context that is captured and presented during resource access time.


The classic flow is something like the following:

Using context captured at authN time at authZ time

Contextual data is captured during authentication, which at its most basic could be the IP address, User-Agent or geo-location.  That information is then stored against the user’s profile (or anywhere accessible, in honesty).

Zero Trust and CARTA


That information is then retrieved and compared at authorization time – following Forrester's Zero Trust model or Gartner’s CARTA (Continuous Adaptive Risk & Trust Assessment) concept.  Any slight changes that may have occurred in the time since login will be captured and, even if the session/cookie/access_token is live and valid, if the context has altered, access can be denied, redirected, audited differently and so on.

Why is that important?  Numerous scenarios can arise where token validity is not enough.  What about session hijacking, man-in-the-middle (MITM) attacks, replay attacks and so on?  Applying a layer of context provides a type of access binding, similar in concept to things like proof-of-possession, reducing the gaps that may exist between token issuance and token use.

The classic half-parabola sees assurance at its highest just after login time – perhaps the application of MFA has provided a nice high value.  But the seconds, minutes and hours after the initial login will see the level of associated trust degrade:

Degrading assurance as time goes by



So by the time the user is accessing resources, long after authentication time, the actual assurance could be low.  To solve this, things like step-up or transaction based authentication can be used to regain trust.

Another approach is the concept of “continuous” access.  This takes the above and makes tiny interruptions, in an attempt to re-calibrate the trust level upwards again.  This can result in a “saw tooth” trust pattern:

Continual trust "bounce"

Capturing Context At Login Time

So let's create a basic authentication tree in ForgeRock AM looking something like the following:

Basic tree capturing IP and User Agent


So we just group together the Save IP To Profile and Save User-Agent To Profile nodes after a successful DataStore authentication has taken place.

The settings for each capture node allow for hashing of the captured value.  This is important for privacy preservation (it's also worth noting that consent would be needed and end user notification given, explaining that this level of information is being stored…).


Optional hashing of context data


So a basic login, using already available fields, would look something like the following:


Example of context storage



Great.  So how can we use this stuff at authorization time?  Pretty simple really.  We just use a scripted authorization condition to check the inbound values against those stored on the user profile.


The newer 5.0 agents (https://backstage.forgerock.com/docs/openam-jee-policy-agents/5/java-agents-guide/#jee-agent-continuous-security), or simple REST calls via IG or native apps, can provide AM with any environmental attribute.

Integration Over REST


A simple request to the ../openam/json/policies?_action=evaluate endpoint looking like the following would do it:

{
    "resources": [
        "http://app.example.com:8080/main.html"
    ],
    "application": "ResponsiveAccess",
    "environment": {
        "IP": ["127.0.0.5"],
        "User-Agent": ["Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.37"]
    }
}


The script evaluates both the inbound context and the static context on the user profile.  Any differences would result in the necessary advice being triggered:
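As a rough sketch of what comes back from the evaluate call in that situation (the advice key and message below are purely illustrative - they will be whatever the condition script populates), the denied response looks something like:

    [
      {
        "resource": "http://app.example.com:8080/main.html",
        "actions": {},
        "attributes": {},
        "advices": {
          "ZeroTrustContextMismatch": [ "user-agent-mismatch" ]
        }
      }
    ]

An empty actions map, plus the advice, tells the PEP to deny and gives it something meaningful to act on (step-up, re-auth, audit and so on).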

Mismatch advice



Integration Using Policy Agents


The newer 5.0 agents can provide access to continuous security header and cookie data when making a call into the AM policy decision point.  This is pretty trivial to set up.

Within the agent profile, specify what context data to capture and submit.



Here we're just adding the User-Agent - the requester's IP arrives by default, in an attribute called requestIP (for more details, see https://backstage.forgerock.com/docs/openam-jee-policy-agents/5/java-agents-guide/#j2ee-agent-continuous-security-properties).

That is all there is to it.  Within the protected resource policy, simply reference the Zero Trust script to compare the user-agents and IP addresses.



To demo this, simply log in via the capture context tree to save the necessary user-agent and IP address.  Then, using a user-agent spoofer (there are loads of them out there...), alter your browser and see the immediate change to a 403 in the PDP response.  Whilst the agents cache, the cache is keyed not just on the protected resource, but also on the associated environmental payload object that contains the contextual data.  Hence even a slight change results in immediate denial of access.

NB - this blog was updated August 2nd 2018 with reference to Google and new screen shots.

Thursday, 1 February 2018

Enhancing User Privacy with OpenID Connect Pairwise Identifiers

This is a quick post describing how to set up pairwise subject hashing when issuing OpenID Connect id_tokens that require the user's sub= claim to be pseudonymous.  The main use case for this approach is to prevent clients or resource servers from being able to track user activity and correlate the same subject's activity across different applications.

OpenID Connect basically provides two subject identifier types: public or pairwise.  With public, the sub= claim is simply the user id or equivalent for the user.  This creates a flow something like the below:

Typical "public" subject identifier OIDC flow

This is just a typical authorization_code flow - the end result is the id_token payload.  The sub= claim is simply in the clear and readable, which allows the possibility of correlating all of sub=jdoe's activity.

So, what if you want a bit more privacy within your ecosystem?  Well here comes the Pairwise Subject Identifier type.  This allows each client to be basically issued with a non-reversible hash of the sub= claim, preventing correlation.

To configure in ForgeRock Access Management, alter the OIDC provider settings.  On the advanced tab, simply add pairwise as a subject type.

Enabling Pairwise on the provider

Next alter the salt for the hash, also on the provider settings advanced tab.

Add a salt for the hash
Each client profile then needs either a request_uri setting or a sector_identifier_uri - basically akin to the redirect_uri whitelist, this is just a mechanism to identify client requests.  On the client profile Advanced tab, add in the necessary sector identifier and change the subject identifier to be "pairwise".

Client profile settings
Once done, just slightly alter the incoming authorization_code generation request to look something like this:

/openam/oauth2/authorize?response_type=code
&save_consent=0
&decision=Allow
&scope=openid
&client_id=OIDCClient
&redirect_uri=http://app.example.com:8080
&sector_identifier_uri=http://app.example.com:8080

Note the addition of the sector_identifier_uri parameter.  Once you've exchanged the authorization_code for an access_token, take a peek inside the associated id_token.  This now contains an opaque sub= claim:

{
  "at_hash": "numADlVL3JIuH2Za4X-G6Q",
  "sub": "lj9/l6hzaqtrO2BwjYvu3NLXKHq46SdorqSgHAUaVws=",
  "auditTrackingId": "f8ca531a-61dd-4372-aece-96d0cea21c21-152094",
  "iss": "http://openam.example.com:8080/openam/oauth2",
  "tokenName": "id_token",
  "aud": "OIDCClient",
  "c_hash": "Pr1RhcSUUDTZUGdOTLsTUQ",
  "org.forgerock.openidconnect.ops": "SJNTKWStNsCH4Zci8nW-CHk69ro",
  "azp": "OIDCClient",
  "auth_time": 1517485644000,
  "realm": "/",
  "exp": 1517489256,
  "tokenType": "JWTToken",
  "iat": 1517485656

}


The overall flow would now look something like this:


OIDC flow with Pairwise

Tuesday, 16 January 2018

Enhancing OAuth2 introspection with a Policy Decision Point

OAuth2 protection of resource server content is typically done either via a call to the authorization service (AS) ../introspect endpoint for stateful access_tokens, or, in deployments where stateless access_tokens are used, the resource server (RS) can perform "local" introspection, if it has access to the necessary AS signing material.  All good.  The RS would validate scope values, token expiration time and so on.

Contrast that with the typical externalised authorization model, with a policy enforcement point (PEP) and policy decision point (PDP).  Something being protected sends a request to a central PDP.  That request is likely to contain the object descriptor, a token representing the subject and some contextual data.  The PDP will have a load of pre-built signatures or policies that are looked up and processed.  The net-net is that the PDP sends back a deny/allow style decision, which the PEP (either in the form of an SDK or a policy agent) complies with.

So what is this blog about?  Well it’s the juxtaposition of the typical OAuth2 construct, with externalised PDP style authorization.

So the first step is to set up a basic policy within ForgeRock Access Management that protects a basic web URL – http://app.example.com:8080/index.html.  In honesty the thing being protected could be a URL, button, image, physical object or any other schema you see fit.

Out of the box authorization policy summary

To call the PDP, an application would create a REST payload looking something like the following:

REST request payload to PDP
The request is a POST to the ../openam/json/policies?_action=evaluate endpoint.  This endpoint is protected, meaning it requires authentication against an AM instance.  In a normal, non-OAuth2-integrated scenario, this would be handled via the iPlanetDirectoryPro header, which would also be used within the PDP decision.  In this case, though, we don't have an iPlanetDirectoryPro cookie for the end user - simply the access_token.

Application Service Account

So, there are a couple of extra steps to take.  Firstly, we need to give our calling application its own service account.  Simply add a new group and an associated application service user.  This account can then authenticate via shared secret, JWT, X.509 or any other configured authentication method.  Make sure to give the group the account is in privileges to call the REST PDP endpoint.  So, back to the use case...

This REST PDP request is the same as any other.  We have the resource being protected, which maps into the policy, and the OAuth2 access_token that was generated out of band, presented to the PDP as an environment attribute.
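Roughly, the payload is the standard evaluate payload, with the token carried in the environment map (the application name and the environment key name here are illustrative - they just need to match the policy set and whatever the condition script reads):

    {
      "resources": [
        "http://app.example.com:8080/index.html"
      ],
      "application": "OAuth2ProtectedResources",
      "environment": {
        "access_token": [ "<the access_token issued out of band>" ]
      }
    }

The call itself is authenticated with the service account's own session token in the iPlanetDirectoryPro header, rather than any end user session.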

OAuth2 Validation Script

The main validation now happens in a simple Policy Condition script.  The script does a few things: it performs a call to the AM ../introspect endpoint for basic validation - is the token AM issued, valid, within its exp and so on.  In addition, there are two switches - perform auth_level validation and perform scope validation.  Each of these functions takes a configurable setting.  If performAuthLevelCheck is true, make sure to set the acceptableAuthLevel value.  As of AM 5.5, the issued OAuth2 access_token contains a value called "auth_level".  This value ties in with the authentication assurance level that has been in AM since the OpenSSO days.  This numeric value is useful to differentiate how a user was validated during OAuth2 issuance.  The script basically provides a simple way to enforce a minimum acceptable value.

The other configurable switch is the performScopeCheck boolean.  If true, the script checks that the submitted access_token is associated with at least a minimum set of required scopes.  The access_token may have more scopes, but it must, as a minimum, have the ones configured in the acceptableScopes attribute.
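Pulling those settings together, the script's configuration ends up looking something like the following (the values are just examples for this demo - set them to whatever your deployment requires):

    performAuthLevelCheck = true
    acceptableAuthLevel   = 5

    performScopeCheck     = true
    acceptableScopes      = [ "profile", "email" ]

A token introspected with an auth_level below 5, or without both of those scopes, falls into the denied responses shown below.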

Validation Responses

Once the script is in place, let's run through some examples where access is denied.  The first simple one is where the auth_level of the access_token is less than the configured acceptable value.

acceptable_auth_level_not_met advice message

Next up is the situation where the scopes associated with the submitted access_token fall short of what is required.  There are two advice payloads that could be sent back here.  Firstly, if the number of scopes is fundamentally too small, the following advice is sent back:

acceptable_scopes_not_met - submitted scopes too few

A second response, associated with mismatched scopes, occurs when the number of scopes is OK, but the actual values don't contain the acceptable ones.  The following is seen:
acceptable_scopes_not_met - scope entry missing

That's all there is to it.  A few things to note: the TTL of the policy decision has been set to the exp of the access_token.  Clearly this is overridable, but it seemed sensible to tie it to the access_token lifespan.

All being well though, a successful response back would look something like the following - depending on what actions you had configured in your policy:

Successful PDP response

Augmenting with Additional Environmental Conditions

So we have an OAuth2-compatible PDP.  Cool!  But what else can we do?  Well, we can augment the scripted decision making with a couple of other conditions - namely the time based, IP address based and LDAP based conditions.

IP and Time based augmentation of access_token validation
The above just shows a simple example of tying the decision making to only allowing valid access_token usage between 8:30am and 5:30pm, Monday to Friday, from a valid IP range.  The other condition worth a mention is the LDAP filter one.

Note that any of the environmental conditions that require session validation will fail - the script isn't linking the access_token to an AM session at this point, and in some cases (depending on how the access_token was generated) there may never be a session associated with it.  So beware: they will not work.

The code for the scripted condition is available here.