Tuesday, 14 November 2017

Ghost Banning and Dynamic Personalisation

So that is a bit of a title.  What's that all about then?  Login journeys, especially from a consumer identity and access management (CIAM) or digital identity perspective, have become much more complex.  Fine grained authentication has started to take over from the linear biometric and MFA (multi-factor authentication) approaches, with multiple pieces of non-identity contextual data augmented to the original identity through powerful choice and matrix flows.

This augmentation process helps to deliver two really powerful use cases.

Dynamic Personalisation


So what is personalisation referring to in the CIAM login landscape?  CIAM projects are focused on bringing service providers - those delivering the latest "killer app" / API / product (delete as applicable) to market as quickly as possible - considerably closer to their user community, through one-click social registration, single data views and friction free login.  The benefits of CIAM are bi-directional: better sign up and sign in services for end users, coupled with better data sharing and consent management, as well as fresher data, better analytics and increased trust from a service provider perspective.

The personalisation aspect refers to making the user login process responsive - covering everything from user interface, theming and data presentation, right through to pro-active notifications and changes.

The new fine grained authentication in ForgeRock Access Management 5.5 allows all of the non-identity contextual data captured through default login interactions to be made available to downstream protected APIs and applications via assured session properties.  Those properties are time boxed and dynamic - changing with every interaction - giving the application the ability to respond dynamically to the presented user, even if the credentials themselves stay the same.

Fine Grained AuthN Trees in AM 5.5
The benefit is a very simple way to capture and release data to the calling application, all using simple REST endpoints that have been available for several years.
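To make that concrete, here's a sketch of how a downstream application might request those session properties over REST.  The endpoint, version header and cookie name are assumptions based on the AM 5.5 REST API, so check them against your deployment; the base URL and token value are placeholders.

```python
import urllib.request

AM_BASE = "https://openam.example.com/openam"  # placeholder deployment URL

def build_get_session_properties_request(sso_token: str) -> urllib.request.Request:
    """Build (but do not send) the REST request that asks AM for the
    session properties populated by the authentication tree."""
    url = AM_BASE + "/json/sessions?_action=getSessionProperties"
    headers = {
        "Content-Type": "application/json",
        "Accept-API-Version": "resource=2.0",  # pin the endpoint version
        "iPlanetDirectoryPro": sso_token,      # default AM session cookie name
    }
    return urllib.request.Request(url, data=b"{}", headers=headers, method="POST")

req = build_get_session_properties_request("<session-token-placeholder>")
```

Sending the request with urllib.request.urlopen(req) would return the property map as JSON for the application to act on.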

Ghost-Banning


So personalisation is a significant benefit to service and application owners, but what about leveraging that data from a security perspective?

Well, the same data can be used to perform several security related actions.

Increased Auditing


A simple action, when presented with numerous different pieces of information, could be to trigger audit or information capture.  Using decision nodes within the fine grained authentication tree, basic if/else/switch style gates can be used to pump data out to third party tracking or SIEM solutions.
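As an illustration of the gating idea - plain Python rather than anything AM-specific, with the field names and actions invented for the example:

```python
# Sketch of the if/else gating a decision node performs: route contextual
# login data towards an audit or SIEM feed.  Field names are illustrative only.
def route_login_event(context: dict) -> str:
    """Return an audit action for a login attempt, given contextual data."""
    if not context.get("device_known", True):
        return "send-to-siem"        # unknown device: capture everything
    if context.get("failed_attempts", 0) > 2:
        return "extra-audit"         # repeated failures: richer audit trail
    return "standard-audit"          # nothing unusual: normal audit event
```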

Triggers for Additional Steps


A simple response to the contextual data - one that was also leveraged in the older authentication chains approach - is to trigger an MFA event based on the previous steps.  For example, if the credentials entered, even if correct, were found to be coming from a previously unknown device - or perhaps an untrusted one (think Chrome browser extension vulnerabilities, or the WannaCry attack on the NHS that targeted specific Microsoft operating systems) - that could trigger a step-up authentication step, or perhaps a redirect to a cleansing network.

Contextual Data Via Session Properties

Redirection and Banning


A common "trick" often used on social networks is the act of "ghost-banning".  This process allows users of a system - sometimes malicious, sometimes just in breach of certain terms of service - to be allowed into the system, but then given a minimal set of functionality, or perhaps redirected entirely to a functionally similar system on a separate "honey-pot" style network.  The reason?  To give the service owner a fine grained way of tracking behaviour, improving system response and learning about malicious activity.

So the net-net?  We know that MFA and linear based approaches to authentication and login are not enough.  Not enough from a malicious activity perspective, but also not enough from a deep personalisation standpoint.  Fine grained authentication trees, with end user choice and greater administrative control and integration, are delivering much more powerful login use cases in the CIAM space.

Wednesday, 21 June 2017

Creating Personal Access Tokens in ForgeRock AM

Personal Access Tokens (PATs) are used to provide scoped, self-managed access credentials that can be given to trusted systems and services that want to act on a user's behalf, or access user data.

Similar to OAuth tokens, they often don't have an expiration and are used conceptually in place of passwords.  A PAT could be used in combination with a username when performing basic authentication.

For example, see the https://github.com/settings/tokens page within GitHub, which allows scope-related tokens to be created for services that need access to your GitHub profile and data.  The token is used in conjunction with your username over basic authentication.
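Building that basic authentication header from a username and PAT is a one-liner; a quick Python sketch:

```python
import base64

def basic_auth_header(username: str, pat: str) -> str:
    """Build the HTTP Basic Authorization header value for a
    username + PAT pair, exactly as you would with a password."""
    raw = f"{username}:{pat}".encode("utf-8")
    return "Basic " + base64.b64encode(raw).decode("ascii")
```

A client would then send the result as the Authorization header on each request, e.g. {"Authorization": basic_auth_header("smoff", pat)}.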

PAT Creation

The PAT can be an opaque string - perhaps a SHA-256 hash.  Using a hash seems the most sensible approach to avoid collisions and create a fixed-length, portable string.  A hash without a key of course won't provide any creator assurance/verification function, but since the hash will be stored against the user profile and not treated like a session/token, this shouldn't be an issue.

An example PAT value could be:

f83ee64ef9d15a68e5c7a910395ea1611e2fa138b1b9dd7e090941dfed773b2c:{"profile" : [ "givenName", "fullName", "mail" ] }
a011286605ff6a5de51f4d46eb511a9e8715498fca87965576c73b8fd27246fe:{"profile" : [ "postalladdress", "mail" ]}

The key was simply created by running the resource and the associated permissions through sha256sum on Linux.  How you create the hash is beyond the scope of this blog, but it could be easily handled by, say, ForgeRock IDM and a custom endpoint in a few lines of JavaScript.
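For illustration, here is the same idea in a few lines of Python - hashlib in place of sha256sum; the helper name and the JSON canonicalisation are my own choices for the sketch:

```python
import hashlib
import json

def create_pat(permissions: dict) -> str:
    """Create a PAT entry: the SHA-256 hash of the permissions document,
    stored alongside the permissions themselves as hash:json."""
    doc = json.dumps(permissions, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
    return f"{digest}:{doc}"

pat = create_pat({"profile": ["givenName", "fullName", "mail"]})
```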

PAT Storage

The important aspect is where to store the PAT once it has been created.  Ideally this needs to be stored against the user's profile record in DJ.  I'd recommend creating a new schema attribute dedicated to PATs that is multi-valued.  The user can then update their PATs over REST the same as any other profile attribute.

For this demo I used the existing attribute called "iplanet-am-user-alias-list" for speed, as it is multi-valued.  I added in a self-created PAT for my fake resource:


Using a multi-valued attribute allows me to create any number of PATs.  As they don't have an expiration, they might last for some time in the user store.

PAT Usage

Once stored, they could be used in a variety of ways to provide "access" to other users, applications, service accounts or personas of yourself.  The simplest way is to leverage the AM authorization engine as a decision point to verify that a PAT exists and what permissions it maps to.

Once the PAT is created and stored, the end user can provide it to another user or service that they want to act on their behalf.  That service or user presents the username:PAT combination to the protected service that houses the data they want to access.  That service calls the AM authorization APIs to see if the user:PAT combination is valid - a typical resource server to authorization server dance in the OAuth2 world.

The protected service would call {{OpenAM}}/openam/json/policies?_action=evaluate with a payload similar to:
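As a sketch, the payload could be built like this.  The field names follow the AM policy evaluation REST API; the resource embeds the username and submitted PAT, and "PATValidator" is the policy set from this post.  The subject value is an assumption - with the subjects condition set to "NOT Never Match" it largely doesn't matter:

```python
import json

# Illustrative payload for ../json/policies?_action=evaluate.
payload = {
    "resources": [
        "pat://smoff:f83ee64ef9d15a68e5c7a910395ea1611e2fa138b1b9dd7e090941dfed773b2c"
    ],
    "application": "PATValidator",                     # the policy set name
    "subject": {"ssoToken": "<policyeval-session-token>"},  # assumption
}
body = json.dumps(payload)
```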


Here I am calling the ../policies endpoint with a dedicated account called "policyeval", which has the ability to read the REST endpoint and also read realm users, which we will need later on.  These privileges are set via the Privileges tab within the admin console.

If the PAT exists within the user profile of "smoff", AM returns an access=true message, along with the resource and associated permissions, which can be used within the calling application:


So what needs setting up in the background to allow AM to make these decisions?  Well, it's all pretty simple really.

Create an Authorization Resource Type for PATs

Firstly create a resource type that matches the pat://*.* format (or any format you prefer):



Next we need to add a policy set that will contain our access policies:



The PATValidator policy set only contains one policy, called AllPATs, which is just a wildcard match for pat://*:*.  This will allow any combination of user:PAT to be submitted for validation:




Make sure to set the subjects condition to "NOT Never Match", as we are not analysing user session data here.  The logic for analysis is handled by a simple script.

PAT Authorization Script

The script is available here.

At a high level it does the following:

  1. Captures the submitted username and PAT that form part of the authorization request
  2. As the user will not have a local session, we need to make a native REST call to look up the user
  3. We do this by first generating a session for our policyeval user
  4. We use that session to call the ../json/users endpoint to perform a search for the user's PATs
  5. We compare the submitted PAT against any PATs found on the user profile
  6. If a match is found, we pull out the assigned permissions and send them back as a response attribute array to the calling application
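Steps 5 and 6 boil down to a simple comparison.  Here's the equivalent in plain Python (not the AM scripting API), assuming PATs are stored as the hash:{permissions-json} strings shown earlier:

```python
import json
from typing import List, Optional

def validate_pat(submitted_pat: str, stored_pats: List[str]) -> Optional[list]:
    """Compare a submitted PAT against the 'hash:{permissions-json}' values
    stored on the user's profile; on a match, return the assigned permissions."""
    for entry in stored_pats:
        token, _, perms_json = entry.partition(":")
        if token == submitted_pat:
            return json.loads(perms_json).get("profile")
    return None
```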

Summary

There are any number of ways to create and use PATs.  Another option could be a custom authentication module that takes the username and hash and performs an authentication test.  The hash in that case would likely need a salt and some other storage protection mechanisms.

A third approach would be to integrate into the OAuth2 world, but this would require a bit more effort, especially with respect to token creation and scope mapping.

Friday, 5 May 2017

SAML2 IDP Automated Certificate Management in FR AM

ForgeRock AM 5.0 ships with Amster, a lightweight command line tool and interactive shell that allows for the automation of many management and configuration tasks.

A common task associated with SAML2 identity provider configs is the updating of certificates used for signing and, possibly, the encryption of assertions.  A feature added in OpenAM 13.0 was the ability to have multiple certificates within an IDP config.  This is useful for overcoming the age old challenge of certificate expiration: an expired cert can break integrations with service providers.  The process of removing and then adding a new certificate would require every entity within the circle of trust to retrieve new metadata into their configs - creating downtime - so the timing of this is often an issue.  Having multiple certificates in the config allows service providers to pull down metadata at a known date, rather than exactly when certificates expire.



Here we see the basic admin view of the IDP config, showing the list of certs available.  These certs are stored in the JCEKS keystore in AM 5.0 (previously the JKS keystore).

So the config contains the am1 and am2 certs - an export of the metadata (from the ../openam/saml2/jsp/exportmetadata.jsp?entityid=idp endpoint) will list both certs that could be used for signing:


The first certificate listed in the config is the one used to sign.  When it expires, just remove it from the list and the second certificate is then used.  As the service provider already has both certs in their originally downloaded metadata, there should be no break in service.

Anyway... back to automation.  Amster can manage the SAML2 entities, either via the shell or a script.  This allows admins to operationally create, edit and update entities - and a regular task could be adding new certificates to the IDP list as necessary.

To do just this, I created a basic bash script that utilises Amster to read, edit and then re-import the entity as a JSON wrapped XML object.

The script is available here.

For more information on IDP certificate management see the docs here.

Thursday, 20 April 2017

Integrating Yubikey OTP with ForgeRock Access Management

Yubico is a manufacturer of multi-factor authentication devices that are typically just USB dongles.  They can provide a range of different MFA options, including traditional static password linking, one-time password generation and integration using FIDO (Fast Identity Online) Universal 2nd Factor (U2F).

I want to quickly show the route to integrating your Yubico Yubikey with ForgeRock Access Management.  ForgeRock and Yubico have had integrations for the last 6 years, but I thought it would be good to give a simple update on integration using OATH compliant OTPs.

First of all you need a Yubikey.  I'm using a Yubikey Nano, which couldn't be any smaller if it tried.  Just make sure you don't lose it...  The Yubikey then needs configuring to generate one time passwords.  This is done using the Yubico personalisation tool - a simple util that works on Mac, Windows and Linux.  Download the tool from Yubico and install.  Setting up the Yubikey for OTP generation is a 3 minute job.  There's even a nice Vimeo on how to do it, if you can't be bothered to RTFM.


This setup process basically generates a secret that is bound to the Yubikey, along with some config.  If you want to use your own secret, just fill in the field... but don't forget it :-)

Next step is to setup ForgeRock AM (aka OpenAM), to use the Yubikey during login.

Access Management has shipped with an OATH compliant authentication module for years - ever since the Sun OpenSSO days.  This module works with any Open Authentication compliant device.
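Under the hood, the module and the device are both computing an HOTP value as defined in RFC 4226.  A minimal Python sketch of that algorithm, just to show what's being agreed on either side:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP - the OATH one time password algorithm."""
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

The counter is the moving factor here, which is why the module needs a profile attribute to track it against the user, as set up below.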

Create a new module instance and add in the fields where you will store the secret and counter against the user's profile.  For quickness (and laziness) I just used employeeNumber and telephoneNumber, as they already ship in the profile schema and weren't being used.  In the "real world" you would add two specific attributes to the profile schema.

Make sure you then copy the secret that the Yubikey personalisation tool created into the user record, within the employeeNumber field...


Next, just add the module to a chain that contains your data store module first.  The data store module isn't essential, but you do need a way to identify the user first, in order to look up their OTP seed in the profile store, so username and password authentication seems the quickest - albeit you could use a persistent cookie if the user had authenticated previously, or maybe even just a username module.


Done.  Next, to use your new authentication service, just augment the authentication URL with the name of the service - in this case yubikeyOTPService. Eg:

../openam/XUI/#login/&authIndexType=service&authIndexValue=yubikeyOTPService

This first asks me for my username and password...


...then my OTP.


At this point, I just insert my Yubikey Nano into my USB port, then touch it for 3 seconds to auto-generate the 6 digit OTP and log me in.  Note the 3 seconds bit is important.  Most Yubikeys have 2 configuration slots; slot 1 is often configured for the Yubico Cloud Service and is activated if you touch the key for only 1 second.  To activate the second configuration - in our case the OTP - just hold a little longer...