Tuesday, 15 November 2016

Using OpenAM as a Trusted File Authorization Engine

A common theme in the DevOps world, or any containerised infrastructure, is the need to verify which executables (or files in general) can be installed, run, updated or deleted within a particular environment, image or container.  There are numerous ways this could be done.  Consider a use case where exe files, Android APKs or other third-party compiled files need to be installed or used.

How do you know a file is not malicious, comes from a trusted origin, or hasn't been tampered with?  A simple approach is a whitelisting engine that contains a definition of the particular files that may be installed or run within an environment.  This is where OpenAM can come in.  The policy engine in AM is not just tied to HTTP-based URLs and verbs.  Any arbitrary resource set can be defined, with associated actions and the necessary policies.

But how can we describe a file in the context of a resource?  A common method is to take a hash of the file using a decent, modern algorithm such as SHA-256.  Most Linux distros (and macOS) ship a SHA-256 checksum tool that can be used for this.  But first, we need to create a resource type in AM for our files.


So here we create a simple resource type called "exe", with a pattern that will consist of the SHA-256 sum of the file we want to check.  The actions for each file consist of what the OS could do with the file...


The actions simply map onto basic file tasks such as install, run, delete and so on.
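For reference, the same resource type could be created over REST against the AM resourcetypes endpoint.  This is just a sketch: the hostname, admin token value and exact action names are placeholders taken from this example.

curl -X POST "https://openam.example.com/openam/json/resourcetypes?_action=create" \
  -H "iPlanetDirectoryPro: ${ADMIN_SSO_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "exe",
    "patterns": ["*"],
    "actions": {"install": true, "run": true, "update": true, "delete": true}
  }'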

Next we need to create a set of policies that map to each file we want to add to our whitelist.  The policies are wrapped by a policy set.



This will act as a container for each policy we need to create.  But what does a policy look like?  Each policy will contain the hash, some actions, and will also return the name of the file in the response attributes.




It's important to set the subject condition to "Not Never Match", as we are not performing a user test, so no user session will ever be passed in for evaluation.
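Putting that together, a single whitelist policy created over REST might look something like the sketch below.  The resourceTypeUuid, hash and file name are illustrative, and the payload shape follows the AM 13 policies endpoint.

curl -X POST "https://openam.example.com/openam/json/policies?_action=create" \
  -H "iPlanetDirectoryPro: ${ADMIN_SSO_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "myapp.exe-whitelist",
    "applicationName": "FileWhitelist",
    "resourceTypeUuid": "<uuid-of-the-exe-resource-type>",
    "resources": ["9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"],
    "actionValues": {"install": true, "run": true, "update": false, "delete": false},
    "subject": {"type": "NOT", "subject": {"type": "NONE"}},
    "resourceAttributes": [
      {"type": "Static", "propertyName": "fileName", "propertyValues": ["myapp.exe"]}
    ]
  }'

The subject block above is the "Not Never Match" condition - a NOT wrapping a NONE subject.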


But how can we get all of these policies created and imported into AM? Well, this just requires a little bit of automation on a trusted machine you want to use as the whitelist source. It's pretty simple to take a hash of a file and, in turn, create a JSON payload of the policy definition that can be sent to OpenAM's ../openam/json/policies?_action=create endpoint in a POST request.  It's pretty trivial to script this process and run it against hundreds of files at once.
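A minimal sketch of that automation, assuming the policy shape above and a directory of trusted files (the hostname and token are placeholders):

#!/bin/bash
# Whitelist every file in a directory: hash it, then create one AM policy per file.
for f in /trusted/files/*; do
  sum=$(sha256sum "$f" | awk '{print $1}')   # the SHA-256 sum becomes the resource
  name=$(basename "$f")
  curl -s -X POST "https://openam.example.com/openam/json/policies?_action=create" \
    -H "iPlanetDirectoryPro: ${ADMIN_SSO_TOKEN}" \
    -H "Content-Type: application/json" \
    -d "{\"name\": \"${name}-whitelist\",
         \"applicationName\": \"FileWhitelist\",
         \"resourceTypeUuid\": \"<uuid-of-the-exe-resource-type>\",
         \"resources\": [\"${sum}\"],
         \"actionValues\": {\"install\": true, \"run\": true},
         \"subject\": {\"type\": \"NOT\", \"subject\": {\"type\": \"NONE\"}},
         \"resourceAttributes\": [{\"type\": \"Static\", \"propertyName\": \"fileName\",
                                   \"propertyValues\": [\"${name}\"]}]}"
done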

So, we now have a policy set with hundreds of file definitions.  Now what?  Well, we can now call the policy evaluation endpoint at ../openam/json/policies/?_action=evaluate with any file hash we find, and get a result back from AM that tells us what we can do with the file within the OS or image.
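The evaluation call is a sketch along these lines (hostname and token are again placeholders; the calling account needs policy evaluation privileges):

sum=$(sha256sum /downloads/myapp.exe | awk '{print $1}')
curl -X POST "https://openam.example.com/openam/json/policies?_action=evaluate" \
  -H "iPlanetDirectoryPro: ${AGENT_SSO_TOKEN}" \
  -H "Content-Type: application/json" \
  -d "{\"resources\": [\"${sum}\"], \"application\": \"FileWhitelist\"}"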



The response back from AM is equally as simple:
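A sketch of that response shape (field names follow the AM policy evaluation API; values match this example):

[
  {
    "resource": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    "actions": {
      "install": true,
      "run": true,
      "update": false,
      "delete": false
    },
    "attributes": {
      "fileName": ["myapp.exe"]
    },
    "advices": {}
  }
]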



Here we get a list of actions we can perform against the file - we can install and run it, but not delete or update.

This is another basic example of how modern authorisation is moving away from just modelling HTTP pages and verbs, and towards REST-based policy decision points that can evaluate virtually any resource.

Wednesday, 12 October 2016

Protect OAuth2 Access Tokens Using Proof of Possession

Bearer tokens are the cash of the digital world.  They need to be protected.  Whoever gets hold of them can, well, basically use them as if they were you. Pretty much the same as cash.  The shop owner only really checks that the cash is real; they don't check that the £5 note you produced from your wallet is actually your £5 note.

This has been an age-old issue in web access management technologies, for both stateless and stateful token types: OAuth2 access and refresh tokens, as well as OpenID Connect id tokens.

In the hyper connected Consumer Identity & Access Management (CIAM) and Internet (Identity) of Things worlds, this can become a big problem.

Token misuse, perhaps via MITM (man in the middle) attacks, or even resource server misconfiguration, could result in considerable data compromise.

However, there are some newer standards that look to add some binding ability to the tokens - that is, glue them to a particular user or device based on some simple crypto.

The unstable nightly builds of OpenAM have added a proof of possession capability to the OAuth2 provider service. (Perhaps the first vendor to do so? Email me if you see other implementations...)

The idea is that the client makes a normal request for an access_token from the authorization service (AS), but also adds another parameter to the request containing some crypto material the client has access to - basically the public key of an asymmetric key pair.

This key, which could be ephemeral for that request, is then baked into the access_token.  If the access_token is a JWT, the JWT contains this public key and is then signed by the authorization service.  If using a stateful access_token, the AS token introspection endpoint can relay the public key back to the resource server at look-up time.




This basically gives the RS the option to issue a challenge-response style interaction with the client, to see if it is in possession of the corresponding private key - thus proving it is the correct recipient of the originally issued access_token!



The basic flow sees the addition of a new parameter to the access_token request to the OpenAM authorization service, under the name "cnf_key".  This is a confirmation key that the client is in possession of.  In this example, it is a base64-encoded JSON Web Key representation of a public key.

So, for example, a POST request to the ../openam/oauth2/access_token endpoint would now take the parameters grant_type, scope and also cnf_key, with an authorization header containing the OAuth2 client id and secret as normal.  A cnf_key could look something like this:

eyJqd2siOnsKICAiYWxnIjogIlJTMjU2IiwKICAiZSI6ICJBUUFCIiwKICAibiI6ICJ2TDM0UXh5bXdId1dEOVpWTDljaU42Yk5ybk91NTI0cjdZMzRvUlJXRkpjWjc3S1dXaHB1Si1iSlZXVVNUd3ZKTGdWTWlDZmFxSTZEWnIwNWQ2VGdONTNfMklVWmtHLXgzNnBFbDZZRWs1d1ZnX1ExelFkeEZHZkRoeFBWajJ3TWNNcjFyR0h1UUFEeC1qV2JHeGRHLTJXMXFsVEdQT253SklqYk9wVm1RYUJjNHhSYndqenNsdG1tcndzMmZNTUtNTDVqbnFwR2RoeWRfdXlFTU0wdHpNTGFNSVN2M2lmeFM2UUw3c2tpZTZ5ajJxamxUTUd3QjA4S29ZUEQ2QlVPaXd6QWxkUmJfM3k4bVA2TXY5cDdvQXBheTZCb25pWU8yaVJySzMxUlRaLVlWUHRleTllSWZ1d0ZFc0RqVzNES0JBS21rMlhGY0NkTHEyU1djVWFOc1EiLAogICJrdHkiOiAiUlNBIiwKICAidXNlIjogInNpZyIsCiAgImtpZCI6ICJzbW9mZi1rZXkiCn19Cg==

Running that through base64 -d on bash, or via an online base64 decoder, shows something like the following: (NB this JWK was created using an online tool for simple testing)

{
  "jwk": {
    "alg": "RS256",
    "e": "AQAB",
    "n": "vL34QxymwHwWD9ZVL9ciN6bNrnOu524r7Y34oRRWFJcZ77KWWhpuJ-bJVWUSTwvJLgVMiCfaqI6DZr05d6TgN53_2IUZkG-x36pEl6YEk5wVg_Q1zQdxFGfDhxPVj2wMcMr1rGHuQADx-jWbGxdG-2W1qlTGPOnwJIjbOpVmQaBc4xRbwjzsltmmrws2fMMKML5jnqpGdhyd_uyEMM0tzMLaMISv3ifxS6QL7skie6yj2qjlTMGwB08KoYPD6BUOiwzAldRb_3y8mP6Mv9p7oApay6BoniYO2iRrK31RTZ-YVPtey9eIfuwFEsDjW3DKBAKmk2XFcCdLq2SWcUaNsQ",
    "kty": "RSA",
    "use": "sig",
    "kid": "smoff-key"
  }
}
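Pulled together, the token request might look like the following sketch.  The client credentials, password grant and user details are illustrative; the cnf_key value is the base64 blob above.

curl -X POST "https://openam.example.com/openam/oauth2/access_token" \
  -u "OAuth2Client:client-secret" \
  -d "grant_type=password" \
  -d "username=demo" \
  -d "password=changeit" \
  -d "scope=email" \
  -d "cnf_key=eyJqd2siOnsKICAiYWxnIjogIlJTMjU2Ii...(the base64 encoded JWK above)"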

The authorization service should then return the normal access_token payload.  If using stateless OAuth2 access_tokens, the access_token will contain the new embedded cnf attribute, containing the originally submitted public key.

The client can then present the access_token back to the RS at access time.  The resource server can leverage the public key to perform an out-of-band challenge-response with the client when the access_token is presented later.

If using the more traditional stateful access_tokens, the RS can call the ../oauth2/introspect endpoint, sending in the presented access_token as a parameter, to find the public key.
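A sketch of that introspection call (the credentials the RS uses to authenticate are an assumption):

curl -X POST "https://openam.example.com/openam/oauth2/introspect" \
  -u "OAuth2Client:client-secret" \
  -d "token=7b64b9b3-e4ba-4e0b-b165-a71594f400ed"

The introspected access_token could then look like the following, with the newly added cnf attribute baked within: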

{
  "access_token": "7b64b9b3-e4ba-4e0b-b165-a71594f400ed",
  "grant_type": "password",
  "scope": [
    "email"
  ],
  "realm": "/",
  "cnf": {
    "jwk": {
      "alg": "RS256",
      "e": "AQAB",
      "n": "vL34QxymwHwWD9ZVL9ciN6bNrnOu524r7Y34oRRWFJcZ77KWWhpuJ-bJVWUSTwvJLgVMiCfaqI6DZr05d6TgN53_2IUZkG-x36pEl6YEk5wVg_Q1zQdxFGfDhxPVj2wMcMr1rGHuQADx-jWbGxdG-2W1qlTGPOnwJIjbOpVmQaBc4xRbwjzsltmmrws2fMMKML5jnqpGdhyd_uyEMM0tzMLaMISv3ifxS6QL7skie6yj2qjlTMGwB08KoYPD6BUOiwzAldRb_3y8mP6Mv9p7oApay6BoniYO2iRrK31RTZ-YVPtey9eIfuwFEsDjW3DKBAKmk2XFcCdLq2SWcUaNsQ",
      "kty": "RSA",
      "use": "sig",
      "kid": "smoff-key"
    }
  },
  "token_type": "Bearer",
  "expires_in": 3336,
  "client_id": "OAuth2Client",
  "email": ""
}

The RS can then leverage the cnf value to perform an additional, optional, out-of-band cryptographic challenge with the presenting client, to ascertain that it is in possession of the corresponding private key.

The powerful use case is the ability to validate that the client submitting the access_token is in fact the original recipient from when the access_token was issued.  This can help reduce MITM and other basic token misuse scenarios.

Thursday, 16 June 2016

Blockchain for Identity: Access Request Management

This is the first in a series of blogs that will start to look at some use cases for leveraging blockchain technology in the world of identity and access management.  I don't proclaim to be a blockchain expert and there are several blogs better equipped to tackle that subject, but a good introductory text is the O'Reilly published "Blockchain: Blueprint for a New Economy".

I want to first look at access request management: an age-old issue that has developed substantially in the last 30 years into several sub-industries within the IAM world, with specialist vendors, standards and methodologies.

In the Old Days

Embedded/Local Assertion Management

So this is a typical "standalone" model of access management.  An application manages both users and access control list information within its own boundary.  Each application needs a separate login and access control database. The subject is typically a person, and the object an application with functions and processes.

Specialism & Economies of Scale

So whilst the first example is the starting point - and still exists in certain environments - specialism quickly occurred, with separate processes for identity assertion management and access control list management.



Externalised Identity & ACL Management

So this could be a typical enterprise web access management paradigm.  An identity provider generates a token or assertion, with a policy enforcement process acting as a gatekeeper down into the protected objects.  This works perfectly well for single-domain scenarios, where identity and resource data can be easily controlled.  Scaling is not a major issue here either, as traditionally this approach would sit within the same LAN, for example.

So far so good.  But today, we are starting to see a much more federated and broken-up landscape. Organisations have complex supply chains, with partners, sub-companies and external users all requiring access to previously internal-only objects.  Employees, too, want to access resources in other domains and from as-a-service providers.


Federated Identities


This then creates a much more federated landscape.  Protocols such as SAML2 and OAuth2/OIDC allow identity data from trusted 3rd parties - data not originating from the object's domain - to interact with those resources securely.

Again, from a scaling perspective this tends to work quite well.  The main external interactions tend to be at the identity layer, with access control information still sitting within the object's domain - albeit externalised from the resource itself.

The Mesh and Super-Federation

As the Internet of Things becomes the norm, the increased volume of both subjects and objects creates numerous challenges.  Firstly, the definition of both changes.  A subject will become not just a person, but also a thing and potentially another service.  An object will become not just an application, but an autonomous piece of data, an API or even another subject.  This creates a multi-point set of interactions, with subjects accessing other subjects, APIs accessing APIs, things accessing APIs and so on.

Enter the Blockchain

So where does the blockchain fit into all this?  Well, the main characteristics that can be valuable in this sort of landscape are the decentralised, append-only, globally accessible nature of a blockchain.  The blockchain could be used as an access request warehouse.  This warehouse could contain the output from the access request workflow process, such as this sample of pseudocode:

{"sub":"1234-org2", "obj":"file.dat", "access":"granted", "iss":"tomorrow", "exp":"tomorrow+1", "issuingAuth":"org1", "added":"now"}

This is basic, but it would be hashed and cryptographically secured by a trusted access request manager.  That manager would have the necessary circle-of-trust relationships with the relevant identity and access control managers.

After each access request, an entry would be made to the chain.  Each object would then be able to make a query against the chain, identify all corresponding entries that map to its object set, unionise all entries and work out the necessary access control result.  For example, this would contain all access-granted and access-denied results.


A Blockchain-Enabled Access Request Management Workflow

So What?

So we now have another system and process to manage?  Well, possibly, but this could provide a much more scalable and interoperable model with respect to all the access control decisions that would need to take place to allow an IoT- and API-enabled world.

Each object could have access to any blockchain-enabled node - so there would be massive fault tolerance and elastic scaling.  Each subject would simply present a self-contained assertion.  Today that could be a JWT, or a token within a proof-of-possession framework.  They could collect that from any generator they choose.  Things like authentication and identity validation would not be altered.

Access request workflow management would be abstracted - the same asynchronous processes, approvals and trusted interactions would take place.  The blockchain would simply be an externalised, distributed, secure storage mechanism.

From a technology perspective I don't believe this framework exists, and I will be investigating a proof of concept in this area.

Friday, 10 June 2016

Delegated RBAC CRUD Via Workflow

OpenIDM provides a powerful delegated administration model, for both REST endpoint access and workflow process access.

A simple way to provide scoped access to the IDM functions is to simply wrap a workflow process around them and then delegate access to that workflow to a certain group of users.

A basic example could be that of role-based access control administration: the basic create, read, update and delete tasks often associated with object management.  So, RBAC-CRUD to save a few letters.

Each CRUD function can be wrapped into a workflow, with access to those workflows then given to members of the rbac-admins internal authorization role.

I created five workflows: four for the role-admins and one for the end user:

role-admins: createRole.bar



A simple wrapper that takes two arguments and runs an openidm.create() to create the role.
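Under the hood, that openidm.create() is equivalent to a managed/role create over REST - a sketch, with illustrative credentials and values:

curl -X POST "https://openidm.example.com/openidm/managed/role?_action=create" \
  -H "X-OpenIDM-Username: openidm-admin" \
  -H "X-OpenIDM-Password: openidm-admin" \
  -H "Content-Type: application/json" \
  -d '{"name": "contractors", "description": "Role created via the createRole workflow"}'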

role-admins: deleteRole.bar

The opposite of create... it does a look-ahead, using some JS stored within the form HTML, to get a list of roles that can be deleted.  Before the openidm.delete() function is called, it clears down the role's members list.


role-admins: addRoleToUserTemporal.bar

So we have a role; now we want to add some users.  Again, this does a look-ahead to create a dynamic select drop-down, then free text to add a username.  You could add some checking logic here to make sure the user exists before submission, but I wrap a conditional check in the workflow before I patch the role anyway.

The other attribute is a timer - this is just based on the Activiti timer element, and I've set it to take just a time.  In reality you would accept a date, but for demos a time is much easier.  So, after that time has passed, the initial role-to-user association is reversed, taking the role away.


role-admins: removeRoleFromUser.bar

A simple manual process to remove a role from a user.  Note that all the patches in the workflows work against managed/role.  Whilst you can add and remove roles via managed/user/_id, by using the managed/role endpoint I can restrict the access the role-admins get via access.js more accurately.


openidm-authorized:requestRole.bar

We then have one workflow left, which is available to any user - i.e. it's a standard end-user workflow, this time for an access request.

This again does a look-ahead, and performs an approval step before provisioning the role to the user. The default manager approval is in the workflow, commented out, alongside the ability to use any member of the role-admins authorization role - so you can flip between the two approval journeys.

The use of role-admins leverages the Activiti candidate users attribute - e.g. role-admins could contain 10 users; the approval goes to all 10, and the first one to claim the task can approve.



A couple of points on access.  Workflow access is governed by the ../conf/process-access.json file.  In there, add the pattern of the workflow _id along with the internal authorization roles that should have access - note, internal roles, not just managed/role.
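A sketch of what those process-access.json entries could look like (the _id patterns here are illustrative):

{
    "workflowAccess" : [
        {
            "propertiesCheck" : {
                "property" : "_id",
                "matches" : "createRole.*",
                "requiresRole" : "rbac-admins"
            }
        },
        {
            "propertiesCheck" : {
                "property" : "_id",
                "matches" : "requestRole.*",
                "requiresRole" : "openidm-authorized"
            }
        }
    ]
}

Each of the four admin workflows would get an entry like the first one.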

The access.js file in the ../script directory also needs updating, to allow the role-admins users full control over the managed/role endpoint.

Code for this set is available here.

Note: thanks should also go to Marek Detko, and a code crib from his role collection example.

Wednesday, 1 June 2016

Workflow Approval Via Encrypted Email Links

A common workflow process is often the access request scenario - a user requires access to something, and that something requires an approval before the provisioning can be completed. Typically this is done via notifications, a dashboard and perhaps an email notifying the approver that they have a task that requires their attention.

However, what if the approver doesn't want to, or cannot, access their dashboard to approve the request?  An alternative is to embed workflow approval questions into an email, with fully self-contained links that contain encrypted payloads to approve or reject the request. (NB a further extension to this is being able to respond to workflow requests directly via email/SMTP.)

A way to do this is to simply send an email during workflow instantiation that contains links to approve or reject the request.  But how can those links be securely created, to avoid tampering, replay and misuse?  There are a few neat features in the ForgeRock platform that can come to the rescue. (NB this assumes that the email traffic/account is secure, which might not be the case...!)

My use case looks something like this:
  1. Helpdesk operator requests access to impersonate an end user for a set period of time - say 5 minutes
  2. The end user will receive an email notification with two links - Approve or Reject
  3. Each link will go to one of two specific OpenIDM custom endpoints - ../endpoint/approveImpersonation or ../endpoint/rejectImpersonation
  4. Each endpoint will take a ?payload= argument that will contain an encrypted value
  5. That encrypted value will contain the end user's Id and a unique reference to their workflow task instance
  6. As every request into OpenIDM needs to be authenticated, we'll route the request via OpenIG to add in an authentication header
  7. The custom endpoints will verify the payload, decrypt, find the appropriate workflow task instance and complete the workflow request task
  8. If approved... the workflow will provision the end user's Id into the helpdesk operator's account, under an attribute called impersonationId
  9. The workflow will then suspend and return n minutes later, based on the time selected in step 1, and deprovision the attribute from step 8.

The architecture at a high level looks something like this:




The main element of this is the workflow.  This is a simple access request style workflow, with two interesting components. The first is the sending of an email with the two links, both of which are encrypted using the openidm.encrypt function. The second is a time boundary that removes any changes the workflow makes after a selected time window.  The encrypted email payload contains a unique reference that is attached to the task instance.  The unique reference is created using the openidm.hash function, which takes the requester Id, the requestee Id and the current time in ms.


To trigger the workflow, the end user and a time element are entered.



This triggers the sending of the notification email with the two links.

The end user simply selects the appropriate link.  The link automatically redirects via IG, which adds the appropriate authentication headers, and completes against the OpenIDM endpoint.

The endpoint verifies the payload argument exists, checks the encrypted value is intact, decrypts it, finds the appropriate workflow task, compares the two hashed verification codes and completes the appropriate approve or reject action.
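So the link the end user clicks boils down to something like the sketch below - the IG host and the encrypted blob are placeholders, and IG injects the authentication headers (e.g. X-OpenIDM-Username / X-OpenIDM-Password) on the way through:

curl "https://openig.example.com/openidm/endpoint/approveImpersonation?payload=<encrypted-blob>"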

The workflow then contains an intermediate timer event that is used to act as a stopwatch - to basically reverse any changes that are made, acting like a temporal condition.

<intermediateCatchEvent id="timer">
  <timerEventDefinition>
    <timeDuration>PT${lengthOfImpersonation}M</timeDuration>
  </timerEventDefinition>
</intermediateCatchEvent>

The length variable is taken from the submitted workflow form.  After the timeDuration has completed, a simple patch removes any values provisioned to the user.

        <scriptTask name="Cleanup User" id="cleanupRequestingUser" >
            <script>
                
                queryParams = ["_queryFilter": '/userName eq "'+startUserId+'"']
                userToPatch = openidm.query("managed/user", queryParams)

                patchParams = [[operation:'replace', field: 'idToImpersonate', value : ""]]
                openidm.patch('managed/user/'+userToPatch.result[0]._id, null, patchParams)
            </script>
        </scriptTask>

The code for the above sample is available here.

Friday, 13 May 2016

Federated Authorization Using 3rd Party JWTs

Continuing the theme of authorization from recent blogs, I've seen several emerging requirements for what you could describe as federated authorization using an offline assertion - offline in the sense that the policy decision point (PDP) has no prior or post knowledge of the calling user.  All of the subject information and context is self-contained in the PDP evaluation request, e.g. a request that uses a JSON Web Token.

A common illustration could be where you have distinct domains or operational boundaries that exist between the assertion issuer and the protected resources. An example could be being able to post a tweet on Twitter with only your Facebook account, with no Twitter profile at all.

A neat feature of OpenAM is the ability to perform policy decision actions without having prior knowledge of the subject - or, in fact, without the subject having a profile in the AM user store at all.  To do this requires a few neat steps.

Firstly, let me create a resource type - for interest, I'll make a non-URL-based resource, based on gaining access to a meeting room.


For my actions, I'll add in some activities you could perform within a meeting room...


Next step is to add in a policy set for my Meeting Room #1 and a policy to allow my External Users access to it.


My subjects tab for my policy is the first slight difference from a normal OpenAM policy.  My users accessing the meeting are external, so they will not have a session or entry in the OpenAM profile store. So instead of looking for authenticated users, I switch to checking for presented claims.  I add in three claims: one to check the issuer (obviously only trusted issuers are important to me... though at this step we're not verifying the issuer - that comes later), one for the audience, and a claim called Role.  Note the claims checks here are simple string comparisons, not wildcards, and no signature checks have been done yet.

I next add in some actions that my external users can perform against my meeting room.  As they're managers, I add in the ability to order food - but they can't use the whiteboard!


So far pretty simple.  However, there is one big thing we haven't done: verify the presented JWT.  The JWT should be signed by the 3rd party IDP in order to prove the authenticity of the initial authentication.  For further info on JWT structure see RFC 7519 - but basically there are three components: a header, a payload and a signature.  The header contains algorithm and data structure information, the payload contains the user claims, and the signature is a crypto element.  Together these form a base64-encoded, dot-delimited string.  However... we need to verify the JWT is from the issuer we trust.  To do this I create a scripted policy condition that verifies the signature.


This simply calls either a Groovy or JavaScript script that I create in the OpenAM UI or upload over REST.


The script basically checks that a JWT is present in the REST PDP call, strips out the various components and creates a corresponding signature based on a shared secret.  If the reconstructed signature matches the submitted JWT signature, we're in business.
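Outside of AM, that shared-secret check can be sketched in a few lines of bash for an HS256-signed JWT (the AM-side script does the equivalent using the ForgeRock libraries):

JWT="<header>.<payload>.<signature>"
SECRET="sharedsecret"
HEADER_PAYLOAD="${JWT%.*}"   # everything before the signature component
SIGNATURE="${JWT##*.}"       # the signature component
# Recreate the signature over header.payload and base64url encode it
EXPECTED=$(printf '%s' "$HEADER_PAYLOAD" | \
  openssl dgst -sha256 -hmac "$SECRET" -binary | base64 | tr '+/' '-_' | tr -d '=')
[ "$SIGNATURE" = "$EXPECTED" ] && echo "signature verified" || echo "verification failed"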

The script calls in the ForgeRock JSON, JOSE, JWS and JWT libraries that are already used throughout the product, so we're not having to recreate anything new here.

To test the entire flow, you need to create a JWT with the appropriate claims from a 3rd party IDP. There are lots of online generators that can do this.  I used this one to build my JWT.


Note the selection of the algorithm and key.  The key is needed in the script on the AM side.

I can now take my newly minted JWT and make the appropriate REST call into OpenAM.


The call sends a request into ../json/policies?_action=evaluate with my payload of the resource I'm trying to access and my JWT (note this is currently submitted both within the subject.jwt attribute and also the environment map, due to OPENAM-8893).  In order to make the call - remember, my subject doesn't have a session within OpenAM - I create a service account called policyEvaluator that calls the REST endpoint with the appropriate privileges.
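A sketch of that evaluation call (the resource and policy set names follow this example; the policyEvaluator session token comes from authenticating that service account first):

curl -X POST "https://openam.example.com/openam/json/policies?_action=evaluate" \
  -H "iPlanetDirectoryPro: ${POLICY_EVALUATOR_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
    "resources": ["meetingRoom1"],
    "application": "MeetingRooms",
    "subject": {"jwt": "<the-minted-jwt>"},
    "environment": {"jwt": ["<the-minted-jwt>"]}
  }'

Note the JWT appearing in both the subject and the environment map, per OPENAM-8893 above.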

A successful call results in access to the meeting room, once my JWT has been verified correctly:


If the signature verification fails I am given an advice message:


Code for the policy script is available here.

NB: the appropriate classes, and also the primitive byte[], need to be added to the Java whitelist for the policy engine, within the global configuration.

Thursday, 3 March 2016

In-flight Authorization Management

Access request, or authorization, management is far from new.  The classic use case is a workflow process that, via approval, updates a profile or account with a persisted attribute/group/permission in a target system.  At run time, when a user attempts to perform an action on the target system, the system locally checks the user's profile and looks for the particular attributes that have been persisted.

A slight variation on this theme is to provide a mechanism to alter (or at least request to alter) the persisted permissions at near run time.  An example of this is to leverage OAuth2 and a tokeninfo endpoint that can convert an access_token into scope values, which are used by the resource server to handle local authorization.  Dependent on the content of the scope values, the resource server could provide a route for those persisted entries to be updated - aka an access request.


In the above example, we have a standard OAuth2 client-server relationship on the right-hand side - it just so happens we're also using the device flow pin-and-pair paradigm that is described here. Ultimately the TV application retrieves user data using OAuth2.  One of the attributes we send back to the TV is called waterShedContent - a boolean value that governs whether the user can access post-9pm TV shows or not.  If the value is false, the TV player does not allow access - but it does then provide a link into OpenIDM, which can trigger a workflow to request access.

The above flow goes something like this:

  1. The user performs OAuth2 consent to allow the TV player access to certain profile attributes (0 is just the onboarding process for the TV via pin/pair, for example)
  2. OpenAM retrieves static profile data, such as the waterShedContent attribute, and makes it available via the ../tokeninfo endpoint, accessible using the OAuth2 access_token
  3. The client interprets the data received from the ../tokeninfo endpoint to perform local authorization (if waterShedContent == true || false, for example), providing a link into OpenIDM that can trigger an access request
  4. The BPMN workflow in IDM searches for an approver and assigns them a basic boolean gateway workflow - either allow or deny.  An allow triggers an openidm.patch that updates the necessary attribute, which is then stored in OpenDJ
  5. The updated attribute is then made available via the ../tokeninfo endpoint again - perhaps via a refresh_token flow - and picked up by the client (see the sketch below)
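The tokeninfo look-up in steps 2 and 5 is a plain GET with the access_token (a sketch; the hostname is a placeholder and the attribute is from this example):

curl "https://openam.example.com/openam/oauth2/tokeninfo?access_token=${ACCESS_TOKEN}"

The response contains the granted scope values - including something like "waterShedContent": "false" - which the client uses for its local authorization check.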
Triggering a remote workflow (step 3) is pretty trivial - simply call /openidm/workflow/processinstance?_action=create with the necessary workflow you want to trigger.  To work out who to assign the workflow to, I leveraged the new relationship management feature of IDM and used the execution.setVariable('approver', approver) function within the workflow.  The approver was simply an attribute within my initial user object that I set when I created my managed object.
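A sketch of that trigger call (the process key, credentials and variables are illustrative):

curl -X POST "https://openidm.example.com/openidm/workflow/processinstance?_action=create" \
  -H "X-OpenIDM-Username: tv-service" \
  -H "X-OpenIDM-Password: password" \
  -H "Content-Type: application/json" \
  -d '{"_key": "accessRequest", "userId": "alice", "requestedAttribute": "waterShedContent"}'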

The PoC-level TV player, with the necessary OAuth2 and workflow request code, is available here.

Wednesday, 3 February 2016

Set Top Box Emulator and OAuth2 Device Flow

This is really an extension of a blog I did in October 2015 - Device Authorization using OAuth2 and OpenAM - with an application written in Node.js using the newly released OpenAM 13.0.

The basic flow hasn't really changed. Ultimately there is a client - the TV emulator - that communicates with OpenAM and the end user, with the end user also performing out-of-band operations via a device with better UI capabilities - aka a tablet or laptop.


The app boots and initiates a request to OpenAM to get a unique user and device code, prompting the user to hit a specific URL on their tablet.
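That initial request is a simple POST (a sketch - the endpoint and parameter names follow the pre-standard device flow draft that AM 13 implemented, so treat them as assumptions to verify against your version):

curl -X POST "https://openam.example.com/openam/oauth2/device/code" \
  -d "response_type=device_code" \
  -d "client_id=tv-emulator" \
  -d "scope=profile"

The response carries the device_code the client keeps hold of, plus the user_code and verification URL shown to the user.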


The user authenticates with OpenAM as necessary, enters the code and performs a consent dance to approve the request from the TV to be paired and to retrieve data from the user's profile - in this case, overloading the postaladdress attribute in DJ to store favourite channel data.



In the meantime, the TV client performs a polling operation - checking with the OpenAM authorization service to see if the end user has entered the correct user_code and approved the request.  Once complete, the TV retrieves a typical OAuth2 bearer payload, including refresh_token and access_token values that can be used to retrieve the necessary attributes.
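The poll itself is a token request against the normal token endpoint (again a sketch; the grant_type string and parameter name AM used for the pre-RFC draft are assumptions worth checking against your version):

curl -X POST "https://openam.example.com/openam/oauth2/access_token" \
  -d "grant_type=urn:ietf:params:oauth:grant-type:device_code" \
  -d "client_id=tv-emulator" \
  -d "device_code=${DEVICE_CODE}"

Until the user approves, the response is an authorization-pending style error; once approved, it's the bearer payload just described.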



Future requests from the TV now no longer need to request password or authorization data.  By leveraging a long-lived refresh_token, access can be managed centrally.


For more information on OAuth2 Device Flow see here.