Friday, November 11, 2016

Home Realm Discovery in the Apache CXF Fediz IdP

When a client application (secured via either WS-Federation or SAML SSO) redirects a user to the Apache CXF Fediz IdP, the IdP must determine the home realm of the user. If the home realm of the user corresponds to the realm of the IdP, then the IdP can authenticate the user. However, if the home realm does not match that of the IdP, then the IdP has the option to forward the authentication request to a third-party IdP for authentication, if it is configured to do so. In this post, we will look at the different options available in the IdP to determine the home realm of the user.

1) The 'whr' query parameter

When using the WS-Federation protocol, the application can specify the home realm of the user by adding the 'whr' query parameter to the URI that the browser is redirected to. Alternatively, the 'whr' query parameter could be added by a reverse proxy sitting in front of the IdP. Here is an example of such a URI including a 'whr' query parameter:
  • https://localhost:45753/fediz-idp-realmb/federation?wa=wsignin1.0&wtrealm=urn%3Aorg%3Aapache%3Acxf%3Afediz%3Aidp%3Arealm-A&wreply=https%3A%2F%2Flocalhost%3A43618%2Ffediz-idp%2Ffederation&whr=urn:org:apache:cxf:fediz:idp:realm-B&wfresh=10&wctx=c07a5b9a-e270-4855-9201-fc1801851cc9
2) The 'hrds' configuration option in the IdP

If no 'whr' query parameter is available (this will always be the case for SAML SSO), then the IdP attempts to discover the home realm of the user by evaluating the "hrds" property of the IdP. This property is a Spring Expression Language (SpEL) expression that is evaluated on the Spring WebFlow RequestContext.

For an example of how this can be used, let's look at the tests in Fediz for the SAML SSO IdP when redirecting to a trusted third-party IdP. As there is no 'whr' query parameter for SAML SSO, we will instead define a class with a static method that maps application realms to home realms (a sketch of such a class is shown after the configuration below). The application realm is available in the IdP, as the SAML SSO AuthnRequest has already been parsed at this point (it corresponds to the "Issuer" of the AuthnRequest). So we can specify the 'hrds' configuration option in the IdP as follows:
  • <property name="hrds" value="T(org.apache.cxf.fediz.integrationtests.RealmMapper).realms().get(getFlowScope().get('saml_authn_request').issuer)" />
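The RealmMapper class itself lives in the Fediz integration tests. As a rough sketch (the map contents here are made up for illustration - the realm URIs are just the sample realms used elsewhere in this post), it could look something like this:

    import java.util.HashMap;
    import java.util.Map;

    public class RealmMapper {

        // Maps application realms (the "Issuer" of the AuthnRequest)
        // to the home realms of the users of those applications.
        private static final Map<String, String> REALMS = new HashMap<>();
        static {
            REALMS.put("urn:org:apache:cxf:fediz:fedizhelloworld",
                       "urn:org:apache:cxf:fediz:idp:realm-B");
        }

        public static Map<String, String> realms() {
            return REALMS;
        }
    }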
3) Via a form

If no 'whr' query parameter is available, and no 'hrds' configuration option is specified, then the IdP will display a form where the user can select the home realm. The IdP only does this if the configuration option "provideIdpList" is set to "true". If it is set to "false", then the current IdP is assumed to be the home realm IdP, unless the configuration option "useCurrentIdp" is also set to "false", in which case an error is displayed. The form presents the known trusted IdP realms of this IdP, from which the user can select the home realm.


Friday, November 4, 2016

Support for IdP-initiated SAML SSO in Apache CXF

Previous blog posts have covered how to secure your JAX-RS web applications in Apache CXF using SAML SSO. Since the 3.1.8 release, Apache CXF also supports IdP-initiated SAML SSO. The typical use-case for SAML SSO involves the browser invoking on a JAX-RS application, and then being redirected to an IdP for authentication, which subsequently redirects the browser back to the application. However, sometimes a user will log on first to the IdP and then want to invoke on a web application. In this post we will show how to configure SAML SSO for a CXF-based web application to support the IdP-initiated flow, by demonstrating an interop test-case with Okta.

1) Configuring a SAML application in Okta

The first step is to create an account at Okta and configure a SAML application. This process is mapped out at the following link. Follow the steps listed on this page with the following additional changes:
  • Specify the following for the Single Sign On URL and audience URI: http://localhost:8080/fedizdoubleit/racs/sso
  • Specify the following for the default RelayState: http://localhost:8080/fedizdoubleit/app1/services/25
  • Add an additional attribute with name "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" and value "Manager".
The RequestAssertionConsumerService will process the SAML Response from Okta. However, it doesn't know where to subsequently send the browser. Therefore, we are configuring the RelayState parameter to encode the URL of the end application. In addition, our test application requires that the user has a specific role to invoke upon it, hence we add a "Manager" attribute with the URI corresponding to a role.

When the application is configured, you will see an option to "View Setup Instructions". Open this link in a new tab and set it aside for the moment - it contains information required when setting up the web application. Now click on the "People" tab and assign the application to the username that you have created at Okta.

2) Setting up the SAML SSO-enabled web application

We will use a trivial "double it" web application which I wrote previously to demonstrate the SAML SSO capabilities of Apache CXF Fediz. The web application is available here. Build the web application and deploy it in Apache Tomcat. You will need to edit 'webapps/fedizdoubleit/WEB-INF/cxf-service.xml'.

a) SamlRedirectBindingFilter configuration changes

First let's look at the changes which are required to the 'SamlRedirectBindingFilter' (a consolidated sketch of the resulting bean follows the list):
  • Remove "idpServiceAddress" and "assertionConsumerServiceAddress". These aren't required as we are only supporting the IdP-initiated flow.
  • Also remove the "signRequest", "signaturePropertiesFile", "callbackHandler", "signatureUsername" and "issuerId" properties.
  • Add <property name="addWebAppContext" value="false"/>
  • Add <property name="supportUnsolicited" value="true"/>

b) RequestAssertionConsumerService (RACS) configuration changes

Now add the following properties to the "RequestAssertionConsumerService":
  • <property name="supportUnsolicited" value="true"/>
  • <property name="idpServiceAddress" value="..."/>
  • <property name="issuerId" value="http://localhost:8080/fedizdoubleit/racs/sso"/>
  • <property name="parseApplicationURLFromRelayState" value="true"/>
Paste in the value for "idpServiceAddress" from the "Identity Provider Single Sign-On URL" given in the "View Setup Instructions" page referenced above.
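As a sketch, the resulting RACS bean might look like the following, where the "idpServiceAddress" value is a placeholder for your own Okta URL and the "stateProvider" reference is an assumption based on the sample configuration:

    <bean id="consumerService"
        class="org.apache.cxf.rs.security.saml.sso.RequestAssertionConsumerService">
        <property name="stateProvider" ref="stateManager"/>
        <property name="supportUnsolicited" value="true"/>
        <!-- Placeholder: paste in your "Identity Provider Single Sign-On URL" -->
        <property name="idpServiceAddress" value="https://dev-XXXXXX.oktapreview.com/app/XXXXXX/sso/saml"/>
        <property name="issuerId" value="http://localhost:8080/fedizdoubleit/racs/sso"/>
        <property name="parseApplicationURLFromRelayState" value="true"/>
    </bean>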
c) Adding the Okta cert to the RACS truststore

As things stand, validation of the SAML Response from Okta will fail at the RequestAssertionConsumerService, as the RACS will not trust the certificate Okta uses to sign the SAML Response. Therefore we need to import the Okta cert into the truststore of the RACS. Copy the "X.509 Certificate" value from the "View Setup Instructions" page referenced earlier. Create a file called 'webapps/fedizdoubleit/WEB-INF/classes/okta.cert' and paste the certificate contents into this file. Import it into the truststore via:
  • keytool -keystore stsrealm_a.jks -storepass storepass -importcert -file okta.cert
At this point we should be all done. Click on the box for the application you have created in Okta. You should be redirected to the CXF RACS, which validates the SAML Response, and in turn redirects to the application.


Wednesday, November 2, 2016

Client Credentials grant support in the Apache CXF Fediz OIDC service

Apache CXF Fediz ships with a powerful and flexible OpenID Connect (OIDC) service that can be used to implement SSO for your organisation. All of the core OIDC flows are supported - the Authorization Code, Implicit and Hybrid flows. As OIDC is just an identity layer over OAuth 2.0, it's possible to use Fediz as a purely OAuth 2.0 service as well, and all of the authorization grants defined in the spec are also fully supported. In this post we will look at support for one of these authorization grants in Fediz 1.3.1 - the client credentials grant.

1) The OAuth 2.0 client credentials grant

The client credentials grant is used when the client is requesting access to a resource that is owned or controlled by that client. There is no end user in this scenario, unlike, say, the authorization code flow or implicit flow. The client simply calls the token endpoint of the authorization service using "client_credentials" for the "grant_type" parameter. In addition, the client must authenticate (e.g. by supplying the client_id and client_secret parameters). The authorization service authenticates the client and then returns an access token.
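For illustration, such a request might look like the following, where the token endpoint URL and the client credentials are placeholders rather than Fediz defaults:

    # Authenticate as the client via HTTP Basic authentication and request a token
    # (-k skips TLS verification, for a locally deployed test service only)
    curl -k -u myclientid:myclientsecret \
         -d "grant_type=client_credentials" \
         https://localhost:8443/fediz-oidc/oauth2/token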

2) Supporting the client credentials grant in Fediz OIDC

It's easy to support the client credentials grant in the Fediz OIDC service.

a) Add the ClientCredentialsGrantHandler

Firstly, the ClientCredentialsGrantHandler must be added to the list of grant handlers supported by the token service as follows:
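The configuration snippet that belongs here is not reproduced above. As a sketch, and assuming the bean ids used for the data provider and token service in the sample Fediz OIDC configuration, it might look like:

    <bean id="clientCredsHandler"
        class="org.apache.cxf.rs.security.oauth2.grants.clientcred.ClientCredentialsGrantHandler">
        <property name="dataProvider" ref="oauthProvider"/>
    </bean>

    <bean id="oauthTokenService"
        class="org.apache.cxf.rs.security.oauth2.services.AccessTokenService">
        <property name="dataProvider" ref="oauthProvider"/>
        <property name="grantHandlers">
            <list>
                <ref bean="clientCredsHandler"/>
            </list>
        </property>
    </bean>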

b) Add a way of authenticating the client

The next step is to add a way of authenticating the client credentials. Fediz uses JAAS to make it easy for the deployer to plug in different JAAS LoginModules if required. To configure JAAS, you must specify the name of the JAAS LoginModule in the configuration of the OAuthDataProviderImpl:
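The snippet itself is not shown above; as a sketch, assuming the bean id from the sample configuration and a "contextName" property for the JAAS context (an assumption here, not a documented Fediz property name), it might look like:

    <bean id="oauthProvider" class="org.apache.cxf.fediz.service.oidc.OAuthDataProviderImpl">
        <!-- Other properties as per the sample configuration -->
        <!-- Name of the JAAS context defined in the JAAS configuration file -->
        <property name="contextName" value="sts"/>
    </bean>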

c) Example JAAS configuration

For the normal OIDC flows, the Fediz OIDC uses a WS-Federation filter to redirect the browser to the Fediz IdP, where the end user is then ultimately authenticated by the STS that is bundled with Fediz. Therefore it seems like a natural fit to re-use the STS to authenticate the client in the Fediz OIDC. Follow steps (a) and (b) above. Start the Fediz STS, but before starting the OIDC service, specify the "java.security.auth.login.config" system property to point to the following JAAS configuration file:
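The file itself is not reproduced above; a minimal sketch using the CXF STSLoginModule might look like the following, where the context name, the WSDL location and the service/endpoint names are assumptions based on the default Fediz STS:

    sts {
        org.apache.cxf.ws.security.trust.STSLoginModule required
        require.roles="false"
        disable.on.behalf.of="true"
        sts.wsdl.location="https://localhost:${idp.https.port}/fediz-idp-sts/REALMA/STSServiceTransportUT?wsdl"
        sts.service.name="{http://docs.oasis-open.org/ws-sx/ws-trust/200512/}SecurityTokenService"
        sts.endpoint.name="{http://docs.oasis-open.org/ws-sx/ws-trust/200512/}TransportUT_Port";
    };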

You must substitute the correct port for "${idp.https.port}". The STSLoginModule takes the given username and password supplied by the client and uses them to authenticate to the STS.

Wednesday, October 26, 2016

Switching authentication mechanisms in the Apache CXF Fediz STS

Apache CXF Fediz ships with an Identity Provider (IdP) that can authenticate users via either the WS-Federation or SAML SSO protocols. The IdP delegates user authentication to a Security Token Service (STS) web application using the WS-Trust protocol. The STS implementation in Fediz ships with some sample user data for use in the tests. For a real-world scenario, deployers will have to swap the sample data out for an identity backend (such as Active Directory or LDAP). This post will explain how this can be done, with a particular focus on some recent changes to the STS web application in Fediz to make the process easier.

1) The default STS that ships with Fediz

First let's explain a bit about how the STS is configured by default in Fediz to cater for the test cases.

a) Endpoints and user authentication

The STS must define two distinct sets of endpoints to work with the IdP. Firstly, the STS must be able to authenticate the user credentials that are presented to the IdP. Typically this is a Username + Password combination. However, X.509 client certificates and Kerberos tokens are also supported. Note that by default, the STS authenticates usernames and passwords via a simple file local to the STS.

After successful user authentication, a SAML token is returned to the IdP. The IdP then gets another SAML token "on behalf of" the authenticated user for a given realm, authenticating using its own credentials. So we need a second endpoint in the STS to issue this token. By default, the STS requires that the IdP authenticate using TLS client authentication. The security policies are defined in the WSDLs available here.

b) Realms

The Fediz IdP and STS support the concept of authenticating users in different realms. By default, the IdP is configured to authenticate users in "Realm A". This corresponds to a specific endpoint address in the STS. The STS also defines user authentication endpoints in "Realm B" for use in test scenarios involving identity federation between two IdPs.

In addition, the STS defines some configuration to map user identities between realms. In other words, how a principal in one realm should map to another realm, and how the claims in one realm map to those in another realm.
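To give a feel for what such a mapping looks like in code, a custom principal mapper can implement the org.apache.cxf.sts.IdentityMapper interface. The trivial sketch below simply rewrites the principal name - a real deployment would look the mapping up in e.g. LDAP or a database:

    import java.security.Principal;
    import org.apache.cxf.common.security.SimplePrincipal;
    import org.apache.cxf.sts.IdentityMapper;

    public class TrivialIdentityMapper implements IdentityMapper {

        // Map a principal in the source realm to a principal in the target realm
        @Override
        public Principal mapPrincipal(String sourceRealm, Principal sourcePrincipal,
                                      String targetRealm) {
            return new SimplePrincipal(sourcePrincipal.getName() + "@" + targetRealm);
        }
    }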

2) Changing the STS in Fediz 1.3.2 to use LDAP

From the forthcoming 1.3.2 release onwards, the Fediz STS web application is a bit easier to customize for your specific deployment needs. Let's see how easy it is to switch the STS to use LDAP.

a) Deploy the vanilla IdP and STS to Apache Tomcat

To start with, we will deploy the STS and IdP containing the sample data to Apache Tomcat.
  • Create a new directory: ${catalina.home}/lib/fediz
  • Edit ${catalina.home}/conf/catalina.properties and append ',${catalina.home}/lib/fediz/*.jar' to the 'common.loader' property.
  • Copy ${fediz.home}/plugins/tomcat/lib/* to ${catalina.home}/lib/fediz
  • Copy ${fediz.home}/idp/war/* to ${catalina.home}/webapps
  • Download and copy the hsqldb jar (e.g. hsqldb-2.3.4.jar) to ${catalina.home}/lib 
  • Copy idp-ssl-key.jks and idp-ssl-trust.jks from ${fediz.home}/examples/sampleKeys to ${catalina.home}
  • Edit the TLS Connector in ${catalina.home}/conf/server.xml, e.g.: <Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol" maxThreads="150" SSLEnabled="true" scheme="https" secure="true" clientAuth="want" sslProtocol="TLS" keystoreFile="idp-ssl-key.jks" keystorePass="tompass" keyPass="tompass" truststoreFile="idp-ssl-trust.jks" truststorePass="ispass" />
Now start Tomcat and enter the following in a web browser, authenticating with "alice/ecila" in "Realm A". You should be directed to the URL for the default service application (a 404 error, as we have not configured it):

https://localhost:8443/fediz-idp/federation?wa=wsignin1.0&wreply=https%3A%2F%2Flocalhost%3A8443%2Ffedizhelloworld%2Fsecure%2Ffedservlet&wtrealm=urn%3Aorg%3Aapache%3Acxf%3Afediz%3Afedizhelloworld

b) Change the STS authentication mechanism to Active Directory

To simulate an Active Directory instance for demonstration purposes, we will modify some LDAP system tests in the Fediz source that use Apache Directory. Check out the Fediz source and build it via "mvn install -DskipTests". Now go into "systests/ldap" and edit the LDAPTest. "@Ignore" the existing test and uncomment the test which just "sleeps". Also change the "@CreateTransport" annotation to start the LDAP port on "12345" instead of a random port.

Next we'll configure the Fediz STS to use this LDAP instance for authentication. Edit 'webapps/fediz-idp-sts/WEB-INF/cxf-transport.xml'. Change "endpoints/file.xml" to "endpoints/ldap.xml". Next edit 'webapps/fediz-idp-sts/WEB-INF/endpoints/ldap.xml' and just change the port from "389" to "12345".

Now we need to configure a JAAS configuration file, which the STS uses to validate the received Username + Password against LDAP. Copy this file to the "conf" directory of Tomcat, substituting "12345" for "portno" (a sketch of what such a file might contain is shown after the command below). Now restart Tomcat, this time specifying the location of the JAAS configuration file, e.g.:
  • export JAVA_OPTS="-Xmx2048M -Djava.security.auth.login.config=/opt/fediz-apache-tomcat-8.0.37/conf/ldap.jaas"
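The JAAS file referenced above is not reproduced here. Purely as an illustration of the shape of such a file - the choice of the Apache Karaf LDAPLoginModule and the directory layout below are assumptions, not the actual file from the Fediz source - it might look like:

    ldap {
        org.apache.karaf.jaas.modules.ldap.LDAPLoginModule required
        connection.url="ldap://localhost:portno"
        user.base.dn="ou=users,dc=fediz,dc=org"
        user.filter="(uid=%u)"
        role.base.dn="ou=roles,dc=fediz,dc=org"
        role.filter="(member=uid=%u,ou=users,dc=fediz,dc=org)"
        role.name.attribute="cn"
        authentication="simple";
    };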
These are all the changes that are required to swap over to using an LDAP instance for authentication.

Wednesday, September 28, 2016

Securing an Apache Kafka broker - part IV

This is the fourth in a series of articles on securing an Apache Kafka broker. The first post looked at how to secure messages and authenticate clients using SSL. The second post built on the first post by showing how to perform authorization using some custom logic. The third post showed how Apache Ranger could be used instead to create and enforce authorization policies for Apache Kafka. In this post we will look at an alternative authorization solution called Apache Sentry.

1) Build the Apache Sentry distribution

First we will build and install the Apache Sentry distribution. Download Apache Sentry (1.7.0 was used for the purposes of this tutorial). Verify that the signature is valid and that the message digests match. Now extract and build the source and copy the distribution to a location where you wish to install it:
  • tar zxvf apache-sentry-1.7.0-src.tar.gz
  • cd apache-sentry-1.7.0-src
  • mvn clean install -DskipTests
  • cp -r sentry-dist/target/apache-sentry-1.7.0-bin ${sentry.home}
Apache Sentry has an authorization plugin for Apache Kafka, amongst other big data projects. In addition, it comes with an RPC service which stores authorization privileges in a database. For the purposes of this tutorial we will just configure the authorization privileges in a configuration file local to the broker. Therefore we don't need to do any further configuration to the distribution at this point.

2) Configure authorization in the broker

Configure Apache Kafka as per the first tutorial. To enable authorization using Apache Sentry, we need to follow some additional steps. First edit 'config/server.properties' and add:
  • authorizer.class.name=org.apache.sentry.kafka.authorizer.SentryKafkaAuthorizer
  • sentry.kafka.site.url=file:./config/sentry-site.xml
Next copy the jars from the "lib" directory of the Sentry distribution to the Kafka "libs" directory. Then create a new file in the config directory called "sentry-site.xml" with the following content:
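The file contents are not shown above. A sketch consistent with the description that follows might be (the property names here are assumptions based on the Sentry policy-file provider, so check them against your Sentry version):

    <configuration>
        <property>
            <name>sentry.kafka.provider</name>
            <value>org.apache.sentry.provider.file.LocalGroupResourceAuthorizationProvider</value>
        </property>
        <property>
            <name>sentry.kafka.provider.resource</name>
            <value>./config/sentry.ini</value>
        </property>
    </configuration>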

This is the configuration file for the Sentry plugin for Kafka. It essentially says that the authorization privileges are stored in a local file, and that the groups for authenticated users should be retrieved from this file. Finally, we need to specify the authorization privileges. Create a new file in the config directory called "sentry.ini" with the following content:
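Again, the original file contents are missing above. A sketch matching the three sections described next - assuming the SSL principals from the first tutorial and the Sentry privilege syntax of host=...->resource=...->action=... - might be:

    [users]
    CN=Client,O=Apache,L=Dublin,ST=Leinster,C=IE = clientgroup
    CN=Service,O=Apache,L=Dublin,ST=Leinster,C=IE = servicegroup

    [groups]
    clientgroup = clientrole
    servicegroup = servicerole

    [roles]
    clientrole = host=*->topic=test->action=read, host=*->topic=test->action=describe, host=*->consumergroup=*->action=read
    servicerole = host=*->topic=test->action=write, host=*->topic=test->action=describe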

This configuration file contains three separate sections. The "[users]" section maps the authenticated principals to local groups. The "[groups]" section maps the groups to roles, and the "[roles]" section lists the actual privileges. Now we can start the broker as in the first tutorial:
  • bin/kafka-server-start.sh config/server.properties 
3) Test authorization

Now let's test the authorization logic. Start the producer:
  • bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
Send a few messages to check that the producer is authorized correctly. Now start the consumer:
  • bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/consumer.properties --new-consumer
If everything is configured correctly then it should work as in the first tutorial. 

Monday, September 26, 2016

Securing an Apache Kafka broker - part III

This is the third in a series of blog posts about securing Apache Kafka. The first post looked at how to secure messages and authenticate clients using SSL. The second post built on the first post by showing how to perform authorization using some custom logic. However, this approach is not recommended for non-trivial deployments. In this post we will show how we can create flexible authorization policies for Apache Kafka using the Apache Ranger admin UI. Then we will show how to enforce these policies at the broker.

1) Install the Apache Ranger Kafka plugin

The first step is to download Apache Ranger (0.6.1-incubating was used in this post). Verify that the signature is valid and that the message digests match. Now extract and build the source, and copy the resulting plugin to a location where you will configure and install it:
  • tar zxvf apache-ranger-incubating-0.6.1.tar.gz
  • cd apache-ranger-incubating-0.6.1
  • mvn clean package assembly:assembly -DskipTests
  • tar zxvf target/ranger-0.6.1-kafka-plugin.tar.gz
  • mv ranger-0.6.1-kafka-plugin ${ranger.kafka.home}
Now go to ${ranger.kafka.home} and edit "install.properties". You need to specify the following properties:
  • COMPONENT_INSTALL_DIR_NAME: The location of your Kafka installation
  • POLICY_MGR_URL: Set this to "http://localhost:6080"
  • REPOSITORY_NAME: Set this to "KafkaTest".
Save "install.properties" and install the plugin as root via "sudo ./enable-kafka-plugin.sh". The Apache Ranger Kafka plugin should now be successfully installed (although not yet configured properly) in the broker.

2) Configure authorization in the broker

Configure Apache Kafka as per the first tutorial. There are a number of steps we need to follow to configure the Ranger Kafka plugin before it is operational:
  • Edit 'config/server.properties' and add the following: authorizer.class.name=org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer
  • Add the Kafka "config" directory to the classpath, so that we can pick up the Ranger configuration files: export CLASSPATH=$KAFKA_HOME/config
  • Copy the Apache Commons Logging jar into $KAFKA_HOME/libs. 
  • The Ranger plugin will try to store policies by default in "/etc/ranger/KafkaTest/policycache". As we installed the plugin as "root", make sure that this directory is accessible to the user that is running the broker.
Now we can start the broker as in the first tutorial:
  • bin/kafka-server-start.sh config/server.properties
3) Configure authorization policies in the Apache Ranger Admin UI 

At this point we should have configured the broker so that the Apache Ranger plugin is used to communicate with the Apache Ranger admin service to download authorization policies. So we need to install and configure the Apache Ranger admin service. Please refer to this blog post for how to do this. Assuming the admin service is already installed, start it via "sudo ranger-admin start". Open a browser and log on to "localhost:6080" with the credentials "admin/admin".

First let's add some new users that match the SSL principals we created in the first tutorial. Click on "Settings" and "Users/Groups". Add new users for the principals:
  • CN=Client,O=Apache,L=Dublin,ST=Leinster,C=IE
  • CN=Service,O=Apache,L=Dublin,ST=Leinster,C=IE
  • CN=Broker,O=Apache,L=Dublin,ST=Leinster,C=IE
Now go back to the Service Manager screen and click on the "+" button next to "KAFKA". Create a new service called "KafkaTest". Click "Test Connection" to make sure it can communicate with the Apache Kafka broker. Then click "add" to save the new service. Click on the new service. There should be an "admin" policy already created. Edit the policy and give the "broker" principal above the rights to perform any operation, and save the policy. Now create a new policy called "TestPolicy" for the topic "test". Give the service principal the rights to "Consume, Describe and Publish". Give the client principal the rights to "Consume and Describe" only.


4) Test authorization

Now let's test the authorization logic. Bear in mind that by default the Kafka plugin reloads policies from the admin service every 30 seconds, so you may need to wait that long, or restart the broker, to download the newly created policies. Start the producer:
  • bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
Send a few messages to check that the producer is authorized correctly. Now start the consumer:
  • bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --consumer.config config/consumer.properties --new-consumer
If everything is configured correctly then it should work as in the first tutorial.

Friday, September 23, 2016

Integrating Apache Camel with Apache Syncope - part III

This is the third in a series of blog posts about integrating Apache Camel with Apache Syncope. The first post introduced the new Apache Camel provisioning manager that is available in Apache Syncope 2.0.0, and gave an example of how we can modify the default behaviour to send an email to an administrator when a user was created. The second post showed how an administrator can keep track of user password changes for auditing purposes. In this post we will show how to integrate Syncope with Apache ActiveMQ using Camel.

1) The use-case

The use-case is that Apache Syncope is used for Identity Management in a large organisation. When users are created, we would like to be able to gather certain information about the new users and process it dynamically in some way. In particular, we are interested in the age of the new users and the country in which they are based. Perhaps at the reception desk of the company HQ we display a map with the number of employees in each country highlighted. To decouple whatever applications are processing the data from Syncope itself, we will use a messaging solution, namely Apache ActiveMQ. When new users are created, we will modify the default Camel route to send a message to two queues corresponding to the age and location of the user.

2) Download and configure Apache ActiveMQ

The first step is to download Apache ActiveMQ (currently 5.14.0). Unzip it and start it via:
  • bin/activemq start 
Now go to the web interface of ActiveMQ - 'http://localhost:8161/admin/', logging in with credentials 'admin/admin'. Click on the "Queues" tab and create two new queues called 'age' and 'country'.

3) Download and configure Apache Syncope

Download and extract the standalone version of Apache Syncope 2.0.0. Before we start it, we will copy the jars we need to get Camel working with ActiveMQ in Syncope. Copy the following jars into the "webapps/syncope/WEB-INF/lib" directory of the Apache Tomcat instance bundled with Syncope:
  • From $ACTIVEMQ_HOME/lib: activemq-client-5.14.0.jar + activemq-spring-5.14.0.jar + hawtbuf-1.11.jar + geronimo-j2ee-management_1.1_spec-1.0.1.jar
  • From $ACTIVEMQ_HOME/lib/camel: activemq-camel-5.14.0.jar + camel-jms-2.16.3.jar
  • From $ACTIVEMQ_HOME/lib/optional: activemq-pool-5.14.0.jar + activemq-jms-pool-5.14.0.jar + spring-jms-4.1.9.RELEASE.jar
Next we need to create a Camel Spring configuration file containing a bean with the address of the broker. Add the following file to the Tomcat lib directory (called "camelRoutesContext.xml"):
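A minimal version of this file might look like the following, assuming the default ActiveMQ broker URL of tcp://localhost:61616. The bean id "activemq" matters, as it must match the "activemq:" endpoint scheme used in the Camel routes below:

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.springframework.org/schema/beans
               http://www.springframework.org/schema/beans/spring-beans.xsd">

        <!-- Camel component for ActiveMQ, pointing at the local broker -->
        <bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
            <property name="brokerURL" value="tcp://localhost:61616"/>
        </bean>

    </beans>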

Now we can start the embedded Apache Tomcat instance. Open a browser and navigate to 'http://localhost:9080/syncope-console', logging in with 'admin/password'. The first thing we need to do is to configure user attributes for "age" and "country". Go to "Configuration/Types" in the left-hand menu, and click on the "Schemas" tab. Create two plain (mandatory) schema types: "age" of type "Long" and "country" of type "String". Now click on the "AnyTypeClasses" tab and create a new AnyTypeClass, selecting the two plain schema types we just created. Finally, click on the "AnyType" tab and edit the "USER". Add the new AnyTypeClass you created and hit "save".

Now we will modify the Camel route invoked when a user is created. Click on "Extensions/Camel Routes" in the left-hand configuration menu. Edit the "createUser" route and add the following above the "bean method" part:
  • <setBody><simple>${body.plainAttrMap[age].values[0]}</simple></setBody>
  • <to uri="activemq:age"/>
  • <setBody><simple>${exchangeProperty.actual.plainAttrMap[country].values[0]}</simple></setBody>
  • <to uri="activemq:country"/>
This should be fairly straightforward to follow. We are setting the message body to be the age of the newly created User, and dispatching that message to the "age" queue. We then follow the same process for the "country". We also need to change "body" in the "bean method" line to "exchangeProperty.actual", as we have redefined what the body is for each of the Camel routes above.


Now let's create some new users. Click on the "Realms" menu and select the "USER" tab. Create new users "alice" in country "usa" of age "25" and "bob" in country "canada" of age "27". Now let's look at the ActiveMQ console again. We should see two new messages in each of the queues, ready to be consumed.