Channel: Arjan Tijms' Weblog

Header based stateless token authentication for JAX-RS

Authentication is a topic that comes up often for web applications. The Java EE spec supports authentication for those via the Servlet and JASPIC specs, but doesn't say too much about how to authenticate for JAX-RS.

Luckily JAX-RS is simply layered on top of Servlets, and one can therefore just use JASPIC's authentication modules for the Servlet Container Profile. There's thus not really a need for a separate REST profile, as there is for SOAP web services.

While using the same basic technologies as authentication modules for web applications, the requirements for modules that are to be used for JAX-RS are a bit different.

JAX-RS is often used to implement an API that is used by scripts. Such scripts typically do not engage in an authentication dialog with the server, i.e. it's rare for an API to redirect to a form asking for credentials, let alone to ask to log in with a social provider.

An even more fundamental difference is that in web apps it's commonplace to establish a session for, among other things, authentication purposes. While it's possible to do this for JAX-RS as well, it's not exactly a best practice. RESTful APIs are supposed to be fully stateless.

To prevent the need for going into an arbitrary authentication dialog with the server, it's typical for scripts to send their credentials upfront with a request. For this BASIC authentication can be used, which does actually initiate a dialog, albeit a standardised one. Another option is to provide a token as either a request parameter or as an HTTP header. It should go without saying that in both these cases all communication should be done exclusively via HTTPS.

Preventing a session from being created can be done in several ways as well. One way is to store the authentication data in an encrypted cookie instead of storing that data in the HTTP session. While this surely works, it does feel somewhat weird to "blindly" accept the authenticated identity from what the client provides. If the encryption is strong enough it *should* be okay-ish, but still. Another method is to quite simply authenticate again with each request. This however has its own problem, namely the potential for bad performance. An in-memory user store will likely be very fast to authenticate against, but anything involving an external system like a database or LDAP server probably is not.
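The encrypted cookie variant can be sketched with the JDK's own crypto APIs. This is purely illustrative (class name and all details are mine, not from any library), and a real implementation would additionally need a fixed server-side key, an expiry timestamp inside the sealed value, and key rotation:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Base64;

// Illustrative sketch: seal the authenticated identity into a cookie value
// with AES-GCM, which both encrypts and authenticates the data, so a
// tampered cookie fails to unseal instead of yielding a forged identity.
public class CookieSealer {

    private final SecretKey key;
    private final SecureRandom random = new SecureRandom();

    public CookieSealer() {
        try {
            KeyGenerator generator = KeyGenerator.getInstance("AES");
            generator.init(128);
            key = generator.generateKey(); // in practice a fixed, server-side key
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public String seal(String identity) {
        try {
            byte[] iv = new byte[12];
            random.nextBytes(iv); // fresh IV per cookie
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] cipherText = cipher.doFinal(identity.getBytes(StandardCharsets.UTF_8));
            byte[] message = new byte[iv.length + cipherText.length];
            System.arraycopy(iv, 0, message, 0, iv.length);
            System.arraycopy(cipherText, 0, message, iv.length, cipherText.length);
            return Base64.getEncoder().encodeToString(message);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public String unseal(String cookieValue) {
        try {
            byte[] message = Base64.getDecoder().decode(cookieValue);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, message, 0, 12));
            byte[] plain = cipher.doFinal(message, 12, message.length - 12);
            return new String(plain, StandardCharsets.UTF_8);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The decryption cost hinted at above is exactly the per-request price this approach pays.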

The performance problem of authenticating with each request can be mitigated though by using an authentication cache. The question then is whether this isn't really the same as creating a session.

While both an (http) session and a cache consume memory at the server, a major difference between the two is that a session is a store for all kinds of data, which includes state, but a cache is only about data locality. A cache is thus by definition never the primary source of data.

What this means is that we can throw data away from a cache at arbitrary times, and the client won't know the difference except for the fact that its next request may be somewhat slower. We can't really do that with session data. Setting a hard limit on the size is thus a lot easier for a cache than it is for a session, and it's not mandatory to replicate a cache across a cluster.
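The eviction property that distinguishes a cache from a session can be sketched with a simple size-bounded LRU map. This is an illustration only (not OmniSecurity or Infinispan code), and the class name and bound are made up:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of a size-bounded LRU authentication cache. Unlike a
// session, an entry may be evicted at any time; the only consequence is
// that the next request with that token re-authenticates.
public class AuthCache<K, V> extends LinkedHashMap<K, V> {

    private final int maxEntries;

    public AuthCache(int maxEntries) {
        super(16, 0.75f, true); // access order, which makes eviction LRU
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evicting here is safe: the authoritative data lives in the user
        // store; the cache is only about data locality.
        return size() > maxEntries;
    }
}
```

A session store could never silently drop entries like this, which is precisely why a hard size limit is easy for a cache and hard for a session.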

Still, as with many things it's a trade-off: having zero data stored at the server, but having a cookie sent along with the request and needing to decrypt it every time (which for strong encryption can be computationally expensive), or having some data at the server (in a very manageable way), but without the uneasiness of directly accepting an authenticated state from the client.

Here we'll be giving an example of a general stateless auth module that uses header based token authentication and authenticates with each request. This is combined with an application level component that processes the token and maintains a cache. The auth module is implemented using JASPIC, the Java EE standard SPI for authentication. The example uses a utility library that I'm incubating called OmniSecurity. This library is not a security framework itself, but provides several convenience utilities for the existing Java EE security APIs (like OmniFaces does for JSF and Guava does for Java).

One caveat is that the example assumes CDI is available in an authentication module. In practice this is the case when running on JBoss, but not when running on most other servers. Another caveat is that OmniSecurity is not yet stable or complete. We're working towards a 1.0 version, but the current version 0.6-ALPHA is, as the name implies, just an alpha version.

The module itself looks as follows:


public class TokenAuthModule extends HttpServerAuthModule {

    private final static Pattern tokenPattern = compile("OmniLogin\\s+auth\\s*=\\s*(.*)");

    @Override
    public AuthStatus validateHttpRequest(HttpServletRequest request, HttpServletResponse response, HttpMsgContext httpMsgContext) throws AuthException {

        String token = getToken(request);
        if (!isEmpty(token)) {

            // If a token is present, authenticate with it whether this is strictly required or not.

            TokenAuthenticator tokenAuthenticator = getReferenceOrNull(TokenAuthenticator.class);
            if (tokenAuthenticator != null) {

                if (tokenAuthenticator.authenticate(token)) {
                    return httpMsgContext.notifyContainerAboutLogin(tokenAuthenticator.getUserName(), tokenAuthenticator.getApplicationRoles());
                }
            }
        }

        if (httpMsgContext.isProtected()) {
            return httpMsgContext.responseNotFound();
        }

        return httpMsgContext.doNothing();
    }

    private String getToken(HttpServletRequest request) {
        String authorizationHeader = request.getHeader("Authorization");
        if (!isEmpty(authorizationHeader)) {

            Matcher tokenMatcher = tokenPattern.matcher(authorizationHeader);
            if (tokenMatcher.matches()) {
                return tokenMatcher.group(1);
            }
        }

        return null;
    }

}
Below is a quick primer on Java EE's authentication modules:
A server auth module (SAM) is not entirely unlike a servlet filter, albeit one that is called before every other filter. Just like a servlet filter it's called with an HttpServletRequest and HttpServletResponse, is capable of including and forwarding to resources, and can wrap both the request and the response. A key difference is that it also receives an object via which it can pass a username and optionally a series of roles to the container. These will then become the authenticated identity, i.e. the username that is passed to the container here will be what HttpServletRequest.getUserPrincipal().getName() returns. Furthermore, a server auth module doesn't control the continuation of the filter chain by calling or not calling FilterChain.doFilter(), but by returning a status code.

In the example above the authentication module extracts a token from the request. If one is present, it obtains a reference to a TokenAuthenticator, which does the actual authentication of the token and provides a username and roles if the token is valid. It's not strictly necessary to have this separation and the authentication module could just as well contain all required code directly. However, just like the separation of responsibilities in MVC, it's typical in authentication to have a separation between the mechanism and the repository. The first contains the code that does interaction with the environment (aka the authentication dialog, aka authentication messaging), while the latter doesn't know anything about an environment and only keeps a collection of users and roles that are accessed via some set of credentials (e.g. username/password, keys, tokens, etc).

If the token is found to be valid, the authentication module retrieves the username and roles from the authenticator and passes these to the container. Whenever an authentication module does this, it's supposed to return the status "SUCCESS". By using the HttpMsgContext this requirement is largely made invisible; the code just returns whatever HttpMsgContext.notifyContainerAboutLogin returns.

If authentication did not happen for whatever reason, it depends on whether the resource (URL) that was accessed is protected (requires an authenticated user) or is public (does not require an authenticated user). In the first situation we always return a 404 to the client. This is a general security precaution. According to HTTP we should actually return a 403 here, but if we did, users could attempt to guess what the protected resources are. For applications where it's already clear what all the protected resources are, it would make more sense to indeed return that 403. If the resource is a public one, the code "does nothing". Since authentication modules in Java EE need to return something, and there's no status code that indicates nothing should happen, in fact doing nothing requires a tiny bit of work. Luckily this work is largely abstracted by HttpMsgContext.doNothing().

Note that the TokenAuthModule as shown above is already implemented in the OmniSecurity library and can be used as is. The TokenAuthenticator however has to be implemented by user code. An example of an implementation is shown below:


@RequestScoped
public class APITokenAuthModule implements TokenAuthenticator {

    @Inject
    private UserService userService;

    @Inject
    private CacheManager cacheManager;

    private User user;

    @Override
    public boolean authenticate(String token) {
        try {
            Cache<String, User> usersCache = cacheManager.getDefaultCache();

            User cachedUser = usersCache.get(token);
            if (cachedUser != null) {
                user = cachedUser;
            } else {
                user = userService.getUserByLoginToken(token);
                usersCache.put(token, user);
            }
        } catch (InvalidCredentialsException e) {
            return false;
        }

        return true;
    }

    @Override
    public String getUserName() {
        return user == null ? null : user.getUserName();
    }

    @Override
    public List<String> getApplicationRoles() {
        return user == null ? emptyList() : user.getRoles();
    }

    // (Two empty methods omitted)
}
This TokenAuthenticator implementation is injected with both a service to obtain users from, as well as a cache instance (Infinispan was used here). The code simply checks if a User instance associated with a token is already in the cache, and if it's not, gets it from the service and puts it in the cache. The User instance is subsequently used to provide a user name and roles.
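For completeness, the TokenAuthenticator contract that such a class implements can be inferred from the calls the auth module makes on it. The sketch below is that inference only; the actual interface in OmniSecurity may differ in name and detail:

```java
import java.util.List;

// Shape of the TokenAuthenticator contract as inferred from how the
// TokenAuthModule uses it; the real OmniSecurity interface may differ.
public interface TokenAuthenticator {

    // Returns true when the token maps to a known user.
    boolean authenticate(String token);

    // Only meaningful after authenticate() has returned true.
    String getUserName();

    List<String> getApplicationRoles();
}
```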

Installing the authentication module can be done during startup of the container via a Servlet context listener as follows:


@WebListener
public class SamRegistrationListener extends BaseServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        Jaspic.registerServerAuthModule(new TokenAuthModule(), sce.getServletContext());
    }
}
After installing the authentication module as outlined in this article in a JAX-RS application, it can be tested as follows:

curl -vs -H "Authorization: OmniLogin auth=ABCDEFGH123" https://localhost:8080/api/foo

As shown in this article, adding an authentication module for JAX-RS that's fully stateless and doesn't store an authenticated state on the client is relatively straightforward using Java EE authentication modules. Big caveats are that the most straightforward approach uses CDI which is not always available in authentication modules (in WildFly it's available), and that the example uses the OmniSecurity library to simplify some of JASPIC's arcane native APIs, but OmniSecurity is still only in an alpha status.

Arjan Tijms


OmniFaces 2.0 RC2 available for testing

After an intense debugging session following the release of OmniFaces 2.0, we have decided to release one more release candidate; OmniFaces 2.0 RC2.

For RC2 we mostly focused on TomEE 2.0 compatibility. Even though TomEE 2.0 is only available in a SNAPSHOT release, we're happy to see that it passed almost all of our tests and was able to run our showcase application just fine. The only place where it failed was with the viewParamValidationFailed page, but this appeared to be an issue in MyFaces and unrelated to TomEE itself.

To repeat from the RC1 announcement: OmniFaces 2.0 is the first release that will depend on JSF 2.2 and CDI 1.1 from Java EE 7. Our Servlet dependency is now Servlet 3.0 from Java EE 6 (used to be 2.5, although we optionally used 3.0 features before). The minimal Java SE version is now Java 7.

A full list of what's new and changed is available here.

OmniFaces 2.0 RC2 can be tested by adding the following dependency to your pom.xml:


<dependency>
<groupId>org.omnifaces</groupId>
<artifactId>omnifaces</artifactId>
<version>2.0-RC2</version>
</dependency>

Alternatively the jar files can be downloaded directly.

We're currently investigating one last issue, if that's resolved and no other major bugs appear we'd like to release OmniFaces 2.0 at the end of this week.

Arjan Tijms

OmniFaces 2.0 released!

After a poll regarding the future dependencies of OmniFaces 2.0 and two release candidates, we're proud to announce that today we've finally released OmniFaces 2.0.

OmniFaces 2.0 is a direct continuation of OmniFaces 1.x, but has started to build on newer dependencies. We also took the opportunity to do a little refactoring here and there (specifically noticeable in the Events class).

The easiest way to use OmniFaces is via Maven by adding the following to pom.xml:


<dependency>
<groupId>org.omnifaces</groupId>
<artifactId>omnifaces</artifactId>
<version>2.0</version>
</dependency>

A detailed description of the biggest items of this release can be found on the blog of BalusC.

One particular new feature not mentioned there is a new capability that has been added to <o:validateBean>: class level bean validation. While JSF core and OmniFaces both have had a validateBean for some time, one thing it curiously did not do, despite its name, is actually validate a bean. Instead, those existing versions just controlled various aspects of bean validation. Bean validation itself was then only applied to individual properties of a bean, namely those that were bound to input components.

With OmniFaces 2.0 it's now possible to specify that a bean should be validated at the class level. The following gives an example of this:


<h:inputText value="#{bean.product.item}" />
<h:inputText value="#{bean.product.order}" />

<o:validateBean value="#{bean.product}" validationGroups="com.example.MyGroup" />

Using the existing bean validation integration of JSF, only product.item and product.order can be validated, since these are the properties that are directly bound to an input component. Using <o:validateBean> the product itself can be validated as well, and this will happen at the right place in the JSF lifecycle, namely the "process validations" phase. True to the way JSF works, if validation fails the actual model will not be updated. In order to prevent this update, class level bean validation is performed on a copy of the actual product (with a plug-in structure to choose between multiple ways to copy the model object).
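The "validate a copy, update only on success" idea can be sketched in plain Java. All names below are illustrative (this is not OmniFaces code), with the real constraint check stood in for by a simple predicate:

```java
import java.util.function.Predicate;

// Sketch of the "validate a copy" idea behind class level bean validation:
// validation runs against a copy of the model object, so a failed
// validation never leaves a half-updated model. All names illustrative.
public class CopyingValidation {

    public static class Product {
        public String item;
        public String order;

        public Product copy() {
            Product copy = new Product();
            copy.item = item;
            copy.order = order;
            return copy;
        }
    }

    // Applies the edits to the model only when the edited copy is valid,
    // mirroring what happens in JSF's "process validations" phase.
    public static boolean applyIfValid(Product model, Product editedCopy, Predicate<Product> validator) {
        if (!validator.test(editedCopy)) {
            return false; // model stays untouched
        }
        model.item = editedCopy.item;
        model.order = editedCopy.order;
        return true;
    }
}
```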

More information about this class level bean validation can be found on the associated showcase page. A complete overview of all that's new can be found on the what's new page.

Arjan Tijms

JSF and MVC 1.0, a comparison in code

One of the new specs that will debut in Java EE 8 will be MVC 1.0, a second MVC framework alongside the existing MVC framework JSF.

A lot has been written about this. Discussions have mostly been about the why, whether it isn't introduced too late in the game, and what the advantages (if any) above JSF exactly are. Among the advantages that were initially mentioned were the ability to have different templating engines, have better performance and the ability to be stateless. Discussions have furthermore also been about the name of this new framework.

This name can be somewhat confusing. Namely, the term MVC to contrast with JSF is perhaps technically not entirely accurate, as both are MVC frameworks. The flavor of MVC intended to be implemented by MVC 1.0 is actually "action-based MVC", most well known among Java developers as "MVC the way Spring MVC implements it". The flavor of MVC that JSF implements is "Component-based MVC". Alternative terms for this are MVC-push and MVC-pull.

One can argue that JSF since 2.0 has been moving to a more hybrid model; view parameters, the PreRenderView event and view actions have been key elements of this, but the best practice of having a single backing bean back a single view and things like injectable request parameters and eager request scoped beans have been contributing to this as well. The discussion of component-based MVC vs action-based MVC is therefore a little less black and white than it may initially seem, but of course at its core JSF clearly remains a component-based MVC framework.

When people took a closer look at the advantages mentioned above it became quickly clear they weren't quite specific to action-based MVC. JSF most definitely supports additional templating engines, there's a specific plug-in mechanism for that called the VDL (View Declaration Language). Stacked up against an MVC framework, JSF actually performs rather well, and of course JSF can be used stateless.

So the official motivation for introducing a second MVC framework in Java EE is largely not about a specific advantage that MVC 1.0 will bring to the table, but first and foremost about having a "different" approach. Depending on one's use case, either one of the approaches can be better, or suit one's mental model (perhaps based on experience) better, but very few claims are made about which approach is actually better.

Here we're also not going to investigate which approach is better, but will take a closer look at two actual code examples where the same functionality is implemented by both MVC 1.0 and JSF. Since MVC 1.0 is still in its early stages I took code examples from Spring MVC instead. It's expected that MVC 1.0 will be rather close to Spring MVC, not as to the actual APIs and plumbing used, but with regard to the overall approach and idea.

As I'm not a Spring MVC user myself, I took the examples from a Reddit discussion about this very topic. They are shown and discussed below:

CRUD

The first example is about a typical CRUD use case. The Spring controller is given first, followed by a backing bean in JSF.

Spring MVC


@Named
@RequestMapping("/appointments")
public class AppointmentsController {

    @Inject
    private AppointmentBook appointmentBook;

    @RequestMapping(value="/new", method = RequestMethod.GET)
    public String getNewForm(Model model) {
        model.addAttribute("appointment", new Appointment());
        return "appointment-edit";
    }

    @RequestMapping(value="/new", method = RequestMethod.POST)
    public String add(@Valid Appointment appointment, BindingResult result, RedirectAttributes redirectAttributes) {
        if (result.hasErrors()) {
            return "appointments/new";
        }
        appointmentBook.addAppointment(appointment);
        redirectAttributes.addFlashAttribute("message", "Successfully added " + appointment.getTitle());

        return "redirect:/appointments";
    }

}

JSF


@Named
@ViewScoped
public class NewAppointmentsBacking {

    @Inject
    private AppointmentBook appointmentBook;

    private Appointment appointment = new Appointment();

    public Appointment getAppointment() {
        return appointment;
    }

    public String add() {
        appointmentBook.addAppointment(appointment);
        addFlashMessage("Successfully added " + appointment.getTitle());

        return "/appointments?faces-redirect=true";
    }
}

As can be seen from the two code examples, there are at a first glance quite a number of similarities. However there are also a number of fundamental differences that are perhaps not immediately obvious.

Starting with the similarities, both versions are @Named and have the same service injected via the same @Inject annotation. When a URL is requested (via a GET), in both versions a new Appointment is instantiated. In the Spring version this happens in getNewForm(), in the JSF version via the instance field initializer. Both versions subsequently make this instance available to the view. In the Spring MVC version this happens by setting it as an attribute of the model object that's passed in, while in the JSF version this happens via a getter.

The view typically contains a form where a user is supposed to edit various properties of the Appointment shown above. When this form is posted back to the server, in both versions an add() method is called where the (edited) Appointment instance is saved via the service that was previously injected and a flash message is set.

Finally both versions return an outcome that redirects the user to a new page (PRG pattern). Spring MVC uses the syntax "redirect:/appointments" for this, while JSF uses "/appointments?faces-redirect=true" to express the same thing.

Despite the large number of similarities as observed above, there is a big fundamental difference between the two; the class shown for Spring MVC represents a controller. It's mapped directly to a URL and it's pretty much the first thing that is invoked. All of the above runs without having determined what the view will be. Values computed here are stored in a contextual object and a view is selected. We can think of this storing as pushing values (the view didn't ask for them, since it's not even selected at this point). Hence the alternative name "MVC push" for this approach.

The class shown for the JSF example is NOT a controller. In JSF the controller is provided by the framework. It selects a view based on the incoming URL and the outcome of a ResourceHandler. This will cause a view to execute, and as part of that execution a (backing) bean at some point will be pulled in. Only after this pull has been done will the logic of the class in question start executing. Because of this the alternative name for this approach is "MVC pull".

Over to the concrete differences; in the Spring MVC sample instantiating the Appointment had to be explicitly mapped to a URL and the view to be rendered afterwards is explicitly defined. In the JSF version, both URL and view are defaulted; it's the view from which the bean is pulled. A backing bean can override the default view to be rendered by using the aforementioned view action. This gives it some of the "feel" of a controller, but doesn't change the fundamental fact that the backing bean had to be pulled into scope by the initial view first (things like @Eager in OmniFaces do blur the lines further by instantiating beans before a view pulls them in).

The post back case shows something similar. In the Spring version the add() method is explicitly mapped to a URL, while in the JSF version it corresponds to an action method of the view that pulled the bean in.

There's another difference with respect to validation. In the Spring MVC example there's an explicit check to see if validation has failed and an explicit selection of a view to display errors. In this case that view is the same one again ("appointments/new"), but it's still provided explicitly. In the JSF example there's no explicit check. Instead, the code relies on the default of staying on the same view and not invoking the action method. In effect, the exact same thing happens in both cases but the mindset to get there is different.

Dynamically loading images

The second example is about a case where a list of images is rendered first and where subsequently the content of those images is dynamically provided by the beans in question. The Spring code is again given first, followed by the JSF code.

Spring MVC


<c:forEach items="${thumbnails}" var="thumbnail">
    <div>
        <div class="thumbnail">
            <img src="/thumbnails/${thumbnail.id}" />
        </div>
        <c:out value="${thumbnail.caption}" />
    </div>
</c:forEach>

@Controller
public class ThumbnailsController {

    @Inject
    private ThumbnailsDAO thumbnailsDAO;

    @RequestMapping(value = "/", method = RequestMethod.GET)
    public ModelAndView images() {
        ModelAndView mv = new ModelAndView("images");
        mv.addObject("thumbnails", thumbnailsDAO.getThumbnails());
        return mv;
    }

    @RequestMapping(value = "/thumbnails/{id}", method = RequestMethod.GET, produces = "image/jpeg")
    public @ResponseBody byte[] thumbnail(@PathVariable long id) {
        return thumbnailsDAO.getThumbnail(id);
    }
}

JSF


<ui:repeat value="#{thumbnails}" var="thumbnail">
    <div>
        <div class="thumbnail">
            <o:graphicImage value="#{thumbnailsBacking.thumbnail(thumbnail.id)}" />
        </div>
        #{thumbnail.caption}
    </div>
</ui:repeat>

@Model
public class ThumbnailsBacking {

    @Inject
    private ThumbnailsDAO thumbnailsDAO;

    @Produces @RequestScoped @Named("thumbnails")
    public List<Thumbnail> getThumbnails() {
        return thumbnailsDAO.getThumbnails();
    }

    public byte[] thumbnail(Long id) {
        return thumbnailsDAO.getThumbnail(id);
    }
}

Starting with the similarities again, we see that the markup for both views is fairly similar in structure. Both have an iteration tag that takes values from an input list called thumbnails and during each round of the iteration the ID of each individual thumbnail is used to render an image link.

Both the classes for Spring MVC and JSF call getThumbnails() on the injected DAO for the initial GET request, and both have a nearly identical thumbnail() method where getThumbnail(id) is called on the DAO in response to each request for a dynamic image that was rendered before.

Both versions also show that each framework has an alternative way to do what they do. In the Spring MVC example we see that instead of having a Model passed in and returning a String-based outcome, there's an alternative version that uses a ModelAndView instance, where the outcome is set on this object.

In the JSF version we see that instead of having an instance field + getter, there's an alternative version based on a producer method. In that variant the data is made available under the EL name "thumbnails", just as in the Spring MVC version.

On to the differences, we see that the Spring MVC version is again using explicit URLs. The otherwise identical thumbnail() method has an extra annotation for specifying the URL to which it's mapped. This very URL is the one that's used in the img tag in the view. JSF on the other hand doesn't ask to map the method to a URL. Instead, there's an EL expression used to point directly to the method that delivers the image content. The component (o:graphicImage here) then generates the URL.

While the producer method that we showed in the JSF example (getThumbnails()) looked like JSF was declaratively pushing a value, it's in fact still a pull. The method will not be called, and therefore a value not produced, until the EL variable "thumbnails" is resolved for the first time.

Another difference is that the view in the JSF example contains two components (ui:repeat and o:graphicImage) that adhere to JSF's component model, and that the view uses a templating language (Facelets) that is part of the JSF spec itself. Spring MVC (of course) doesn't specify a component model, and while it could theoretically come with its own templating language it doesn't have that one either. Instead, Spring MVC relies on external templating systems, e.g. JSP or Thymeleaf.

Finally, a remarkable difference is that the two very similar classes ThumbnailsController and ThumbnailsBacking are annotated with @Controller and @Model respectively; two completely opposite responsibilities of the MVC pattern. Indeed, in JSF everything that's referenced by the view (via EL expressions) is officially called the model. ThumbnailsBacking is from JSF's point of view the model. In practice the lines are a bit more blurred, and the backing bean is more akin to a plumbing component that sits between the model, view and controller.

Conclusion

We haven't gone in-depth to what it means to have a component model and what advantages that has, nor have we discussed in any detail what a RESTful architecture brings to the table. In passing we mentioned the concept of state, but did not look at that either. Instead, we mainly focussed on code examples for two different use cases and compared and contrasted these. In that comparison we tried as much as possible to refrain from any judgement about which approach is better, component based MVC or action-oriented MVC (as I'm one of the authors of the JSF utility library OmniFaces and a member of the JSF EG such a judgement would always be biased of course).

We saw that while the code examples at first glance have remarkable similarities there are in fact deep fundamental differences between the two approaches. It's an open question whether the future is with either one of those two, with a hybrid approach of them, or with both living next to each other. Java EE 8 at least will opt for that last option and will have both a component based MVC framework and an action-oriented one.

Arjan Tijms

Java EE authorization - JACC revisited part I

A while ago we took a look at container authorization in Java EE, which we saw was taken care of by a specification called JACC.

We saw that JACC offered a clear standardized hook into what's often seen as a completely opaque and container specific process, but that it also had a number of disadvantages. Furthermore we provided a partial (non-working) implementation of a JACC provider to illustrate the idea.

In this part of the article we'll revisit JACC by taking a closer look at some of the mentioned disadvantages and dive a little deeper in the concept of role mapping. In part II we'll be looking at a more complete implementation of the JACC provider that was shown before.

To refresh our memory, the following were the disadvantages that we previously discovered:

  • Arcane & verbose API
  • No portable way to see what the groups/roles are in a collection of Principals
  • No portable way to use the container's role to group mapper
  • No default implementation of a JACC provider active or even available
  • Mixing Java SE and EE permissions (which protect against totally different things) when security manager is used
  • JACC provider has to be installed for the entire AS; can not be registered from or for a single application

As it later on appeared though, there's a little more to say about a few of these items.

Role mapping

While it's indeed the case that there's no portable way to get to either the groups or the container's role to group mapper, it appeared there was something called the primary use case for which JACC was originally conceived.

For this primary use case the idea was that a custom JACC provider would be coupled with a (custom) authentication module that only provided a caller principal (which contains the user name). That JACC provider would then contact an (external) authorization system to fetch authorization data based on this single caller principal. This authorization data can then be a collection of roles or anything that the JACC provider can either locally map to roles, or something to which it can map the permissions that a PolicyConfiguration initially collects. For this use case it's indeed not necessary to have portable access to groups or a role to groups mapper.

Building on this primary use case, it also appears that JASPIC auth modules in fact do have a means to put a specific implementation of a caller principal into the subject. JASPIC being JASPIC with its bare minimum of TCK tests this of course didn't work on all containers and there's still a gap present where the container is allowed to "map" that principal (whatever this means), but the basic idea is there. A JACC provider that knows about the auth module being used can then unambiguously pick out the caller principal from the set of principals in a subject. All of this would be so much simpler though if the caller principal was simply standardized in the first place, but alas.

To illustrate the basic process for a custom JACC provider according to this primary use case:


Auth module——provides——► Caller Principal (name = "someuser")

JACC provider——contacts—with—"someuser"——► Authorization System

Authorization System——returns——► roles ["admin", "architect"]

JACC provider——indexes—with—"admin"——► rolesToPermissions
JACC provider——indexes with—"architect"——► rolesToPermissions

As can be seen above there is no need for role mapping in this primary use case.
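Sketched in code, the last two steps could look as follows. This is a simplified illustration, not an actual PolicyConfiguration: the external authorization system is faked with a hard-coded lookup, and java.util.PropertyPermission stands in for the real JACC permission types.

```java
import java.security.Permissions;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.PropertyPermission;

public class PrimaryUseCaseIndexing {

    // rolesToPermissions as initially collected by the PolicyConfiguration
    static Map<String, Permissions> rolesToPermissions = new HashMap<>();

    static {
        Permissions adminPermissions = new Permissions();
        adminPermissions.add(new PropertyPermission("app.admin", "read"));
        rolesToPermissions.put("admin", adminPermissions);

        Permissions architectPermissions = new Permissions();
        architectPermissions.add(new PropertyPermission("app.design", "read"));
        rolesToPermissions.put("architect", architectPermissions);
    }

    // Stand-in for contacting the (external) authorization system with the caller name
    static List<String> fetchRoles(String callerName) {
        return List.of("admin", "architect");
    }

    // Index rolesToPermissions with each returned role and collect the results
    static Permissions permissionsFor(String callerName) {
        Permissions result = new Permissions();
        for (String role : fetchRoles(callerName)) {
            Permissions perRole = rolesToPermissions.get(role);
            if (perRole != null) {
                perRole.elements().asIterator().forEachRemaining(result::add);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(permissionsFor("someuser")
            .implies(new PropertyPermission("app.admin", "read"))); // prints "true"
    }
}
```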

For the default implementation of a proprietary JACC provider that ships with a Java EE container the basic process is a little bit different as shown next:









Role to group mapping in place

Role          Groups
"admin"       ["admin-group"]
"architect"   ["architect-group"]
"expert"      ["expert-group"]


JACC provider——calls—with—["admin", "architect", "expert"] ——► Role Mapper
Role mapper——returns——► ["admin-group", "architect-group", "expert-group"]

Auth module——provides——► Caller Principal (name = "someuser")
Auth module——provides——► Group Principal (name = "admin-group", name = "architect-group")

JACC provider maps "admin-group" to "admin"
JACC provider maps "architect-group" to "architect"

JACC provider——indexes—with—"admin"——► rolesToPermissions
JACC provider——indexes—with—"architect"——► rolesToPermissions

In the second use case the role mapper and possibly knowledge of which principals represent groups is needed, but since this JACC provider is the one that ships with a Java EE container it's arguably "allowed" to use proprietary techniques.

Do note that the mapping technique shown maps a subject's groups to roles, and uses that to check permissions. While this may conceptually be the most straightforward approach, it's not the only way.

Groups to permission mapping

An alternative approach is to remap the roles-to-permission collection to a groups-to-permission collection using the information from the role mapper. This is what both GlassFish and WebLogic implicitly do when they write out their granted.policy file.

The following is an illustration of this process. Suppose we have a role to permissions map as shown in the following table:

Role-to-permissions

Role      Permissions
"admin"   [WebResourcePermission("/protected/*", "GET")]

This means a user that's in the logical application role "admin" is allowed to do a GET request for resources in the /protected folder. Now suppose the role mapper gave us the following role to group mapping:

Role-to-groups

Role      Groups
"admin"   ["admin-group", "adm"]

This means the logical application role "admin" is mapped to the groups "admin-group" and "adm". What we can now do is first reverse the last mapping into a group-to-roles map as shown in the following table:

Group-to-roles

Group           Roles
"admin-group"   ["admin"]
"adm"           ["admin"]

Subsequently we can iterate over this new map and look up the permissions associated with each role in the existing role-to-permissions map to create our target group-to-permissions map. This is shown in the table below:

Group-to-permissions

Group           Permissions
"admin-group"   [WebResourcePermission("/protected/*", "GET")]
"adm"           [WebResourcePermission("/protected/*", "GET")]
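The two remapping steps just described (reversing role-to-groups, then composing with role-to-permissions) can be sketched as follows, using plain maps and java.util.PropertyPermission as a stand-in for WebResourcePermission:

```java
import java.security.Permissions;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.PropertyPermission;

public class GroupPermissionMapper {

    // Compose role->permissions with role->groups into group->permissions
    static Map<String, Permissions> groupToPermissions(
            Map<String, Permissions> roleToPermissions,
            Map<String, List<String>> roleToGroups) {

        // First reverse role->groups into group->roles
        Map<String, List<String>> groupToRoles = new HashMap<>();
        for (Map.Entry<String, List<String>> entry : roleToGroups.entrySet()) {
            for (String group : entry.getValue()) {
                groupToRoles.computeIfAbsent(group, g -> new ArrayList<>()).add(entry.getKey());
            }
        }

        // Then look up each role's permissions to build group->permissions
        Map<String, Permissions> result = new HashMap<>();
        for (Map.Entry<String, List<String>> entry : groupToRoles.entrySet()) {
            Permissions permissions = new Permissions();
            for (String role : entry.getValue()) {
                Permissions perRole = roleToPermissions.get(role);
                if (perRole != null) {
                    perRole.elements().asIterator().forEachRemaining(permissions::add);
                }
            }
            result.put(entry.getKey(), permissions);
        }
        return result;
    }

    public static void main(String[] args) {
        Permissions adminPermissions = new Permissions();
        adminPermissions.add(new PropertyPermission("protected", "read"));

        Map<String, Permissions> result = groupToPermissions(
            Map.of("admin", adminPermissions),
            Map.of("admin", List.of("admin-group", "adm")));

        System.out.println(result.get("adm")
            .implies(new PropertyPermission("protected", "read"))); // prints "true"
    }
}
```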

Finally, consider a current subject with principals as shown in the next table:

Subject's principals

Type                                 Name
com.somevendor.CallerPrincipalImpl   "someuser"
com.somevendor.GroupPrincipalImpl    "admin-group"
com.somevendor.GroupPrincipalImpl    "architect-group"

Given the above shown group to permissions map and subject's principals, a JACC provider can now iterate over the group principals that belong to this subject and via the map check each such group against the permissions for that group. Note that the JACC provider does have to know that com.somevendor.GroupPrincipalImpl is the principal type that represents groups.
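The resulting access decision could then look roughly like this sketch, where GroupPrincipalImpl mimics the vendor-specific group principal type that the provider has to know about:

```java
import java.security.Permission;
import java.security.Permissions;
import java.security.Principal;
import java.util.Map;
import java.util.PropertyPermission;
import javax.security.auth.Subject;

public class GroupBasedDecision {

    // Stand-in for the vendor-specific principal type that represents groups
    public static class GroupPrincipalImpl implements Principal {
        private final String name;
        public GroupPrincipalImpl(String name) { this.name = name; }
        @Override public String getName() { return name; }
    }

    // Check a permission against the permissions of every group the subject is in
    public static boolean implies(Subject subject, Map<String, Permissions> groupToPermissions, Permission permission) {
        for (Principal principal : subject.getPrincipals()) {
            // The provider has to know which principal type represents groups
            if (principal instanceof GroupPrincipalImpl) {
                Permissions permissions = groupToPermissions.get(principal.getName());
                if (permissions != null && permissions.implies(permission)) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Permissions perms = new Permissions();
        perms.add(new PropertyPermission("protected", "read"));

        Subject subject = new Subject();
        subject.getPrincipals().add(new GroupPrincipalImpl("admin-group"));

        System.out.println(implies(subject, Map.of("admin-group", perms),
            new PropertyPermission("protected", "read"))); // prints "true"
    }
}
```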

Principal to permission mapping

Yet another alternative approach is to remap the roles-to-permission collection to a principals-to-permission collection, again using the information from the role mapper. This is what both Geronimo and GlassFish's optional SimplePolicyProvider do.

Principal to permission mapping basically works like group to permission mapping, except that the JACC provider doesn't need to have knowledge of the principals involved. For the JACC provider those principals are pretty much opaque then, and it doesn't matter if they represent groups, callers, or something else entirely. All the JACC provider does is compare (using equals() or implies()) principals in the map against those in the subject.

The following code fragment taken from Geronimo 3.0.1 demonstrates the mapping algorithm:


for (Map.Entry<Principal, Set<String>> principalEntry : principalRoleMapping.entrySet()) {
    Principal principal = principalEntry.getKey();
    Permissions principalPermissions = principalPermissionsMap.get(principal);

    if (principalPermissions == null) {
        principalPermissions = new Permissions();
        principalPermissionsMap.put(principal, principalPermissions);
    }

    Set<String> roleSet = principalEntry.getValue();
    for (String role : roleSet) {
        Permissions permissions = rolePermissionsMap.get(role);
        if (permissions == null) {
            continue;
        }
        for (Enumeration<Permission> rolePermissions = permissions.elements(); rolePermissions.hasMoreElements();) {
            principalPermissions.add(rolePermissions.nextElement());
        }
    }
}

In the code fragment above rolePermissionsMap is the map the provider created before the mapping, principalRoleMapping is the mapping from the role mapper, and principalPermissionsMap is the final map that's used for access decisions.
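For illustration, an access decision against such a principal-keyed map might look as follows. Here the principals are indeed opaque to the decision logic; the map lookup relies only on equals()/hashCode() (OpaquePrincipal is a hypothetical stand-in for whatever principal types the container uses):

```java
import java.security.Permission;
import java.security.Permissions;
import java.security.Principal;
import java.util.Map;
import java.util.PropertyPermission;
import javax.security.auth.Subject;

public class PrincipalBasedDecision {

    // Simple opaque principal; the provider doesn't care whether it's a group or a caller
    public static class OpaquePrincipal implements Principal {
        private final String name;
        public OpaquePrincipal(String name) { this.name = name; }
        @Override public String getName() { return name; }
        @Override public boolean equals(Object other) {
            return other instanceof OpaquePrincipal && ((OpaquePrincipal) other).name.equals(name);
        }
        @Override public int hashCode() { return name.hashCode(); }
    }

    // Decision: does any principal in the subject map to permissions implying the requested one?
    public static boolean implies(Subject subject, Map<Principal, Permissions> principalPermissionsMap, Permission permission) {
        for (Principal principal : subject.getPrincipals()) {
            Permissions permissions = principalPermissionsMap.get(principal); // lookup uses equals()
            if (permissions != null && permissions.implies(permission)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Permissions perms = new Permissions();
        perms.add(new PropertyPermission("app.config", "read"));

        Subject subject = new Subject();
        subject.getPrincipals().add(new OpaquePrincipal("admin-group"));

        System.out.println(implies(subject,
            Map.<Principal, Permissions>of(new OpaquePrincipal("admin-group"), perms),
            new PropertyPermission("app.config", "read"))); // prints "true"
    }
}
```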

Default JACC provider

Several full Java EE implementations do not ship with an activated JACC provider, which makes it extremely troublesome for portable Java EE applications to just make use of JACC for things like asking whether a user will be allowed to access, say, a given URL.

As it appears, Java EE implementations are actually required to ship with an activated JACC provider and are even required to use it for access decisions. Clearly there's no TCK test for this, so just as we saw with JASPIC, vendors take different approaches in absence of such a test. In the end it doesn't matter so much what the spec says, as it's the TCK that has the final word on compatibility certification. In this case, the TCK clearly says it's NOT required, while as mentioned the spec says it is. Why both JASPIC and JACC have historically been tested so little is still not entirely clear, but I have it on good authority (no pun ;)) that the situation is going to be improved.

So while this is theoretically not a spec issue, it is still very much a practical issue. I looked at 6 Java EE implementations and found the following:

JACC default providers

Server            JACC provider present   JACC provider activated   Vendor discourages activating JACC
JBoss EAP 6.3     V                       V                         X
GlassFish 4.1     V                       V                         X
Geronimo 3.0.1    V                       V                         X
WebLogic 12.1.3   V                       X                         V
JEUS 8 preview    V                       X                         V
WebSphere 8.5     X                       X                         - (no provider present, so nothing to discourage)

As can be seen only half of the servers investigated have JACC actually enabled. WebLogic 12.1.3 and JEUS 8 preview both do ship with a JACC policy provider, but it has to be enabled explicitly. Both WebLogic and JEUS 8 in their documentation somewhat advise against using JACC. TMaxSoft warns in its JEUS 7 security manual (there's not one for JEUS 8 yet) that the default JACC provider that will be activated is mainly for testing, and advises against using it for real production usage.

WebSphere does not even ship with any default JACC policy provider, at least not that I could find. There's only a Tivoli Access Manager client, for which you have to install a separate external authorization server.

I haven't yet investigated Interstage AS, Cosminexus and WebOTX, but I hope to be able to look at them at a later stage.

Conclusion

Given the historical background of JACC it's a little bit more understandable why access to the role mapper was never standardized. Still, it is something that's needed for other use cases than the historical primary use case, so after all this time is still something that would be welcome to have. Another huge disadvantage of JACC, the fact that it's simply not always there in Java EE, appeared to be yet another case of incomplete TCK coverage.

Arjan Tijms

Java EE authorization - JACC revisited part II

This is the second part of a series where we revisit JACC after taking an initial look at it last year. In the first part we somewhat rectified a few of the disadvantages that were initially discovered and looked at various role mapping strategies.

In this second part we'll take an in-depth look at obtaining the container specific role mapper and the container specific way of how a JACC provider is deployed. In the next and final part we'll be bringing it all together and present a fully working JACC provider.

Container specifics

The way in which to obtain the role mapper and what data it exactly provides differs greatly for each container, and is something that containers don't really document either. Also, although the two system properties that need to be specified for the two JACC artifacts are standardized, it's often not at all clear how the jar file containing the JACC provider implementation classes has to be added to the container's class path.

After much research I obtained the details on how to do this for the following servers:

  • GlassFish 4.1
  • WebLogic 12.1.3
  • Geronimo 3.0.1
This list is admittedly limited, but as it appeared the process of finding out these details can be rather time consuming and frankly maddening. Given the amount of time that already went into this research I decided to leave it at these three, but hope to look into additional servers at a later date.

The JACC provider that we'll present in the next part will use a RoleMapper class that at runtime tries to obtain the native mapper from each known server using reflection (so as to avoid compile-time dependencies). Whatever the native role mapper returns is transformed to a group to roles map first (see part I for more details on the various mappings). In the sections below the specific reflective code for each server is given first. The full RoleMapper class is given afterwards.

GlassFish

The one server where the role mapper was simple to obtain was GlassFish. The code showing how to do this is clearly visible in the in-memory example JACC provider that ships with GlassFish. A slightly confusing thing is that the example class and its interface contain many methods that aren't actually used. Based on this example the reflective code and mapping became as follows:


private boolean tryGlassFish(String contextID, Collection<String> allDeclaredRoles) {

    try {
        Class<?> SecurityRoleMapperFactoryClass = Class.forName("org.glassfish.deployment.common.SecurityRoleMapperFactory");

        Object factoryInstance = Class.forName("org.glassfish.internal.api.Globals")
            .getMethod("get", SecurityRoleMapperFactoryClass.getClass())
            .invoke(null, SecurityRoleMapperFactoryClass);

        Object securityRoleMapperInstance = SecurityRoleMapperFactoryClass.getMethod("getRoleMapper", String.class)
            .invoke(factoryInstance, contextID);

        @SuppressWarnings("unchecked")
        Map<String, Subject> roleToSubjectMap = (Map<String, Subject>) Class.forName("org.glassfish.deployment.common.SecurityRoleMapper")
            .getMethod("getRoleToSubjectMapping")
            .invoke(securityRoleMapperInstance);

        for (String role : allDeclaredRoles) {
            if (roleToSubjectMap.containsKey(role)) {
                Set<Principal> principals = roleToSubjectMap.get(role).getPrincipals();

                List<String> groups = getGroupsFromPrincipals(principals);
                for (String group : groups) {
                    if (!groupToRoles.containsKey(group)) {
                        groupToRoles.put(group, new ArrayList<String>());
                    }
                    groupToRoles.get(group).add(role);
                }

                if ("**".equals(role) && !groups.isEmpty()) {
                    // JACC spec 3.2 states:
                    //
                    // "For the any "authenticated user role", "**", and unless an application specific mapping has
                    // been established for this role, the provider must ensure that all permissions added to the
                    // role are granted to any authenticated user."
                    //
                    // Here we check for the "unless" part mentioned above. If we're dealing with the "**" role here
                    // and groups is not empty, then there's an application specific mapping and "**" maps only to
                    // those groups, not to any authenticated user.
                    anyAuthenticatedUserRoleMapped = true;
                }
            }
        }

        return true;

    } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException
            | InvocationTargetException e) {
        return false;
    }
}

Finding out how to install the JACC provider took a bit more time. For some reason the documentation doesn't mention it, but the location to put the mentioned jar file is simply:


[glassfish_home]/glassfish/lib
GlassFish has a convenience mechanism to put a named JACC configuration in the following file:

[glassfish_home]/glassfish/domains/domain1/config/domain.xml
This name has to be added to the security-service element, together with a jacc-provider element that specifies both the policy and factory classes, as follows:

<security-service jacc="test">
<!-- Other elements here -->
<jacc-provider policy-provider="test.TestPolicy" name="test" policy-configuration-factory-provider="test.TestPolicyConfigurationFactory"></jacc-provider>
</security-service>

WebLogic

WebLogic turned out to be a great deal more difficult than GlassFish. Being closed source you can't just look into any default JACC provider, but as it happens the WebLogic documentation mentions (in fact, requires) a pluggable role mapper:


-Dweblogic.security.jacc.RoleMapperFactory.provider=weblogic.security.jacc.simpleprovider.RoleMapperFactoryImpl
Unfortunately, even though an option for a role mapper factory class is used, there's no documentation on what one's own role mapper factory should do (which interfaces it should implement, which interfaces the actual role mapper it returns should implement etc).

After a fair amount of Googling I eventually found that what appears to be a super class is documented. Furthermore, the interface of a type called RoleMapper is documented as well.

Unfortunately that last interface does not contain any of the actual methods to do role mapping, so you can't use an implementation of just this. This was all really surprising; WebLogic gives the option to specify a role mapper factory, but key details are missing. Still, the above gave just enough hints for some reflective experiments, and after a lot of trial and error I came to the following code that seemed to do the trick:


private boolean tryWebLogic(String contextID, Collection<String> allDeclaredRoles) {

    try {
        // See http://docs.oracle.com/cd/E21764_01/apirefs.1111/e13941/weblogic/security/jacc/RoleMapperFactory.html
        Class<?> roleMapperFactoryClass = Class.forName("weblogic.security.jacc.RoleMapperFactory");

        // The RoleMapperFactory implementation class always seems to be the value of what is passed on the command line
        // via the -Dweblogic.security.jacc.RoleMapperFactory.provider option.
        // See http://docs.oracle.com/cd/E57014_01/wls/SCPRG/server_prot.htm
        Object roleMapperFactoryInstance = roleMapperFactoryClass.getMethod("getRoleMapperFactory")
            .invoke(null);

        // See http://docs.oracle.com/cd/E21764_01/apirefs.1111/e13941/weblogic/security/jacc/RoleMapperFactory.html#getRoleMapperForContextID(java.lang.String)
        Object roleMapperInstance = roleMapperFactoryClass.getMethod("getRoleMapperForContextID", String.class)
            .invoke(roleMapperFactoryInstance, contextID);

        // This seems really awkward; the Map contains BOTH group names and user names, without ANY way to
        // distinguish between the two.
        // If a user now has a name that happens to be a role as well, we have an issue :X
        @SuppressWarnings("unchecked")
        Map<String, String[]> roleToPrincipalNamesMap = (Map<String, String[]>) Class.forName("weblogic.security.jacc.simpleprovider.RoleMapperImpl")
            .getMethod("getRolesToPrincipalNames")
            .invoke(roleMapperInstance);

        for (String role : allDeclaredRoles) {
            if (roleToPrincipalNamesMap.containsKey(role)) {

                List<String> groupsOrUserNames = asList(roleToPrincipalNamesMap.get(role));

                for (String groupOrUserName : roleToPrincipalNamesMap.get(role)) {
                    // Ignore the fact that the collection also contains user names and hope
                    // that there are no user names in the application with the same name as a group
                    if (!groupToRoles.containsKey(groupOrUserName)) {
                        groupToRoles.put(groupOrUserName, new ArrayList<String>());
                    }
                    groupToRoles.get(groupOrUserName).add(role);
                }

                if ("**".equals(role) && !groupsOrUserNames.isEmpty()) {
                    // JACC spec 3.2 states: [...]
                    anyAuthenticatedUserRoleMapped = true;
                }
            }
        }

        return true;

    } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException
            | InvocationTargetException e) {
        return false;
    }
}

Adding the two standard system properties for WebLogic appeared to be done most conveniently in the file:


[wls_home]/user_projects/domains/mydomain/bin/setDomainEnv.sh
There's a comment in the file that says to uncomment a section to use JACC, but that is completely wrong. If you do uncomment it, the server will not start: it consists of a few -D options, each at the beginning of a line, but at that point in the file you can't specify -D options that way. Furthermore it suggests that it's required to activate the Java SE security manager, but LUCKILY this is NOT the case. From WebLogic 12.1.3 onwards the security manager is no longer required (which is a huge win for working with JACC on WebLogic). The following does work though for our own JACC provider:

JACC_PROPERTIES="-Djavax.security.jacc.policy.provider=test.TestPolicy -Djavax.security.jacc.PolicyConfigurationFactory.provider=test.TestPolicyConfigurationFactory -Dweblogic.security.jacc.RoleMapperFactory.provider=weblogic.security.jacc.simpleprovider.RoleMapperFactoryImpl "

JAVA_PROPERTIES="${JAVA_PROPERTIES} ${EXTRA_JAVA_PROPERTIES} ${JACC_PROPERTIES}"
export JAVA_PROPERTIES
For completeness and future reference, the following definition for JACC_PROPERTIES activates the provided JACC provider:

# JACC_PROPERTIES="-Djavax.security.jacc.policy.provider=weblogic.security.jacc.simpleprovider.SimpleJACCPolicy -Djavax.security.jacc.PolicyConfigurationFactory.provider=weblogic.security.jacc.simpleprovider.PolicyConfigurationFactoryImpl -Dweblogic.security.jacc.RoleMapperFactory.provider=weblogic.security.jacc.simpleprovider.RoleMapperFactoryImpl "
(Do note that WebLogic violates the Java EE spec here. Such activation should NOT be needed, as a JACC provider should be active by default.)

The location of where to put the JACC provider jar was not as straightforward. I tried the [wls_home]/user_projects/domains/mydomain/lib folder, and although WebLogic did seem to detect "something" here, as it would log during startup that it encountered a library and was adding it, it would not actually work and class not found exceptions followed. After some fiddling I got around this by adding the following at the point where the CLASSPATH variable is exported:


CLASSPATH="${DOMAIN_HOME}/lib/jacctest-0.0.1-SNAPSHOT.jar:${CLASSPATH}"
export CLASSPATH
I'm not sure if this is the recommended approach, but it seemed to do the trick.

Geronimo

Where WebLogic was a great deal more difficult than GlassFish, Geronimo unfortunately was more difficult still. In two decades of working with a variety of platforms and languages, getting this to work ranks pretty high on my list of downright bizarre exercises. The only thing that comes close is getting some obscure undocumented ActiveX control to work in a C++ Windows app around 1997.

The role mapper in Geronimo is not directly accessible via some factory or service as in GlassFish and WebLogic; instead there's a map containing the mapping, which is injected into a Geronimo specific JACC provider that extends something and implements many interfaces. As we obviously don't have (or want to have) a Geronimo specific provider, I tried to find out how this injection exactly works.

Things start with a class called GeronimoSecurityBuilderImpl that parses the XML that expresses the role mapping. Nothing too obscure here. This class then registers a so-called GBean (a kind of Geronimo specific JMX bean) to which it passes the previously mentioned Map, and then registers a second GBean that it gives a reference to this first GBean. Meanwhile, the Geronimo specific policy configuration factory, called GeronimoPolicyConfigurationFactory, "registers" itself via a static method on one of the GBeans mentioned before. Those GBeans at some point start running, use the factory that was set by the static method to get a Geronimo specific policy configuration, and then call a method on that to pass the Map containing the role mapping.

Now this scheme is not only rather convoluted to say the least, there's also no way to get to this map from anywhere else without resorting to very ugly hacks and using reflection to peek into private instance variables. It was possible to programmatically obtain a GBean, but the one we're after has many instances and it didn't prove easy to get the one that applies to the current web app. There seemed to be an option if you know the Maven-like coordinates of your own app, but I didn't want to hardcode these and didn't find an API to obtain them programmatically. Via the source I noticed another way, via some metadata about a GBean, but there was no API available to obtain that either.

After spending far more hours than I'm willing to admit, I finally came to the following code to obtain the Map I was after:


private void tryGeronimoAlternative() {
    Kernel kernel = KernelRegistry.getSingleKernel();

    try {
        ClassLoader contextClassLoader = Thread.currentThread().getContextClassLoader();

        Field registryField = kernel.getClass().getDeclaredField("registry");
        registryField.setAccessible(true);
        BasicRegistry registry = (BasicRegistry) registryField.get(kernel);

        Set<GBeanInstance> instances = registry.listGBeans(new AbstractNameQuery(null, Collections.EMPTY_MAP, ApplicationPrincipalRoleConfigurationManager.class.getName()));

        Map<Principal, Set<String>> principalRoleMap = null;
        for (GBeanInstance instance : instances) {

            Field classLoaderField = instance.getClass().getDeclaredField("classLoader");
            classLoaderField.setAccessible(true);
            ClassLoader gBeanClassLoader = (ClassLoader) classLoaderField.get(instance);

            if (gBeanClassLoader.equals(contextClassLoader)) {

                ApplicationPrincipalRoleConfigurationManager manager = (ApplicationPrincipalRoleConfigurationManager) instance.getTarget();
                Field principalRoleMapField = manager.getClass().getDeclaredField("principalRoleMap");
                principalRoleMapField.setAccessible(true);

                principalRoleMap = (Map<Principal, Set<String>>) principalRoleMapField.get(manager);
                break;
            }
        }

        // process principalRoleMap here

    } catch (InternalKernelException | IllegalStateException | NoSuchFieldException | SecurityException | IllegalArgumentException | IllegalAccessException e1) {
        // Ignore
    }
}
Note that this is the "raw" code, not yet converted to be fully reflection based like the GlassFish and WebLogic examples, and not yet converting the principalRoleMap to the uniform format we use.

In order to install the custom JACC provider I looked for a config file or startup script, but there didn't seem to be an obvious one. So I just supplied the standardized options directly on the command line as follows:


-Djavax.security.jacc.policy.provider=test.TestPolicy
-Djavax.security.jacc.PolicyConfigurationFactory.provider=test.TestPolicyConfigurationFactory
I then tried to find a place to put the jar again, but simply couldn't find one. There just doesn't seem to be any mechanism to extend Geronimo's class path for the entire server, which is (perhaps unfortunately) what JACC needs. There were some options for individual deployments, but this cannot work for JACC since the Policy instance is called at a very low level and for everything that is deployed on the server. Geronimo by default deploys about 10 applications for all kinds of things. Messing with each and every one of them just isn't feasible.

What I eventually did is perhaps one of the biggest hacks ever; I injected the required classes directly into the Geronimo library that contains the default JACC provider. After all, this provider is already used, so surely Geronimo has to be able to load my custom provider from THIS location :X

All libraries in Geronimo are OSGI bundles, so in addition to just injecting my classes I also had to adjust the MANIFEST, but after doing that Geronimo was FINALLY able to find my custom JACC provider. The MANIFEST was updated by copying the existing one from the jar and adding the following to it:


test;uses:="org.apache.geronimo.security.jaspi,javax.security.auth,org.apache.geronimo.security,org.apache.geronimo.security.realm.providers,org.apache.geronimo.security.jaas,javax.security.auth.callback,javax.security.auth.login,javax.security.auth.message.callback"
And then running the zip command as follows:

zip /test/geronimo-tomcat7-javaee6-3.0.1/repository/org/apache/geronimo/framework/geronimo-security/3.0.1/geronimo-security-3.0.1.jar META-INF/MANIFEST.MF
From the root directory where my compiled classes live I executed the following command to inject them:

jar uf /test/geronimo-tomcat7-javaee6-3.0.1/repository/org/apache/geronimo/framework/geronimo-security/3.0.1/geronimo-security-3.0.1.jar test/*
I happily admit it's pretty insane to do it like this. Hopefully this is not really the way to do it, and there's a sane way that I just happened to miss, or that someone with deep Geronimo knowledge would "just know".

Much to my dismay, the absurdity didn't end there. As it appears the previously mentioned GBeans act as a kind of protection mechanism to ensure only Geronimo specific JACC providers are installed. Since the entire purpose of the exercise is to install a general universal JACC provider, turning it into a Geronimo specific one obviously wasn't an option. The scarce documentation vaguely hints at replacing some of these GBeans or the security builder specifically for your application, but since JACC is installed for the entire server this just isn't feasible.

Eventually I tricked Geronimo into thinking a Geronimo specific JACC provider was installed by instantiating (via reflection) a dummy Geronimo policy provider factory and putting intercepting proxies into it to prevent a NPE that would otherwise ensue. As a side effect of this hack to beat Geronimo's "protection" I could capture the map I previously grabbed via reflective hacks somewhat easier.

The code to install the dummy factory:


try {
    // Geronimo 3.0.1 contains a protection mechanism to ensure only a Geronimo policy provider is installed.
    // This protection can be beat by creating an instance of GeronimoPolicyConfigurationFactory once. This instance
    // will statically register itself with an internal Geronimo class
    geronimoPolicyConfigurationFactoryInstance = Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfigurationFactory").newInstance();
    geronimoContextToRoleMapping = new ConcurrentHashMap<>();
} catch (Exception e) {
    // ignore
}
The code to put the capturing policy configurations in place:

// Are we dealing with Geronimo?
if (geronimoPolicyConfigurationFactoryInstance != null) {

    // PrincipalRoleConfiguration

    try {
        Class<?> geronimoPolicyConfigurationClass = Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfiguration");

        Object geronimoPolicyConfigurationProxy = Proxy.newProxyInstance(TestRoleMapper.class.getClassLoader(), new Class[] {geronimoPolicyConfigurationClass}, new InvocationHandler() {

            @SuppressWarnings("unchecked")
            @Override
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {

                // Take special action on the following method:

                // void setPrincipalRoleMapping(Map<Principal, Set<String>> principalRoleMap) throws PolicyContextException;
                if (method.getName().equals("setPrincipalRoleMapping")) {
                    geronimoContextToRoleMapping.put(contextID, (Map<Principal, Set<String>>) args[0]);
                }

                return null;
            }
        });

        // Set the proxy on the GeronimoPolicyConfigurationFactory so it will call us back later with the role mapping via the following method:

        // public void setPolicyConfiguration(String contextID, GeronimoPolicyConfiguration configuration) {
        Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfigurationFactory")
            .getMethod("setPolicyConfiguration", String.class, geronimoPolicyConfigurationClass)
            .invoke(geronimoPolicyConfigurationFactoryInstance, contextID, geronimoPolicyConfigurationProxy);

    } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
        // Ignore
    }
}
And finally the code to transform the map into our uniform target map:

private boolean tryGeronimo(String contextID, Collection<String> allDeclaredRoles) {
    if (geronimoContextToRoleMapping != null) {

        if (geronimoContextToRoleMapping.containsKey(contextID)) {
            Map<Principal, Set<String>> principalsToRoles = geronimoContextToRoleMapping.get(contextID);

            for (Map.Entry<Principal, Set<String>> entry : principalsToRoles.entrySet()) {

                // Convert the principal that's used as the key in the Map to a list of zero or more groups.
                // (for Geronimo we know that using the default role mapper it's always zero or one group)
                for (String group : principalToGroups(entry.getKey())) {
                    if (!groupToRoles.containsKey(group)) {
                        groupToRoles.put(group, new ArrayList<String>());
                    }
                    groupToRoles.get(group).addAll(entry.getValue());

                    if (entry.getValue().contains("**")) {
                        // JACC spec 3.2 states: [...]
                        anyAuthenticatedUserRoleMapped = true;
                    }
                }
            }
        }

        return true;
    }

    return false;
}

The role mapper class

After having taken a look at the code for each individual server in isolation above, it's now time to show the full code for the RoleMapper class. This is the class that the JACC provider we'll present in the next part will use as the universal way to obtain the server's role mapping, as if this was already standardized:


package test;

import static java.util.Arrays.asList;
import static java.util.Collections.list;

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.security.Principal;
import java.security.acl.Group;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import javax.security.auth.Subject;

public class TestRoleMapper {

    private static Object geronimoPolicyConfigurationFactoryInstance;
    private static ConcurrentMap<String, Map<Principal, Set<String>>> geronimoContextToRoleMapping;

    private Map<String, List<String>> groupToRoles = new HashMap<>();

    private boolean oneToOneMapping;
    private boolean anyAuthenticatedUserRoleMapped = false;

    public static void onFactoryCreated() {
        tryInitGeronimo();
    }

    private static void tryInitGeronimo() {
        try {
            // Geronimo 3.0.1 contains a protection mechanism to ensure only a Geronimo policy provider is installed.
            // This protection can be beat by creating an instance of GeronimoPolicyConfigurationFactory once. This instance
            // will statically register itself with an internal Geronimo class
            geronimoPolicyConfigurationFactoryInstance = Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfigurationFactory").newInstance();
            geronimoContextToRoleMapping = new ConcurrentHashMap<>();
        } catch (Exception e) {
            // ignore
        }
    }

    public static void onPolicyConfigurationCreated(final String contextID) {

        // Are we dealing with Geronimo?
        if (geronimoPolicyConfigurationFactoryInstance != null) {

            // PrincipalRoleConfiguration

            try {
                Class<?> geronimoPolicyConfigurationClass = Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfiguration");

                Object geronimoPolicyConfigurationProxy = Proxy.newProxyInstance(TestRoleMapper.class.getClassLoader(), new Class[] {geronimoPolicyConfigurationClass}, new InvocationHandler() {

                    @SuppressWarnings("unchecked")
                    @Override
                    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {

                        // Take special action on the following method:

                        // void setPrincipalRoleMapping(Map<Principal, Set<String>> principalRoleMap) throws PolicyContextException;
                        if (method.getName().equals("setPrincipalRoleMapping")) {
                            geronimoContextToRoleMapping.put(contextID, (Map<Principal, Set<String>>) args[0]);
                        }

                        return null;
                    }
                });

                // Set the proxy on the GeronimoPolicyConfigurationFactory so it will call us back later with the role mapping via the following method:

                // public void setPolicyConfiguration(String contextID, GeronimoPolicyConfiguration configuration) {
                Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfigurationFactory")
                    .getMethod("setPolicyConfiguration", String.class, geronimoPolicyConfigurationClass)
                    .invoke(geronimoPolicyConfigurationFactoryInstance, contextID, geronimoPolicyConfigurationProxy);

            } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
                // Ignore
            }
        }
    }

    public TestRoleMapper(String contextID, Collection<String> allDeclaredRoles) {
        // Initialize the groupToRoles map

        // Try to get a hold of the proprietary role mapper of each known
        // AS. Sad that this is needed :(
        if (tryGlassFish(contextID, allDeclaredRoles)) {
            return;
        } else if (tryWebLogic(contextID, allDeclaredRoles)) {
            return;
        } else if (tryGeronimo(contextID, allDeclaredRoles)) {
            return;
        } else {
            oneToOneMapping = true;
        }
    }

    public List<String> getMappedRolesFromPrincipals(Principal[] principals) {
        return getMappedRolesFromPrincipals(asList(principals));
    }

    public boolean isAnyAuthenticatedUserRoleMapped() {
        return anyAuthenticatedUserRoleMapped;
    }

public List<String> getMappedRolesFromPrincipals(Iterable<Principal> principals) {

// Extract the list of groups from the principals. These principals typically contain
// different kind of principals, some groups, some others. The groups are unfortunately vendor
// specific.
List<String> groups = getGroupsFromPrincipals(principals);

// Map the groups to roles. E.g. map "admin" to "administrator". Some servers require this.
return mapGroupsToRoles(groups);
}

private List<String> mapGroupsToRoles(List<String> groups) {

if (oneToOneMapping) {
// There is no mapping used, groups directly represent roles.
return groups;
}

List<String> roles = new ArrayList<>();

for (String group : groups) {
if (groupToRoles.containsKey(group)) {
roles.addAll(groupToRoles.get(group));
}
}

return roles;
}

private boolean tryGlassFish(String contextID, Collection<String> allDeclaredRoles) {

try {
Class<?> SecurityRoleMapperFactoryClass = Class.forName("org.glassfish.deployment.common.SecurityRoleMapperFactory");

Object factoryInstance = Class.forName("org.glassfish.internal.api.Globals")
.getMethod("get", SecurityRoleMapperFactoryClass.getClass())
.invoke(null, SecurityRoleMapperFactoryClass);

Object securityRoleMapperInstance = SecurityRoleMapperFactoryClass.getMethod("getRoleMapper", String.class)
.invoke(factoryInstance, contextID);

@SuppressWarnings("unchecked")
Map<String, Subject> roleToSubjectMap = (Map<String, Subject>) Class.forName("org.glassfish.deployment.common.SecurityRoleMapper")
.getMethod("getRoleToSubjectMapping")
.invoke(securityRoleMapperInstance);

for (String role : allDeclaredRoles) {
if (roleToSubjectMap.containsKey(role)) {
Set<Principal> principals = roleToSubjectMap.get(role).getPrincipals();

List<String> groups = getGroupsFromPrincipals(principals);
for (String group : groups) {
if (!groupToRoles.containsKey(group)) {
groupToRoles.put(group, new ArrayList<String>());
}
groupToRoles.get(group).add(role);
}

if ("**".equals(role) && !groups.isEmpty()) {
// JACC spec 3.2 states:
//
// "For the any "authenticated user role", "**", and unless an application specific mapping has
// been established for this role,
// the provider must ensure that all permissions added to the role are granted to any
// authenticated user."
//
// Here we check for the "unless" part mentioned above. If we're dealing with the "**" role here
// and groups is not
// empty, then there's an application specific mapping and "**" maps only to those groups, not
// to any authenticated user.
anyAuthenticatedUserRoleMapped = true;
}
}
}

return true;

} catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException
| InvocationTargetException e) {
return false;
}
}

private boolean tryWebLogic(String contextID, Collection<String> allDeclaredRoles) {

try {

// See http://docs.oracle.com/cd/E21764_01/apirefs.1111/e13941/weblogic/security/jacc/RoleMapperFactory.html
Class<?> roleMapperFactoryClass = Class.forName("weblogic.security.jacc.RoleMapperFactory");

// RoleMapperFactory implementation class always seems to be the value of what is passed on the commandline
// via the -Dweblogic.security.jacc.RoleMapperFactory.provider option.
// See http://docs.oracle.com/cd/E57014_01/wls/SCPRG/server_prot.htm
Object roleMapperFactoryInstance = roleMapperFactoryClass.getMethod("getRoleMapperFactory")
.invoke(null);

// See http://docs.oracle.com/cd/E21764_01/apirefs.1111/e13941/weblogic/security/jacc/RoleMapperFactory.html#getRoleMapperForContextID(java.lang.String)
Object roleMapperInstance = roleMapperFactoryClass.getMethod("getRoleMapperForContextID", String.class)
.invoke(roleMapperFactoryInstance, contextID);

// This seems really awkward; the Map contains BOTH group names and user names, without ANY way to
// distinguish between the two.
// If a user now has a name that happens to be a role as well, we have an issue :X
@SuppressWarnings("unchecked")
Map<String, String[]> roleToPrincipalNamesMap = (Map<String, String[]>) Class.forName("weblogic.security.jacc.simpleprovider.RoleMapperImpl")
.getMethod("getRolesToPrincipalNames")
.invoke(roleMapperInstance);

for (String role : allDeclaredRoles) {
if (roleToPrincipalNamesMap.containsKey(role)) {

List<String> groupsOrUserNames = asList(roleToPrincipalNamesMap.get(role));

for (String groupOrUserName : groupsOrUserNames) {
// Ignore the fact that the collection also contains user names and hope
// that there are no user names in the application with the same name as a group
if (!groupToRoles.containsKey(groupOrUserName)) {
groupToRoles.put(groupOrUserName, new ArrayList<String>());
}
groupToRoles.get(groupOrUserName).add(role);
}

if ("**".equals(role) && !groupsOrUserNames.isEmpty()) {
// JACC spec 3.2 states: [...]
anyAuthenticatedUserRoleMapped = true;
}
}
}

return true;

} catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException
| InvocationTargetException e) {
return false;
}
}

private boolean tryGeronimo(String contextID, Collection<String> allDeclaredRoles) {
if (geronimoContextToRoleMapping != null) {

if (geronimoContextToRoleMapping.containsKey(contextID)) {
Map<Principal, Set<String>> principalsToRoles = geronimoContextToRoleMapping.get(contextID);

for (Map.Entry<Principal, Set<String>> entry : principalsToRoles.entrySet()) {

// Convert the principal that's used as the key in the Map to a list of zero or more groups.
// (for Geronimo we know that using the default role mapper it's always zero or one group)
for (String group : principalToGroups(entry.getKey())) {
if (!groupToRoles.containsKey(group)) {
groupToRoles.put(group, new ArrayList<String>());
}
groupToRoles.get(group).addAll(entry.getValue());

if (entry.getValue().contains("**")) {
// JACC spec 3.2 states: [...]
anyAuthenticatedUserRoleMapped = true;
}
}
}
}

return true;
}

return false;
}

/**
* Extracts the groups from the vendor specific principals. SAD that this is needed :(
*
* @param principals the vendor specific principals to extract groups from
* @return a list of (unmapped) groups
*/
public List<String> getGroupsFromPrincipals(Iterable<Principal> principals) {
List<String> groups = new ArrayList<>();

for (Principal principal : principals) {
if (principalToGroups(principal, groups)) {
// return value of true means we're done early. This can be used
// when we know there's only 1 principal holding all the groups
return groups;
}
}

return groups;
}

public List<String> principalToGroups(Principal principal) {
List<String> groups = new ArrayList<>();
principalToGroups(principal, groups);
return groups;
}

public boolean principalToGroups(Principal principal, List<String> groups) {
switch (principal.getClass().getName()) {

case "org.glassfish.security.common.Group": // GlassFish
case "org.apache.geronimo.security.realm.providers.GeronimoGroupPrincipal": // Geronimo
case "weblogic.security.principal.WLSGroupImpl": // WebLogic
case "jeus.security.resource.GroupPrincipalImpl": // JEUS
groups.add(principal.getName());
break;

case "org.jboss.security.SimpleGroup": // JBoss
if (principal.getName().equals("Roles") && principal instanceof Group) {
Group rolesGroup = (Group) principal;
for (Principal groupPrincipal : list(rolesGroup.members())) {
groups.add(groupPrincipal.getName());
}

// Should only be one group holding the roles, so can exit the loop
// early
return true;
}
}
return false;
}

}
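The switch on fully qualified class names in principalToGroups deserves a note: since none of the vendor principal classes are on the compile-time classpath, matching on the class name string is the only portable way to recognize them. A minimal standalone sketch of that trick follows; the class names are the real vendor ones quoted above, while the demo principal and class name are made up for illustration:

```java
import java.security.Principal;

public class PrincipalDispatchDemo {

    // Recognize a vendor group principal purely by its class name, without
    // having any vendor class on the classpath
    static boolean isGroupPrincipal(Principal principal) {
        switch (principal.getClass().getName()) {
            case "org.glassfish.security.common.Group":      // GlassFish
            case "weblogic.security.principal.WLSGroupImpl": // WebLogic
                return true;
            default:
                return false;
        }
    }

    public static void main(String[] args) {
        // A lambda suffices here, since Principal's only abstract
        // non-Object method is getName()
        Principal caller = () -> "john";
        System.out.println(isGroupPrincipal(caller)); // prints "false"
    }
}
```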

Server mapping overview

Each server essentially provides the same core data; a role to group mapping, but each server puts this data in a different format. The table below summarizes this:

Role to group format per server:

  1. GlassFish 4.1: Map<String, Subject>; key: role name; value: Subject containing Principals representing groups and users (different class type for each)
  2. WebLogic 12.1.3: Map<String, String[]>; key: role name; value: group and user names (impossible to distinguish which is which)
  3. Geronimo 3.0.1: Map<Principal, Set<String>>; key: Principal representing a group or user (different class type for each); value: role names

As we can see above, GlassFish and WebLogic both use a "role name to groups and users" format. In the case of GlassFish the groups and users are for some reason wrapped in a Subject; a Map<String, Set<Principal>> would perhaps have been more logical here. WebLogic unfortunately uses a String to represent both group and user names, meaning there's no way to know whether a given name represents a group or a user. One can only guess at what the idea behind this design decision must have been.

Geronimo, finally, does the mapping exactly the other way around; it has a "group or user to role names" format. After all the insanity we saw with Geronimo earlier, this actually is a fairly sane mapping.

Conclusion

As we saw, obtaining the container specific role mapping for a universal JACC provider is no easy feat. Finding out how to deploy a JACC provider proved to be surprisingly difficult, and in the case of Geronimo even nearly impossible. It's hard to say what can be done to improve this. Should JACC define an extra standardized property where you can provide the path to a jar file? E.g. something like:

-Djavax.security.jacc.provider.jar=/usr/lib/myprovider.jar
At least for testing, and probably for regular usage as well, it would be extremely convenient if JACC providers could additionally be registered from within an application archive.

Arjan Tijms

The most popular upcoming Java EE 8 technologies according to ZEEF users

I maintain a page on zeef.com about the upcoming Java EE 8 specification. On this page I collect all interesting links about the various sub-specs that will be updated or newly introduced in EE 8. The page has been up since April last year and therefore currently holds almost 10 months worth of data (at the moment 8.7k views, 5k clicks).

While there still aren't any discussions, and thus links, available for quite a few specs, it does give us some early insight into what's popular. At the moment the ranking is as follows:

Position, link, category:

  1. Java EE 8 roadmap [png] (Java EE 8 overall)
  2. JSF MVC discussion (JSF 2.3)
  3. Let's get started on JSF 2.3 (JSF 2.3)
  4. Servlet 4.0 (Servlet 4.0)
  5. Java EE 8 Takes Off! (Java EE 8 overall)
  6. An MVC action-based framework in Java EE 8 (MVC 1.0)
  7. JSF and MVC 1.0, a comparison in code (MVC 1.0)
  8. Let's get started on Servlet 4.0 (Servlet 4.0)
  9. JavaOne Replay: 'Java EE 8 Overview' by Linda DeMichiel (Java EE 8 overall)
  10. A CDI 2 Wish List (CDI 2.0)

If we look at the single highest ranking link for each spec, we'll get to the following global ranking:

  1. Java EE 8 overall
  2. JSF 2.3
  3. Servlet 4.0
  4. MVC 1.0
  5. CDI 2.0
  6. JAX-RS 2.1
  7. JSON-B 1.0
  8. JCache 1.0
  9. JMS 2.1
  10. Java (EE) Configuration
  11. Java EE Security API 1.0
  12. JCA.next
  13. Java EE Management API 2.0
  14. JSON-P 1.1
Interestingly, when we don't look at the single highest clicked link per spec, but aggregate the clicks for all top links, we get a somewhat different ranking, as shown below (the relative positions compared to the first ranking are shown behind each spec):

  1. Java EE 8 overall (=)
  2. MVC 1.0 (+2)
  3. JSF 2.3 (-1)
  4. CDI 2.0 (+1)
  5. Servlet 4.0 (-2)
  6. JCache 1.0 (+2)
  7. Java (EE) Configuration (+3)
  8. JAX-RS 2.1 (-2)
  9. JMS 2.1 (=)
  10. JSON-B 1.0 (-3)
  11. Java EE Security API 1.0 (=)
  12. JCA.next (=)
  13. Java EE Management API 2.0 (=)
  14. JSON-P 1.1 (=)

As we can see, the specs that occupy the top 5 are still the same, but whereas JSF 2.3 was the most popular sub-spec measured by its single highest ranking link, looking at all links together it's now MVC 1.0. The umbrella spec Java EE 8 however is still firmly on top. The bottom segment is even exactly the same, but for most of those specs very little information is available, so a block is basically the same as a link. Specifically for the Java EE Management API and JSON-P 1.1 there's no information available at all beyond a single announcement that the initial JSR was posted.

While the above ranking does give us some data points, we have to take into account that it's not just about the technologies themselves but also about a number of other factors. E.g. the position on the page does influence clicks. The Java EE 8 block is on the top left of the page and will be seen first by most visitors. Then again, CDI 2.0 is at a pretty good position at the top middle of the page, but got relatively few clicks. JSF 2.3 and especially MVC 1.0 are at a less ideal position at the middle left of the page, below the so-called "fold" of many screens (meaning, you have to scroll to see it). Yet, both of them received the most clicks after the umbrella spec.

The observant reader may notice that some key Java EE technologies such as JPA, EJB, Bean Validation and Expression Language are missing. It's likely that these specs will either not be updated at all for Java EE 8, or will only receive a very small update (called a MR or Maintenance Release in the JCP process).

Oracle has indicated on multiple occasions that this is almost entirely due to resource issues. Apparently there just aren't enough resources available to update all specs. Even though there are e.g. dozens of JPA JIRA issues filed, and persistence is arguably one of the most important aspects of the majority of (web) applications, it's just not possible to give it a major update, unfortunately.

Conclusion

In general we can say that for this particular data point the web technologies gather the most interest, while the back end/business and supporting technologies are a little less popular. It will be interesting to see if, and if so how, the numbers change when more information becomes available. The Java EE Management API 2.0 for one seems really unpopular now, but there simply isn't much to measure yet.

Arjan Tijms

The most popular Java EE servers in 2014/2015 according to OmniFaces users

For a little over 3 months (from half of November 2014 to late February 2015) we had a poll on the OmniFaces website asking what AS (Application Server) people used with OmniFaces (people could select multiple servers).

The response was quite overwhelming for our little project; no less than 840 people responded, choosing a grand total of 1108 servers.

The final results are as follows:

Position, server, votes (percentage):

  1. JBoss (AS/EAP/WildFly): 395 (47%)
  2. GlassFish: 206 (24%)
  3. Tomcat/Mojarra/Weld: 186 (22%)
  4. TomEE: 85 (10%)
  5. WebSphere: 55 (6%)
  6. WebLogic: 49 (6%)
  7. Tomcat/MyFaces/OWB: 33 (3%)
  8. Jetty/Mojarra/Weld: 19 (2%)
  9. Geronimo: 13 (1%)
  10. JEUS: 11 (1%)
  11. Liberty: 9 (1%)
  12. Jetty/MyFaces/OWB: 9 (1%)
  13. JOnAS: 8 (0%)
  14. NetWeaver: 8 (0%)
  15. Resin: 6 (0%)
  16. InforSuite: 5 (0%)
  17. WebOTX: 4 (0%)
  18. Interstage AS: 4 (0%)
  19. (u)Cosminexus: 3 (0%)

As can be seen the clear winner here is JBoss, which gets nearly half of all votes and nearly twice the amount of the runner up; GlassFish. Just slightly below GlassFish at number 3 is Tomcat in the specific combination with Mojarra and Weld.

It has to be noted that Mojarra & Weld are typically but a small part of a homegrown Java EE stack, which often also includes things like Hibernate, Hibernate Validator and many more components. For the specific case of OmniFaces however, the Servlet, JSF and CDI implementations are what matter most, so that's why we specifically included these in the poll. Another homegrown stack based on Tomcat, but using MyFaces and OWB (OpenWebBeans) instead, scores significantly lower and ends up at place 7.

We acknowledge that people don't necessarily have to use Mojarra and Weld together, but can also use Mojarra with OWB, or MyFaces with Weld. However, we wanted to somewhat limit the options for homegrown stacks, and a little research beforehand hinted that these were the more popular combinations. In a follow-up poll we may zoom in on this and specifically address homegrown stacks by asking which individual components people use.

An interesting observation is that the entire top 4 consists solely of open source servers, together good for 103% relative to the number of people who voted (remember that one person could vote for multiple servers), or a total of 79% relative to all servers voted for.

While these are certainly impressive numbers, we do have to realize that the voters are self-selected and specifically concern those who use OmniFaces. OmniFaces is an open source library without any form of commercial support. It's perhaps not entirely unreasonable to surmise that environments that favor closed source, commercially supported servers are less likely to use OmniFaces. Taking that into account, the numbers don't necessarily mean that open source servers are indeed used that much in general.

That said, the two big commercial servers WebSphere and WebLogic still got a fair amount of votes; 104 together which is 9% relative to all servers voted for.

The fully open source and once much talked about server Geronimo got remarkably few votes; only 13. The fact that Geronimo has more or less stopped developing its server, and the lack of a visible community (people blogging about it, writing articles, responding to issues, etc.), probably contribute to that.

It's somewhat surprising that IBM's new lightweight AS Liberty got only 9 votes, where the older (and heavier) AS WebSphere got 55 votes. Maybe Liberty indeed isn't used that much yet, or maybe its name recognition isn't that big at the moment. A potential weakness in the poll is that we left out the company names. For well known servers such as JBoss and GlassFish you rarely see people calling them Red Hat JBoss or Oracle GlassFish, but in the case of Liberty it might have been clearer to call it "IBM Liberty (WLP)".

Another small surprise is that the somewhat obscure server JEUS got as many votes as it did; 11 in total. This is perhaps extra surprising since creator TMaxSoft for some unknown reason consistently calls it a WAS instead of an AS, and the poll asked for the latter.

The "Japanese obscure three" (WebOTX, Interstage AS and (u)Cosminexus) are at the bottom of the list, yet at least 3 to 4 persons each claim to be using it with OmniFaces. Since not all of these servers are trivial to obtain, we've never tested OmniFaces on any of them so frankly have no idea how well OmniFaces runs on them. Even though according to this poll it concerns just a small amount of people, we're now quite eager to try out a few of these servers in the future, just to see how things work there.

Conclusion

For the particular community of those who use OmniFaces, we've seen that open source servers in general, and JBoss, GlassFish and TomEE in particular, are the most popular Java EE servers. Tomcat and Jetty were included as well, but aren't officially Java EE (although one can build stacks on them that get close).

A couple of servers, which really are complete Java EE implementations just as well and which one might think take just as much work to build and maintain, only see a very small number of users according to this poll. That's of course not to say that they aren't used much in general; they may just cater to a different audience.

Arjan Tijms


Java EE authorization - JACC revisited part III

This is the third and final part of a series where we revisit JACC after taking an initial look at it last year.

In the first part we mainly looked at various role mapping strategies, while the main topic of the second part was obtaining the container specific role mapper and the container specific way of how a JACC provider is deployed.

In this third and final part we'll be bringing it all together and present a fully working JACC provider for a single application module (e.g. a single war).

Architecture

As explained before, implementing a JACC provider requires implementing three classes:

  1. PolicyConfigurationFactory
  2. PolicyConfiguration
  3. Policy
Zooming in on these, the following is more accurately what's required to be implemented:
  1. A factory that provides an object that collects permissions
  2. A state machine that controls the life-cycle of this permission collector
  3. Linking permissions of multiple modules and utilities
  4. Collecting and managing permissions
  5. Processing permissions after collecting
  6. An "authorization module" using permissions for authorization decisions

In the implementation given before we put all this functionality in the specified three classes. Here we'll split out each item to a separate class (we'll skip linking though, which is only required for EARs where security constraints are defined in multiple modules). This will result in more classes in total, but each class is hopefully easier to understand.
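As a quick reminder from part II: the provider is installed by pointing the two standard JACC system properties at our classes. Assuming the classes live in the test package and the Policy (item 6) is named test.TestPolicy (both names are ours, nothing mandates them), that amounts to:

```
-Djavax.security.jacc.policy.provider=test.TestPolicy
-Djavax.security.jacc.PolicyConfigurationFactory.provider=test.TestPolicyConfigurationFactory
```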

A factory that provides an object that collects permissions

The factory is largely as given earlier, but contains a few fixes and makes use of the state machine that is shown below.


import static javax.security.jacc.PolicyContext.getContextID;

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import javax.security.jacc.PolicyConfiguration;
import javax.security.jacc.PolicyConfigurationFactory;
import javax.security.jacc.PolicyContextException;

public class TestPolicyConfigurationFactory extends PolicyConfigurationFactory {

private static final ConcurrentMap<String, TestPolicyConfigurationStateMachine> configurators = new ConcurrentHashMap<>();

@Override
public PolicyConfiguration getPolicyConfiguration(String contextID, boolean remove) throws PolicyContextException {

if (!configurators.containsKey(contextID)) {
configurators.putIfAbsent(contextID, new TestPolicyConfigurationStateMachine(new TestPolicyConfiguration(contextID)));
}

TestPolicyConfigurationStateMachine testPolicyConfigurationStateMachine = configurators.get(contextID);

if (remove) {
testPolicyConfigurationStateMachine.delete();
}

// According to the contract of getPolicyConfiguration() every PolicyConfiguration returned from here
// should always be transitioned to the OPEN state.
testPolicyConfigurationStateMachine.open();

return testPolicyConfigurationStateMachine;
}

@Override
public boolean inService(String contextID) throws PolicyContextException {
TestPolicyConfigurationStateMachine testPolicyConfigurationStateMachine = configurators.get(contextID);
if (testPolicyConfigurationStateMachine == null) {
return false;
}

return testPolicyConfigurationStateMachine.inService();
}

public static TestPolicyConfiguration getCurrentPolicyConfiguration() {
return (TestPolicyConfiguration) configurators.get(getContextID()).getPolicyConfiguration();
}

}

A state machine that controls the life-cycle of this permission collector

The state machine as required by the spec was left out in the previous example, but we've implemented it now. A possible implementation could have been to actually use a generic state machine that's given some kind of rules file. Indeed, some implementations take this approach. But as the rules are actually not that complicated and there aren't many transitions to speak of, I found that just adding a few checks was a much easier method.

A class such as this would perhaps better be provided by the container, as it seems unlikely that individual PolicyConfigurations would often, if ever, need to do anything specific here.


import static test.TestPolicyConfigurationStateMachine.State.DELETED;
import static test.TestPolicyConfigurationStateMachine.State.INSERVICE;
import static test.TestPolicyConfigurationStateMachine.State.OPEN;

import java.security.Permission;
import java.security.PermissionCollection;

import javax.security.jacc.PolicyConfiguration;
import javax.security.jacc.PolicyConfigurationFactory;
import javax.security.jacc.PolicyContextException;

public class TestPolicyConfigurationStateMachine implements PolicyConfiguration {

public static enum State {
OPEN, INSERVICE, DELETED
};

private State state = OPEN;
private PolicyConfiguration policyConfiguration;


public TestPolicyConfigurationStateMachine(PolicyConfiguration policyConfiguration) {
this.policyConfiguration = policyConfiguration;
}

public PolicyConfiguration getPolicyConfiguration() {
return policyConfiguration;
}


// ### Methods that can be called in any state and don't change state

@Override
public String getContextID() throws PolicyContextException {
return policyConfiguration.getContextID();
}

@Override
public boolean inService() throws PolicyContextException {
return state == INSERVICE;
}


// ### Methods where state should be OPEN and don't change state

@Override
public void addToExcludedPolicy(Permission permission) throws PolicyContextException {
checkStateIs(OPEN);
policyConfiguration.addToExcludedPolicy(permission);
}

@Override
public void addToUncheckedPolicy(Permission permission) throws PolicyContextException {
checkStateIs(OPEN);
policyConfiguration.addToUncheckedPolicy(permission);
}

@Override
public void addToRole(String roleName, Permission permission) throws PolicyContextException {
checkStateIs(OPEN);
policyConfiguration.addToRole(roleName, permission);
}

@Override
public void addToExcludedPolicy(PermissionCollection permissions) throws PolicyContextException {
checkStateIs(OPEN);
policyConfiguration.addToExcludedPolicy(permissions);
}

@Override
public void addToUncheckedPolicy(PermissionCollection permissions) throws PolicyContextException {
checkStateIs(OPEN);
policyConfiguration.addToUncheckedPolicy(permissions);
}

@Override
public void addToRole(String roleName, PermissionCollection permissions) throws PolicyContextException {
checkStateIs(OPEN);
policyConfiguration.addToRole(roleName, permissions);
}

@Override
public void linkConfiguration(PolicyConfiguration link) throws PolicyContextException {
checkStateIs(OPEN);
policyConfiguration.linkConfiguration(link);
}

@Override
public void removeExcludedPolicy() throws PolicyContextException {
checkStateIs(OPEN);
policyConfiguration.removeExcludedPolicy();

}

@Override
public void removeRole(String roleName) throws PolicyContextException {
checkStateIs(OPEN);
policyConfiguration.removeRole(roleName);
}

@Override
public void removeUncheckedPolicy() throws PolicyContextException {
checkStateIs(OPEN);
policyConfiguration.removeUncheckedPolicy();
}


// Methods that change the state
//
// commit() can only be called when the state is OPEN or INSERVICE and next state is always INSERVICE
// delete() can always be called and target state will always be DELETED
// open() can always be called and target state will always be OPEN

@Override
public void commit() throws PolicyContextException {
checkStateIsNot(DELETED);

if (state == OPEN) {
// Not 100% sure; allow double commit, or ignore double commit?
// Here we ignore and only call commit on the actual policyConfiguration
// when the state is OPEN
policyConfiguration.commit();
state = INSERVICE;
}
}

@Override
public void delete() throws PolicyContextException {
policyConfiguration.delete();
state = DELETED;
}

/**
* Transition back to open. This method is required because of the {@link PolicyConfigurationFactory} contract, but is
* mysteriously missing from the interface.
*/
public void open() {
state = OPEN;
}


// ### Private methods

private void checkStateIs(State requiredState) {
if (state != requiredState) {
throw new IllegalStateException("Required state is " + requiredState + " but actual state is " + state);
}
}

private void checkStateIsNot(State undesiredState) {
if (state == undesiredState) {
throw new IllegalStateException("State must not be " + undesiredState + " but actual state is " + state);
}
}

}

Linking permissions of multiple modules and utilities

As mentioned, we did not implement linking (perhaps we'll look at this in a future article), but as it's an interface method we have to put an (empty) implementation somewhere. At the same time JACC curiously requires us to implement a couple of variations on the permission collection methods that don't even seem to be called in practice by any container we looked at. Finally, the PolicyConfiguration interface requires an explicit life-cycle method and an identity method. The life-cycle method is not implemented either, since all life-cycle management is done by the state machine that wraps our actual PolicyConfiguration.

All these "distracting" methods were conveniently shoved into a base class as follows:


import static java.util.Collections.list;

import java.security.Permission;
import java.security.PermissionCollection;

import javax.security.jacc.PolicyConfiguration;
import javax.security.jacc.PolicyContextException;

public abstract class TestPolicyConfigurationBase implements PolicyConfiguration {

private final String contextID;

public TestPolicyConfigurationBase(String contextID) {
this.contextID = contextID;
}

@Override
public String getContextID() throws PolicyContextException {
return contextID;
}

@Override
public void addToExcludedPolicy(PermissionCollection permissions) throws PolicyContextException {
for (Permission permission : list(permissions.elements())) {
addToExcludedPolicy(permission);
}
}

@Override
public void addToUncheckedPolicy(PermissionCollection permissions) throws PolicyContextException {
for (Permission permission : list(permissions.elements())) {
addToUncheckedPolicy(permission);
}
}

@Override
public void addToRole(String roleName, PermissionCollection permissions) throws PolicyContextException {
for (Permission permission : list(permissions.elements())) {
addToRole(roleName, permission);
}
}

@Override
public void linkConfiguration(PolicyConfiguration link) throws PolicyContextException {
}

@Override
public boolean inService() throws PolicyContextException {
// Not used, taken care of by PolicyConfigurationStateMachine
return true;
}

}

Collecting and managing permissions

The next step concerns a base class for a PolicyConfiguration that takes care of the actual collection of permissions, and making those collected permissions available later on. For each permission that the container discovers it calls the appropriate method in this class.

This kind of permission collecting, like the state machine, is actually pretty generic. One wonders if it wouldn't be a great deal simpler if the container just called a single init() method once (or even better, used injection) with a simple data structure containing collections of all permission types. Looking at some container implementations, it indeed looks like the container already has those collections and just loops over them, handing them one by one to our PolicyConfiguration.
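To illustrate, here's a purely hypothetical sketch of what such a single handover could look like. To be clear: this interface does NOT exist in JACC; the name and signature are entirely made up.

```java
import java.security.PermissionCollection;
import java.security.Permissions;
import java.util.HashMap;
import java.util.Map;
import java.util.PropertyPermission;

public class HandoverDemo {

    // Hypothetical (NOT part of JACC): the container hands over all collected
    // permissions in one call, instead of via many per-permission callbacks
    interface PermissionHandover {
        void init(PermissionCollection excluded,
                  PermissionCollection unchecked,
                  Map<String, PermissionCollection> perRole);
    }

    public static void main(String[] args) {
        Permissions unchecked = new Permissions();
        unchecked.add(new PropertyPermission("user.dir", "read"));

        // A PolicyConfiguration-like receiver would simply store the references
        PermissionHandover receiver = (excludedPerms, uncheckedPerms, perRole) ->
            System.out.println("roles handed over: " + perRole.keySet());

        receiver.init(new Permissions(), unchecked, new HashMap<>());
    }
}
```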


import java.security.Permission;
import java.security.Permissions;
import java.util.HashMap;
import java.util.Map;

import javax.security.jacc.PolicyContextException;

public abstract class TestPolicyConfigurationPermissions extends TestPolicyConfigurationBase {

    private Permissions excludedPermissions = new Permissions();
    private Permissions uncheckedPermissions = new Permissions();
    private Map<String, Permissions> perRolePermissions = new HashMap<String, Permissions>();

    public TestPolicyConfigurationPermissions(String contextID) {
        super(contextID);
    }

    @Override
    public void addToExcludedPolicy(Permission permission) throws PolicyContextException {
        excludedPermissions.add(permission);
    }

    @Override
    public void addToUncheckedPolicy(Permission permission) throws PolicyContextException {
        uncheckedPermissions.add(permission);
    }

    @Override
    public void addToRole(String roleName, Permission permission) throws PolicyContextException {
        Permissions permissions = perRolePermissions.get(roleName);
        if (permissions == null) {
            permissions = new Permissions();
            perRolePermissions.put(roleName, permissions);
        }

        permissions.add(permission);
    }

    @Override
    public void delete() throws PolicyContextException {
        removeExcludedPolicy();
        removeUncheckedPolicy();
        perRolePermissions.clear();
    }

    @Override
    public void removeExcludedPolicy() throws PolicyContextException {
        excludedPermissions = new Permissions();
    }

    @Override
    public void removeRole(String roleName) throws PolicyContextException {
        if (perRolePermissions.containsKey(roleName)) {
            perRolePermissions.remove(roleName);
        } else if ("*".equals(roleName)) {
            perRolePermissions.clear();
        }
    }

    @Override
    public void removeUncheckedPolicy() throws PolicyContextException {
        uncheckedPermissions = new Permissions();
    }

    public Permissions getExcludedPermissions() {
        return excludedPermissions;
    }

    public Permissions getUncheckedPermissions() {
        return uncheckedPermissions;
    }

    public Map<String, Permissions> getPerRolePermissions() {
        return perRolePermissions;
    }

}

Processing permissions after collecting

The final part of the PolicyConfiguration concerns a kind of life cycle method again, namely a method that the container calls to indicate all permissions have been handed over to the PolicyConfiguration. In a more modern implementation this might have been an @PostConstruct annotated method.

Contrary to most methods of the PolicyConfiguration that we've seen until now, what happens here is pretty specific to the custom policy provider. Some implementations do a lot of work here and generate a .policy file in the standard Java SE format and write that to disk. This file is then intended to be read back by a standard Java SE Policy implementation.
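For reference, such a generated file would use the standard Java SE policy file syntax. A fragment could look something like the following (the principal class name is made up for the example; the grant entry syntax itself is the standard one):

```
grant principal com.example.RolePrincipal "administrator" {
    permission javax.security.jacc.WebResourcePermission "/adminservlet", "GET,POST";
};
```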

Other implementations use this moment to optimize the collected permissions by transforming them into their own internal data structure.

In our case we keep the permissions as we collected them and just instantiate a role mapper implementation at this point. The full set of roles for which per-role permissions have been collected is passed into the role mapper.


import javax.security.jacc.PolicyContextException;

public class TestPolicyConfiguration extends TestPolicyConfigurationPermissions {

    public TestPolicyConfiguration(String contextID) {
        super(contextID);
    }

    private TestRoleMapper roleMapper;

    @Override
    public void commit() throws PolicyContextException {
        roleMapper = new TestRoleMapper(getContextID(), getPerRolePermissions().keySet());
    }

    public TestRoleMapper getRoleMapper() {
        return roleMapper;
    }

}
The role mapper referenced in the code shown above was presented in part II of this article and didn't change between parts, but for completeness we'll present it here again:

import static java.util.Arrays.asList;
import static java.util.Collections.list;

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.security.Principal;
import java.security.acl.Group;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import javax.security.auth.Subject;

public class TestRoleMapper {

    private static Object geronimoPolicyConfigurationFactoryInstance;
    private static ConcurrentMap<String, Map<Principal, Set<String>>> geronimoContextToRoleMapping;

    private Map<String, List<String>> groupToRoles = new HashMap<>();

    private boolean oneToOneMapping;
    private boolean anyAuthenticatedUserRoleMapped = false;

    public static void onFactoryCreated() {
        tryInitGeronimo();
    }

    private static void tryInitGeronimo() {
        try {
            // Geronimo 3.0.1 contains a protection mechanism to ensure only a Geronimo policy provider is installed.
            // This protection can be beaten by creating an instance of GeronimoPolicyConfigurationFactory once. This instance
            // will statically register itself with an internal Geronimo class.
            geronimoPolicyConfigurationFactoryInstance = Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfigurationFactory").newInstance();
            geronimoContextToRoleMapping = new ConcurrentHashMap<>();
        } catch (Exception e) {
            // Ignore
        }
    }

    public static void onPolicyConfigurationCreated(final String contextID) {

        // Are we dealing with Geronimo?
        if (geronimoPolicyConfigurationFactoryInstance != null) {

            // PrincipalRoleConfiguration

            try {
                Class<?> geronimoPolicyConfigurationClass = Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfiguration");

                Object geronimoPolicyConfigurationProxy = Proxy.newProxyInstance(TestRoleMapper.class.getClassLoader(), new Class[] {geronimoPolicyConfigurationClass}, new InvocationHandler() {

                    @SuppressWarnings("unchecked")
                    @Override
                    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {

                        // Take special action on the following method:

                        // void setPrincipalRoleMapping(Map<Principal, Set<String>> principalRoleMap) throws PolicyContextException;
                        if (method.getName().equals("setPrincipalRoleMapping")) {

                            geronimoContextToRoleMapping.put(contextID, (Map<Principal, Set<String>>) args[0]);

                        }
                        return null;
                    }
                });

                // Set the proxy on the GeronimoPolicyConfigurationFactory so it will call us back later with the role mapping via the following method:

                // public void setPolicyConfiguration(String contextID, GeronimoPolicyConfiguration configuration) {
                Class.forName("org.apache.geronimo.security.jacc.mappingprovider.GeronimoPolicyConfigurationFactory")
                    .getMethod("setPolicyConfiguration", String.class, geronimoPolicyConfigurationClass)
                    .invoke(geronimoPolicyConfigurationFactoryInstance, contextID, geronimoPolicyConfigurationProxy);

            } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
                // Ignore
            }
        }
    }

    public TestRoleMapper(String contextID, Collection<String> allDeclaredRoles) {
        // Initialize the groupToRoles map

        // Try to get a hold of the proprietary role mapper of each known
        // AS. Sad that this is needed :(
        if (tryGlassFish(contextID, allDeclaredRoles)) {
            return;
        } else if (tryWebLogic(contextID, allDeclaredRoles)) {
            return;
        } else if (tryGeronimo(contextID, allDeclaredRoles)) {
            return;
        } else {
            oneToOneMapping = true;
        }
    }

    public List<String> getMappedRolesFromPrincipals(Principal[] principals) {
        return getMappedRolesFromPrincipals(asList(principals));
    }

    public boolean isAnyAuthenticatedUserRoleMapped() {
        return anyAuthenticatedUserRoleMapped;
    }

    public List<String> getMappedRolesFromPrincipals(Iterable<Principal> principals) {

        // Extract the list of groups from the principals. These principals typically contain
        // different kinds of principals, some groups, some others. The groups are unfortunately vendor
        // specific.
        List<String> groups = getGroupsFromPrincipals(principals);

        // Map the groups to roles. E.g. map "admin" to "administrator". Some servers require this.
        return mapGroupsToRoles(groups);
    }

    private List<String> mapGroupsToRoles(List<String> groups) {

        if (oneToOneMapping) {
            // There is no mapping used, groups directly represent roles.
            return groups;
        }

        List<String> roles = new ArrayList<>();

        for (String group : groups) {
            if (groupToRoles.containsKey(group)) {
                roles.addAll(groupToRoles.get(group));
            }
        }

        return roles;
    }

    private boolean tryGlassFish(String contextID, Collection<String> allDeclaredRoles) {

        try {
            Class<?> SecurityRoleMapperFactoryClass = Class.forName("org.glassfish.deployment.common.SecurityRoleMapperFactory");

            Object factoryInstance = Class.forName("org.glassfish.internal.api.Globals")
                .getMethod("get", SecurityRoleMapperFactoryClass.getClass())
                .invoke(null, SecurityRoleMapperFactoryClass);

            Object securityRoleMapperInstance = SecurityRoleMapperFactoryClass.getMethod("getRoleMapper", String.class)
                .invoke(factoryInstance, contextID);

            @SuppressWarnings("unchecked")
            Map<String, Subject> roleToSubjectMap = (Map<String, Subject>) Class.forName("org.glassfish.deployment.common.SecurityRoleMapper")
                .getMethod("getRoleToSubjectMapping")
                .invoke(securityRoleMapperInstance);

            for (String role : allDeclaredRoles) {
                if (roleToSubjectMap.containsKey(role)) {
                    Set<Principal> principals = roleToSubjectMap.get(role).getPrincipals();

                    List<String> groups = getGroupsFromPrincipals(principals);
                    for (String group : groups) {
                        if (!groupToRoles.containsKey(group)) {
                            groupToRoles.put(group, new ArrayList<String>());
                        }
                        groupToRoles.get(group).add(role);
                    }

                    if ("**".equals(role) && !groups.isEmpty()) {
                        // JACC spec 3.2 states:
                        //
                        // "For the any "authenticated user role", "**", and unless an application specific mapping has
                        // been established for this role,
                        // the provider must ensure that all permissions added to the role are granted to any
                        // authenticated user."
                        //
                        // Here we check for the "unless" part mentioned above. If we're dealing with the "**" role here
                        // and groups is not empty, then there's an application specific mapping and "**" maps only to
                        // those groups, not to any authenticated user.
                        anyAuthenticatedUserRoleMapped = true;
                    }
                }
            }

            return true;

        } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException
                | InvocationTargetException e) {
            return false;
        }
    }

    private boolean tryWebLogic(String contextID, Collection<String> allDeclaredRoles) {

        try {

            // See http://docs.oracle.com/cd/E21764_01/apirefs.1111/e13941/weblogic/security/jacc/RoleMapperFactory.html
            Class<?> roleMapperFactoryClass = Class.forName("weblogic.security.jacc.RoleMapperFactory");

            // The RoleMapperFactory implementation class always seems to be the value of what is passed on the command line
            // via the -Dweblogic.security.jacc.RoleMapperFactory.provider option.
            // See http://docs.oracle.com/cd/E57014_01/wls/SCPRG/server_prot.htm
            Object roleMapperFactoryInstance = roleMapperFactoryClass.getMethod("getRoleMapperFactory")
                .invoke(null);

            // See http://docs.oracle.com/cd/E21764_01/apirefs.1111/e13941/weblogic/security/jacc/RoleMapperFactory.html#getRoleMapperForContextID(java.lang.String)
            Object roleMapperInstance = roleMapperFactoryClass.getMethod("getRoleMapperForContextID", String.class)
                .invoke(roleMapperFactoryInstance, contextID);

            // This seems really awkward; the Map contains BOTH group names and user names, without ANY way to
            // distinguish between the two.
            // If a user now has a name that happens to be a role as well, we have an issue :X
            @SuppressWarnings("unchecked")
            Map<String, String[]> roleToPrincipalNamesMap = (Map<String, String[]>) Class.forName("weblogic.security.jacc.simpleprovider.RoleMapperImpl")
                .getMethod("getRolesToPrincipalNames")
                .invoke(roleMapperInstance);

            for (String role : allDeclaredRoles) {
                if (roleToPrincipalNamesMap.containsKey(role)) {

                    List<String> groupsOrUserNames = asList(roleToPrincipalNamesMap.get(role));

                    for (String groupOrUserName : groupsOrUserNames) {
                        // Ignore the fact that the collection also contains user names and hope
                        // that there are no user names in the application with the same name as a group
                        if (!groupToRoles.containsKey(groupOrUserName)) {
                            groupToRoles.put(groupOrUserName, new ArrayList<String>());
                        }
                        groupToRoles.get(groupOrUserName).add(role);
                    }

                    if ("**".equals(role) && !groupsOrUserNames.isEmpty()) {
                        // JACC spec 3.2 states: [...]
                        anyAuthenticatedUserRoleMapped = true;
                    }
                }
            }

            return true;

        } catch (ClassNotFoundException | NoSuchMethodException | SecurityException | IllegalAccessException | IllegalArgumentException
                | InvocationTargetException e) {
            return false;
        }
    }

    private boolean tryGeronimo(String contextID, Collection<String> allDeclaredRoles) {
        if (geronimoContextToRoleMapping != null) {

            if (geronimoContextToRoleMapping.containsKey(contextID)) {
                Map<Principal, Set<String>> principalsToRoles = geronimoContextToRoleMapping.get(contextID);

                for (Map.Entry<Principal, Set<String>> entry : principalsToRoles.entrySet()) {

                    // Convert the principal that's used as the key in the Map to a list of zero or more groups.
                    // (for Geronimo we know that using the default role mapper it's always zero or one group)
                    for (String group : principalToGroups(entry.getKey())) {
                        if (!groupToRoles.containsKey(group)) {
                            groupToRoles.put(group, new ArrayList<String>());
                        }
                        groupToRoles.get(group).addAll(entry.getValue());

                        if (entry.getValue().contains("**")) {
                            // JACC spec 3.2 states: [...]
                            anyAuthenticatedUserRoleMapped = true;
                        }
                    }
                }
            }

            return true;
        }

        return false;
    }

    /**
     * Extracts the groups from the vendor specific principals. SAD that this is needed :(
     *
     * @param principals the principals associated with the current user
     * @return the list of groups the current user is in
     */
    public List<String> getGroupsFromPrincipals(Iterable<Principal> principals) {
        List<String> groups = new ArrayList<>();

        for (Principal principal : principals) {
            if (principalToGroups(principal, groups)) {
                // A return value of true means we're done early. This can be used
                // when we know there's only 1 principal holding all the groups
                return groups;
            }
        }

        return groups;
    }

    public List<String> principalToGroups(Principal principal) {
        List<String> groups = new ArrayList<>();
        principalToGroups(principal, groups);
        return groups;
    }

    public boolean principalToGroups(Principal principal, List<String> groups) {
        switch (principal.getClass().getName()) {

            case "org.glassfish.security.common.Group": // GlassFish
            case "org.apache.geronimo.security.realm.providers.GeronimoGroupPrincipal": // Geronimo
            case "weblogic.security.principal.WLSGroupImpl": // WebLogic
            case "jeus.security.resource.GroupPrincipalImpl": // JEUS
                groups.add(principal.getName());
                break;

            case "org.jboss.security.SimpleGroup": // JBoss
                if (principal.getName().equals("Roles") && principal instanceof Group) {
                    Group rolesGroup = (Group) principal;
                    for (Principal groupPrincipal : list(rolesGroup.members())) {
                        groups.add(groupPrincipal.getName());
                    }

                    // There should only be one group holding the roles, so we can exit the loop
                    // early
                    return true;
                }
        }
        return false;
    }

}

An "authorization module" using permissions for authorization decisions

At long last we present the actual "authorization module" (called a Policy in Java SE and JACC). Compared to the version we presented before, this one delegates extracting the list of roles from the principals that are associated with the authenticated user to the role mapper we showed above. In addition we also added the case where we check for the so-called "any authenticated user", where it doesn't matter which roles a user has; only whether the user is authenticated counts.

This authorization module implements the default authorization algorithm defined by the Servlet and JACC specs, which does the following checks in order:

  1. Is permission excluded? (nobody can access those)
  2. Is permission unchecked? (everyone can access those)
  3. Is permission granted to every authenticated user?
  4. Is permission granted to any of the roles the current user is in?
  5. Is permission granted by the previous (if any) authorization module?

The idea of a custom authorization module is often to do something specific authorization-wise, so this would be the most likely place to put custom code. In fact, if only this particular class could be injected with the permissions that now have to be collected by our own classes as shown above, then JACC would be massively simplified in one fell swoop.

In that case only this class would have to be implemented. Even better would be if the default algorithm was also provided in a portable way. With that we could potentially implement only the parts that are really different for our custom implementation and leave the rest to the default implementation.


import static java.util.Arrays.asList;
import static java.util.Collections.list;
import static test.TestPolicyConfigurationFactory.getCurrentPolicyConfiguration;

import java.security.CodeSource;
import java.security.Permission;
import java.security.PermissionCollection;
import java.security.Permissions;
import java.security.Policy;
import java.security.Principal;
import java.security.ProtectionDomain;
import java.util.List;
import java.util.Map;

public class TestPolicy extends Policy {

    private Policy previousPolicy = Policy.getPolicy();

    @Override
    public boolean implies(ProtectionDomain domain, Permission permission) {

        TestPolicyConfiguration policyConfiguration = getCurrentPolicyConfiguration();
        TestRoleMapper roleMapper = policyConfiguration.getRoleMapper();

        if (isExcluded(policyConfiguration.getExcludedPermissions(), permission)) {
            // Excluded permissions cannot be accessed by anyone
            return false;
        }

        if (isUnchecked(policyConfiguration.getUncheckedPermissions(), permission)) {
            // Unchecked permissions are free to be accessed by everyone
            return true;
        }

        List<Principal> currentUserPrincipals = asList(domain.getPrincipals());

        if (!roleMapper.isAnyAuthenticatedUserRoleMapped() && !currentUserPrincipals.isEmpty()) {
            // The "any authenticated user" role is not mapped, so it's available to anyone, and the current
            // user is assumed to be authenticated (we assume that an unauthenticated user doesn't have any
            // principals, whatever they are)
            if (hasAccessViaRole(policyConfiguration.getPerRolePermissions(), "**", permission)) {
                // Access is granted purely based on the user being authenticated (the actual roles, if any, the user has are not important)
                return true;
            }
        }

        if (hasAccessViaRoles(policyConfiguration.getPerRolePermissions(), roleMapper.getMappedRolesFromPrincipals(currentUserPrincipals), permission)) {
            // Access is granted via a role. Note that if this returns false it doesn't mean the permission is not
            // granted. A role can only grant, not take away permissions.
            return true;
        }

        // Access not granted via any of the JACC maintained Permissions. Check the previous (default) policy.
        // Note: this is likely to be called in case it concerns a Java SE type permission.
        // TODO: Should we not distinguish between JACC and Java SE Permissions at the start of this method? It seems
        // very unlikely that JACC would ever say anything about a Java SE Permission, or that the Java SE
        // policy says anything about a JACC Permission. Why are these two systems even combined in the first place?
        if (previousPolicy != null) {
            return previousPolicy.implies(domain, permission);
        }

        return false;
    }

    @Override
    public PermissionCollection getPermissions(ProtectionDomain domain) {

        Permissions permissions = new Permissions();

        TestPolicyConfiguration policyConfiguration = getCurrentPolicyConfiguration();
        TestRoleMapper roleMapper = policyConfiguration.getRoleMapper();

        Permissions excludedPermissions = policyConfiguration.getExcludedPermissions();

        // First get all permissions from the previous (original) policy
        if (previousPolicy != null) {
            collectPermissions(previousPolicy.getPermissions(domain), permissions, excludedPermissions);
        }

        // If there are any static permissions, add those next
        if (domain.getPermissions() != null) {
            collectPermissions(domain.getPermissions(), permissions, excludedPermissions);
        }

        // Thirdly, get all unchecked permissions
        collectPermissions(policyConfiguration.getUncheckedPermissions(), permissions, excludedPermissions);

        // Finally get the permissions for each role *that the current user has*
        //
        // Note that the principals that are put into the ProtectionDomain object are those from the current user.
        // (for a server application, passing in a Subject would have been more logical, but the Policy class was
        // made for Java SE with code-level security in mind)
        Map<String, Permissions> perRolePermissions = policyConfiguration.getPerRolePermissions();
        for (String role : roleMapper.getMappedRolesFromPrincipals(domain.getPrincipals())) {
            if (perRolePermissions.containsKey(role)) {
                collectPermissions(perRolePermissions.get(role), permissions, excludedPermissions);
            }
        }

        return permissions;
    }

    @Override
    public PermissionCollection getPermissions(CodeSource codesource) {

        Permissions permissions = new Permissions();

        TestPolicyConfigurationPermissions policyConfiguration = getCurrentPolicyConfiguration();
        Permissions excludedPermissions = policyConfiguration.getExcludedPermissions();

        // First get all permissions from the previous (original) policy
        if (previousPolicy != null) {
            collectPermissions(previousPolicy.getPermissions(codesource), permissions, excludedPermissions);
        }

        // Secondly get the static permissions. Note that there are only two sources possible here; without
        // knowing the roles of the current user we can't check the per role permissions.
        collectPermissions(policyConfiguration.getUncheckedPermissions(), permissions, excludedPermissions);

        return permissions;
    }

    private boolean isExcluded(Permissions excludedPermissions, Permission permission) {
        if (excludedPermissions.implies(permission)) {
            return true;
        }

        for (Permission excludedPermission : list(excludedPermissions.elements())) {
            if (permission.implies(excludedPermission)) {
                return true;
            }
        }

        return false;
    }

    private boolean isUnchecked(Permissions uncheckedPermissions, Permission permission) {
        return uncheckedPermissions.implies(permission);
    }

    private boolean hasAccessViaRoles(Map<String, Permissions> perRolePermissions, List<String> roles, Permission permission) {
        for (String role : roles) {
            if (hasAccessViaRole(perRolePermissions, role, permission)) {
                return true;
            }
        }

        return false;
    }

    private boolean hasAccessViaRole(Map<String, Permissions> perRolePermissions, String role, Permission permission) {
        return perRolePermissions.containsKey(role) && perRolePermissions.get(role).implies(permission);
    }

    /**
     * Copies permissions from a source into a target, skipping any permission that's excluded.
     *
     * @param sourcePermissions the permissions to copy from
     * @param targetPermissions the collection to copy into
     * @param excludedPermissions the permissions that must not be copied
     */
    private void collectPermissions(PermissionCollection sourcePermissions, PermissionCollection targetPermissions, Permissions excludedPermissions) {

        boolean hasExcludedPermissions = excludedPermissions.elements().hasMoreElements();

        for (Permission permission : list(sourcePermissions.elements())) {
            if (!hasExcludedPermissions || !isExcluded(excludedPermissions, permission)) {
                targetPermissions.add(permission);
            }
        }
    }

}

Conclusion

This concludes our three parter on revisiting JACC. In this third and final part we have looked at an actual Policy Provider. We have broken up the implementation into several parts that each focus on a particular responsibility. While the Policy Provider is complete and working (tested on GlassFish, WebLogic and Geronimo), we did not implement module linking yet, so the caveat is that it only works within a single war.

To implement another custom Policy Provider many of these parts can probably be re-used as-is, and likely only the Policy itself has to be customized.

Arjan Tijms

How Java EE translates web.xml constraints to Permission instances

It's a well known fact that in Java EE security one can specify security constraints in web.xml. It's perhaps a little lesser known fact that in full profile Java EE servers those constraints are translated by the container to instances of the Permission class. The specifications responsible for this are Servlet and JACC. This article shows a simple example of what this translation looks like.

Web.xml constraints

We're putting the following constraints in web.xml:


<security-constraint>
    <web-resource-collection>
        <web-resource-name>Forbidden Pattern</web-resource-name>
        <url-pattern>/forbidden/*</url-pattern>
    </web-resource-collection>
    <auth-constraint/>
</security-constraint>

<security-constraint>
    <web-resource-collection>
        <web-resource-name>Protected Pattern</web-resource-name>
        <url-pattern>/protected/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>architect</role-name>
        <role-name>administrator</role-name>
    </auth-constraint>
</security-constraint>

<security-constraint>
    <web-resource-collection>
        <web-resource-name>Protected Exact</web-resource-name>
        <url-pattern>/adminservlet</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>administrator</role-name>
    </auth-constraint>
</security-constraint>

<security-role>
    <role-name>architect</role-name>
</security-role>
<security-role>
    <role-name>administrator</role-name>
</security-role>

Java Permissions

Given the above shown constraints in web.xml the following WebResourcePermission instances will be generated, in 3 collections as shown below. For brevity only WebResourcePermission is shown. The other types are omitted.

Excluded

  • WebResourcePermission "/forbidden/*"

Unchecked

  • WebResourcePermission "/:/adminservlet:/protected/*:/forbidden/*"

Per Role

  • architect
    • WebResourcePermission "/protected/*"
  • administrator
    • WebResourcePermission "/protected/*"
    • WebResourcePermission "/adminservlet"

Below is a very short explanation of the different permission types normally used for the translation. The interested reader is advised to study the Javadoc of each type for more detailed information.


Java EE will generate 3 types of Permission instances when translating constraints expressed in web.xml: WebRoleRefPermission, WebUserDataPermission and WebResourcePermission.

WebRoleRefPermission

A web role ref permission is about mapping Servlet local roles to application roles. Especially with MVC frameworks like JSF and the upcoming JAX-RS based MVC 1.0 the use for this is perhaps questionable, as there's only one Servlet in that case that serves many different views.

WebUserDataPermission

A web user data permission is about the transport level guarantees for accessing resources (practically this almost always means HTTP vs HTTPS). This can be specified using the <user-data-constraint> element in web.xml, which we have omitted here.

WebResourcePermission

The web resource permission is about the actual access to a resource. This can be specified using the <web-resource-collection> element in web.xml, which we have used in the example above.

So let's take a look at what's going on here.

Our first web.xml constraint shown above defined so-called "excluded access", which means that nobody can access the resources defined by that pattern. In XML this is accomplished by simply omitting the auth-constraint element. This was translated to Java code by means of putting a WebResourcePermission with the pattern "/forbidden/*" in the "Excluded" collection. Although there are some differences, this is a reasonably direct translation from the XML form.

The permission shown above for the "Unchecked" collection concerns the so-called "unchecked access", which means that everyone can access those resources. This one wasn't explicitly defined in XML, although XML does have syntax for explicitly defining unchecked access. The permission shown here concerns the Servlet default mapping (a fallback for everything that doesn't match any other declared Servlet pattern).

The pattern used here may need some further explanation. In the pattern the colon (:) is a separator of a list of patterns. The first pattern is the one we grant access to, while the rest of the patterns are the exceptions to that. So unchecked access for "/:/adminservlet:/protected/*:/forbidden/*" means access to everything (e.g. /foo/readme.text) is granted to everyone, with the exception of "/adminservlet" and paths that start with either "/protected" or "/forbidden". In this case the translation from the XML form to Java is not as direct.
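To make that reading concrete, the semantics of such a qualified pattern can be mimicked with a few lines of plain Java. This is only a toy re-implementation for illustration; the real matching rules live inside WebResourcePermission, and the sketch below covers just the default "/" pattern plus the exact and path-prefix exception forms used in the example:

```java
import java.util.Arrays;

// Toy matcher for a qualified URL pattern such as
// "/:/adminservlet:/protected/*:/forbidden/*": the first colon-separated
// element is the pattern access is granted to, the remaining elements
// are the exceptions to it.
class QualifiedPattern {

    private final String pattern;
    private final String[] exceptions;

    QualifiedPattern(String qualified) {
        String[] parts = qualified.split(":");
        this.pattern = parts[0];
        this.exceptions = Arrays.copyOfRange(parts, 1, parts.length);
    }

    boolean matches(String uri) {
        for (String exception : exceptions) {
            if (matchesSingle(exception, uri)) {
                return false; // uri falls under one of the exceptions
            }
        }
        return matchesSingle(pattern, uri);
    }

    private boolean matchesSingle(String pattern, String uri) {
        if (pattern.equals("/")) {
            return true; // the default mapping matches everything
        }
        if (pattern.endsWith("/*")) {
            // Path-prefix mapping, e.g. "/protected/*"
            String prefix = pattern.substring(0, pattern.length() - 2);
            return uri.equals(prefix) || uri.startsWith(prefix + "/");
        }
        return pattern.equals(uri); // exact mapping, e.g. "/adminservlet"
    }
}
```

With this, `new QualifiedPattern("/:/adminservlet:/protected/*:/forbidden/*").matches("/foo/readme.text")` yields true, while "/adminservlet" and anything under "/protected" or "/forbidden" yield false.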

The next two constraints that we showed in web.xml concerned "role-based access", which means that only callers who are in the associated roles can access resources defined by those patterns. In XML this is accomplished by putting one or more patterns together with one or more roles in a security constraint. This is translated to Java by generating {role, permission} pairs for each unique combination that appears in the XML file. It's typically most convenient then to put these entries in a map, with the role as the key and the permission as the value, as was done above, but this is not strictly necessary. Here we see that the translation doesn't directly reflect the XML structure, but the link to the XML version can surely be seen in the translation.
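That grouping of {role, permission} pairs into a map can be sketched with plain collections; here simple strings stand in for the actual Permission instances, and the role and pattern values are the ones from the example above:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class RolePermissionGrouping {

    // Group {role, pattern} pairs by role, mirroring how a provider
    // typically stores the per-role permissions it's handed one by one.
    static Map<String, List<String>> group(String[][] rolePatternPairs) {
        Map<String, List<String>> perRole = new LinkedHashMap<>();
        for (String[] pair : rolePatternPairs) {
            perRole.computeIfAbsent(pair[0], role -> new ArrayList<>()).add(pair[1]);
        }
        return perRole;
    }
}
```

Feeding it the pairs {architect, /protected/*}, {administrator, /protected/*} and {administrator, /adminservlet} reproduces exactly the "Per Role" listing shown earlier.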

Obtaining the generated Permissions

There is unfortunately no API available in Java EE to directly obtain the generated Permission instances. Instead, one has to install a JACC provider that is called by the container for each individual Permission that is generated. A ready-to-use provider was given in a previous article, but as we saw before such providers are not entirely trivial to install.

Conclusion

We've shown a few simple web.xml based security constraints and saw how they translated to Java Permission instances.

There are quite a few things that we did not look at, like the option to specify one or more HTTP methods (GET, POST, etc.) with or without the deny uncovered methods feature, the option to specify a transport level guarantee, the "any authenticated user" role, combinations of overlapping patterns with different constraints, etc. This was done intentionally to keep the example simple and to focus on the main concept of translation without going into too many details. In a future article we may take a look at some more advanced cases.

Arjan Tijms

Testing JASPIC 1.1 on IBM Liberty EE 7 beta

In this article we take a look at the latest April 2015 beta version of IBM's Liberty server, and specifically look at how well it implements the Java EE authentication standard JASPIC.

The initial version of Liberty implemented only a seemingly random assortment of Java EE APIs, but the second version that we looked at last year officially implemented the (Java EE 6) web profile. This year however the third incarnation is well on target to implement the full profile of Java EE 7.

This means IBM's newer and much lighter Liberty (abbreviated WLP) will be a true alternative for the older and incredibly obese WebSphere (abbreviated WAS) where it purely concerns the Java EE standard APIs. From having by far the most heavyweight server on the market (weighing in at well over 2GB), IBM can now offer a server that's as light and small as various offerings from its competition.

For this article we'll be specifically looking at how well JASPIC works on Liberty. Please take into account that the EE 7 version of Liberty is still a beta, so this only concerns an early look. Bugs and missing functionality are basically expected.

We started by downloading Liberty from the beta download page. The download page initially looked a little confusing, but it's constantly improving and by the time that this article was written it was already a lot clearer. Just like the GlassFish download page, IBM now offers a very straightforward Java EE Web profile download and a Java EE full profile one.

For old time WebSphere users who were used to installers that were themselves 200MB in size and only ran on specific operating systems, and that then happily downloaded 2GB of data representing the actual server, it beggars belief that Liberty is now just an archive that you unzip. While the last release of Liberty already greatly improved matters by having an executable jar as download, effectively a self-extracting archive, nothing beats the ultimate simplicity of an "install" that solely consists of an archive that you unzip. This represents the pure zen of installing, shaving every non-essential component off and leaving just the bare essentials. GlassFish has an unzip install, JBoss has it, TomEE and Tomcat have it, even the JDK has it these days, and now finally IBM has one too :)

We downloaded the Java EE 7 archive, wlp-beta-javaee7-2015.4.0.0.zip, weighing in at a very reasonable 100MB, which is about the same size as the latest beta of JBoss (WildFly 9.0 beta2). Like last year there is no required registration or anything. A license has to be accepted (just like e.g. the JDK), but that's it. The experience up to this point is as perfect as can be.

A small disappointment is that the download page lists a weird extra step that supposedly needs to be performed. It says something called a "server" needs to be created after the unzip, but luckily it appeared this is not the case. After unzipping, Liberty can be started directly on OS X by pointing Eclipse to the directory where Liberty was extracted, or by typing the command "./server start" from the "./bin" directory where Liberty was extracted. Why this unnecessary step is listed is not clear. Hopefully it's just a leftover from some early alpha version. On Linux (we tried Ubuntu 14.10) there's an extra bug: the file permissions of the unzipped archive are wrong, and a "chmod +x ./bin/server" is needed to get Liberty to start using either Eclipse or the command line.

(UPDATE: IBM responded right away by removing the redundant step mentioned by the download page)

A bigger disappointment is that the Java EE full profile archive is by default configured to only be a JSP/Servlet container. Java EE 7 has to be "activated" by manually editing a vendor specific XML file called "server.xml" and finding out that in its "featureManager" section one needs to type <feature>javaee-7.0</feature>. For some reason or another this doesn't include JASPIC and JACC. Even though they really are part of Java EE (7), they have to be activated separately. In the case of JASPIC this means adding the following as well: <feature>jaspic-1.1</feature>. Hopefully these two issues are just packaging errors and will be resolved in the next beta or at least in the final version.
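To make this concrete, a minimal featureManager section that activates the full platform plus JASPIC looks as follows (just a sketch based on the feature names above; any additional features a particular setup needs would be listed alongside):

```xml
<featureManager>
    <!-- Activates the full Java EE 7 platform -->
    <feature>javaee-7.0</feature>
    <!-- JASPIC is not implied by javaee-7.0 and has to be activated separately -->
    <feature>jaspic-1.1</feature>
</featureManager>
```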

On to trying out JASPIC, we unfortunately learned that by default JASPIC doesn't really work as it should. Liberty inherited a spec compliance issue from WebSphere 8.x where the runtime insists that usernames and groups that an auth module wishes to set as the authenticated identity also exist in an IBM specific server internal identity store that IBM calls "user registry". This is however not the intent of JASPIC, and existing JASPIC modules will not take this somewhat strange requirement into account, which means they will therefore not work on WebSphere and now Liberty. We'll be looking at a hack to work around this below.

Another issue is that Liberty still mandates so-called group to role mapping, even when such mapping is not needed. Some other servers mandate this by default as well, but unlike those Liberty currently offers no option to switch this requirement off; there is however an open issue for this in IBM's tracker. Another problem is that the group to role mapping file can only be supplied by the application when using an EAR archive. With lighter weight applications a war archive is often the initial choice, but when security is needed and you can't or don't want to pollute the server itself with (meaningless) application specific data, then the current beta of Liberty forces the EAR archive upon you. Here too however there's already an issue filed to remedy this.

One way to work around the spec compliance issue mentioned above is by implementing a custom user registry that effectively does nothing. IBM has some documentation on how to do this, but unfortunately it doesn't give exact instructions and merely outlines the process. The structure is also not entirely logical.

For instance, step 1 says "Implement the custom user registry (FileRegistrysample.java)". But in what kind of project? Where should the dependencies come from? Then step 2 says: "Creating an OSGi bundle with Bundle Activation. [...] Import the FileRegistrysample.java file". Why not create the bundle project right away and then create the mentioned file inside that bundle project? Step 4 says "Register the services", but gives no information on how to do this. Which services are we even talking about, and should they be put in an XML file or so, and if so which one and what syntax? Step 3.4 asks to install the feature into Liberty using Eclipse (this works very nicely), but then steps 4 and 5 are totally redundant, since they explain another, more manual, method to install the feature.

Even though it's outdated, IBM's general documentation on how to create a Liberty feature is much clearer. With those two articles side by side and cross checking them with the source code of the example used in the first article, I was able to build a working NOOP user registry. I had to Google for the example's source code though, as the link in the article resulted in a 404. A good thing to realize is that the .esa file that's contained in the example .jar is itself also an archive that once unzipped contains the actual source code. Probably a trivial bit of knowledge for OSGi users, but I, being an OSGi n00b, completely overlooked this and spent quite some time looking for the .java files.
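Since an .esa (subsystem archive) is simply a zip file, its contents (including the source files in this case) can be listed with any zip tool, or with a few lines of plain java.util.zip. A minimal sketch (the class and method names here are just illustrative):

```java
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class EsaLister {

    // An .esa file is just a zip archive, so ZipFile can read it directly
    static List<String> listEntries(String path) throws Exception {
        List<String> names = new ArrayList<>();
        try (ZipFile esa = new ZipFile(path)) {
            for (Enumeration<? extends ZipEntry> e = esa.entries(); e.hasMoreElements();) {
                names.add(e.nextElement().getName());
            }
        }
        return names;
    }

    public static void main(String[] args) throws Exception {
        for (String name : listEntries(args[0])) {
            System.out.println(name);
        }
    }
}
```

Running this against the example's .esa would have revealed the .java files right away.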

The source code of the actual user registry is as follows:


package noopregistrybundle;

import static java.util.Collections.emptyList;

import java.rmi.RemoteException;
import java.security.cert.X509Certificate;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

import com.ibm.websphere.security.CertificateMapFailedException;
import com.ibm.websphere.security.CertificateMapNotSupportedException;
import com.ibm.websphere.security.CustomRegistryException;
import com.ibm.websphere.security.EntryNotFoundException;
import com.ibm.websphere.security.NotImplementedException;
import com.ibm.websphere.security.PasswordCheckFailedException;
import com.ibm.websphere.security.Result;
import com.ibm.websphere.security.UserRegistry;
import com.ibm.websphere.security.cred.WSCredential;

public class NoopUserRegistry implements UserRegistry {

    @Override
    public void initialize(Properties props) throws CustomRegistryException, RemoteException {
    }

    @Override
    public String checkPassword(String userSecurityName, String password) throws PasswordCheckFailedException, CustomRegistryException, RemoteException {
        return userSecurityName;
    }

    @Override
    public String mapCertificate(X509Certificate[] certs) throws CertificateMapNotSupportedException, CertificateMapFailedException, CustomRegistryException, RemoteException {
        try {
            for (X509Certificate cert : certs) {
                for (Rdn rdn : new LdapName(cert.getSubjectX500Principal().getName()).getRdns()) {
                    if (rdn.getType().equalsIgnoreCase("CN")) {
                        return rdn.getValue().toString();
                    }
                }
            }
        } catch (InvalidNameException e) {
        }

        throw new CertificateMapFailedException("No valid CN in any certificate");
    }

    @Override
    public String getRealm() throws CustomRegistryException, RemoteException {
        return "customRealm"; // documentation says can be null, but should really be non-null!
    }

    @Override
    public Result getUsers(String pattern, int limit) throws CustomRegistryException, RemoteException {
        return emptyResult();
    }

    @Override
    public String getUserDisplayName(String userSecurityName) throws EntryNotFoundException, CustomRegistryException, RemoteException {
        return userSecurityName;
    }

    @Override
    public String getUniqueUserId(String userSecurityName) throws EntryNotFoundException, CustomRegistryException, RemoteException {
        return userSecurityName;
    }

    @Override
    public String getUserSecurityName(String uniqueUserId) throws EntryNotFoundException, CustomRegistryException, RemoteException {
        return uniqueUserId;
    }

    @Override
    public boolean isValidUser(String userSecurityName) throws CustomRegistryException, RemoteException {
        return true;
    }

    @Override
    public Result getGroups(String pattern, int limit) throws CustomRegistryException, RemoteException {
        return emptyResult();
    }

    @Override
    public String getGroupDisplayName(String groupSecurityName) throws EntryNotFoundException, CustomRegistryException, RemoteException {
        return groupSecurityName;
    }

    @Override
    public String getUniqueGroupId(String groupSecurityName) throws EntryNotFoundException, CustomRegistryException, RemoteException {
        return groupSecurityName;
    }

    @Override
    public List<String> getUniqueGroupIds(String uniqueUserId) throws EntryNotFoundException, CustomRegistryException, RemoteException {
        return new ArrayList<>(); // Apparently needs to be mutable
    }

    @Override
    public String getGroupSecurityName(String uniqueGroupId) throws EntryNotFoundException, CustomRegistryException, RemoteException {
        return uniqueGroupId;
    }

    @Override
    public boolean isValidGroup(String groupSecurityName) throws CustomRegistryException, RemoteException {
        return true;
    }

    @Override
    public List<String> getGroupsForUser(String groupSecurityName) throws EntryNotFoundException, CustomRegistryException, RemoteException {
        return emptyList();
    }

    @Override
    public Result getUsersForGroup(String paramString, int paramInt) throws NotImplementedException, EntryNotFoundException, CustomRegistryException, RemoteException {
        return emptyResult();
    }

    @Override
    public WSCredential createCredential(String userSecurityName) throws NotImplementedException, EntryNotFoundException, CustomRegistryException, RemoteException {
        return null;
    }

    private Result emptyResult() {
        Result result = new Result();
        result.setList(emptyList());
        return result;
    }
}

There were two small caveats here. The first is that the documentation for getRealm says it may return null and that "customRealm" will then be used as the default. But when you actually return null, authentication will fail with many null pointer exceptions appearing in the log. The second is that getUniqueGroupIds() has to return a mutable collection. If Collections#emptyList is returned, it will throw an exception saying that no element can be inserted. Likely IBM merges the list of groups this method returns with those that are being provided by the JASPIC auth module, and directly uses this collection for that merging.
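The second caveat is easy to demonstrate in isolation: Collections#emptyList returns an immutable list, so any attempt by the container to merge extra groups into it fails. A minimal sketch (plain Java, not Liberty specific):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class MutableListDemo {
    public static void main(String[] args) {
        List<String> immutable = Collections.emptyList();
        try {
            // Roughly what the container does when merging in the SAM's groups
            immutable.add("architect");
            System.out.println("add on emptyList succeeded");
        } catch (UnsupportedOperationException e) {
            System.out.println("add on emptyList throws UnsupportedOperationException");
        }

        // What getUniqueGroupIds() should return instead: a freshly allocated mutable list
        List<String> mutable = new ArrayList<>();
        mutable.add("architect");
        System.out.println("add on ArrayList succeeded: " + mutable);
    }
}
```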

The Activator class that's mentioned in the article referenced above looks as follows:


package noopregistrybundle;

import static org.osgi.framework.Constants.SERVICE_PID;

import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;
import org.osgi.service.cm.ConfigurationException;
import org.osgi.service.cm.ManagedService;

import com.ibm.websphere.security.UserRegistry;

public class Activator extends NoopUserRegistry implements BundleActivator, ManagedService {

    private static final String CONFIG_PID = "noopUserRegistry";

    private ServiceRegistration<ManagedService> managedServiceRegistration;
    private ServiceRegistration<UserRegistry> userRegistryRegistration;

    @SuppressWarnings({ "rawtypes", "unchecked" })
    Hashtable getDefaults() {
        Hashtable defaults = new Hashtable();
        defaults.put(SERVICE_PID, CONFIG_PID);
        return defaults;
    }

    @SuppressWarnings("unchecked")
    public void start(BundleContext context) throws Exception {
        managedServiceRegistration = context.registerService(ManagedService.class, this, getDefaults());
        userRegistryRegistration = context.registerService(UserRegistry.class, this, getDefaults());
    }

    @Override
    public void updated(Dictionary<String, ?> properties) throws ConfigurationException {
    }

    public void stop(BundleContext context) throws Exception {
        if (managedServiceRegistration != null) {
            managedServiceRegistration.unregister();
            managedServiceRegistration = null;
        }
        if (userRegistryRegistration != null) {
            userRegistryRegistration.unregister();
            userRegistryRegistration = null;
        }
    }
}

Here we learned what that cryptic "Register the services" instruction from the article meant: it's the two calls to context.registerService here. Surely something that's easy to guess... or is it?

Finally a MANIFEST.MF file had to be created. The Eclipse tooling should normally help here, but in our case it worked badly. The "Analyze code and add dependencies to the MANIFEST.MF" command in the manifest editor (under the Dependencies tab) didn't work at all, and "org.osgi.service.cm" couldn't be chosen from the Imported Packages -> Add dialog. Since this import is actually used (and OSGi requires you to list each and every package imported by your code) I added it manually. The completed file looks as follows:


Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: NoopRegistryBundle
Bundle-SymbolicName: NoopRegistryBundle
Bundle-Version: 1.0.0.qualifier
Bundle-Activator: noopregistrybundle.Activator
Import-Package: com.ibm.websphere.security;version="1.1.0",
 javax.naming,
 javax.naming.ldap,
 org.osgi.service.cm,
 org.osgi.framework
Bundle-RequiredExecutionEnvironment: JavaSE-1.7
Export-Package: noopregistrybundle

Creating yet another project for the so-called feature, importing this OSGi bundle there and installing the built feature into Liberty was all pretty straightforward when following the above mentioned articles.

The final step consisted of adding the noop user registry to Liberty's server.xml, which looked as follows:


<?xml version="1.0" encoding="UTF-8"?>
<server description="new server">

    <featureManager>
        <feature>javaee-7.0</feature>
        <feature>jaspic-1.1</feature>
        <feature>localConnector-1.0</feature>
        <feature>usr:NoopRegistryFeature</feature>
    </featureManager>

    <httpEndpoint httpPort="9080" httpsPort="9443" id="defaultHttpEndpoint"/>

    <noopUserRegistry/>
</server>

With this in place, JASPIC indeed worked on Liberty, which is absolutely great! To do some more thorough testing of exactly how compatible Liberty is, we used the JASPIC tests that I contributed to the Java EE 7 samples project. These tests have been used by various other server vendors already and give a basic impression of what things work and do not work.

The tests had to be adjusted for Liberty because of its requirement to add an EAR wrapper that hosts the mandated group to role mapping.

After running the tests, the following failures were reported:

Test | Class | Comment
testPublicPageNotRememberLogin | org.javaee7.jaspic.basicauthentication.BasicAuthenticationPublicTest |
testPublicPageLoggedin | org.javaee7.jaspic.basicauthentication.BasicAuthenticationPublicTest |
testProtectedAccessIsStateless | org.javaee7.jaspic.basicauthentication.BasicAuthenticationStatelessTest |
testPublicServletWithLoginCallingEJB | org.javaee7.jaspic.ejbpropagation.ProtectedEJBPropagationTest |
testProtectedServletWithLoginCallingEJB | org.javaee7.jaspic.ejbpropagation.PublicEJBPropagationLogoutTest |
testProtectedServletWithLoginCallingEJB | org.javaee7.jaspic.ejbpropagation.PublicEJBPropagationTest |
testLogout | org.javaee7.jaspic.lifecycle.AuthModuleMethodInvocationTest | SAM method cleanSubject not called, but should have been
testJoinSessionIsOptional | org.javaee7.jaspic.registersession.RegisterSessionTest |
testRemembersSession | org.javaee7.jaspic.registersession.RegisterSessionTest |
testResponseWrapping | org.javaee7.jaspic.wrapping.WrappingTest | Response wrapped by SAM did not arrive in Servlet
testRequestWrapping | org.javaee7.jaspic.wrapping.WrappingTest | Request wrapped by SAM did not arrive in Servlet

Specifically the EJB tests, the "logout calls cleanSubject" and register session tests (both new JASPIC 1.1 features), and the request/response wrapper tests failed.

Two of those are new JASPIC 1.1 features, and likely IBM just hasn't implemented them yet for the beta. The request/response wrapper failures are a known problem from JASPIC 1.0 times. Although most servers implement wrapping now, curiously not a single JASPIC implementation did so back in the Java EE 6 time frame (even though it was a required feature per the spec).

First Java EE 7 production ready server?

At the time of writing, which is 694 days (1 year, ~10 months) after the Java EE 7 spec was finalized, there are 3 certified Java EE servers but none of them is deemed by their vendor as "production ready". With the implementation cycle of Java EE 6 we saw that IBM was the first vendor to release a production ready server after 559 days (1 year, 6 months), with Oracle following suit at 721 days (1 year, 11 months).
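For the curious, day counts like these are trivial to reproduce with java.time. The sketch below assumes June 12, 2015 minus a year... rather, it assumes the Java EE 7 final release date of June 12, 2013, and uses May 7, 2015 as an illustrative "time of writing" (both dates are assumptions for the example, not taken from the article):

```java
import java.time.LocalDate;
import java.time.Period;
import java.time.temporal.ChronoUnit;

public class SpecAge {
    public static void main(String[] args) {
        LocalDate javaEE7Final = LocalDate.of(2013, 6, 12); // assumed spec release date
        LocalDate timeOfWriting = LocalDate.of(2015, 5, 7); // illustrative date

        // Total days elapsed, and the same span expressed in years/months
        long days = ChronoUnit.DAYS.between(javaEE7Final, timeOfWriting);
        Period period = Period.between(javaEE7Final, timeOfWriting);

        System.out.println(days + " days = "
            + period.getYears() + " year(s), " + period.getMonths() + " month(s)");
        // prints "694 days = 1 year(s), 10 month(s)"
    }
}
```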

Oracle (perhaps unfortunately) doesn't do public beta releases and is a little tight-lipped about their upcoming Java EE 7 WebLogic 12.2.1 release, but it's not difficult to guess that they are working hard on it (I have it on good authority that they indeed are). Meanwhile IBM has just released a beta that starts to look very complete. Looking at the amount of time it took both vendors last time around, it might be a tight race between the two for releasing the first production ready Java EE 7 server. Although JBoss' WildFly 8.x is certified, a production ready and supported release is likely still at least a full year ahead when looking at the current state of the WildFly branch and if history is anything to go by (it took JBoss 923 days (2 years, 6 months) last time).

Conclusion

Despite a few bugs in the packaging of the full and web profile servers, IBM's latest beta shows incredible promise. The continued effort in making its application server yet again simpler to install for developers is nothing but laudable. IBM clearly meant it when they started the Liberty project a few years ago and said their mission was to optimize the developer experience.

There are a few small bugs and one somewhat larger violation in its JASPIC implementation, but we have to realize it's just a beta. In fact, IBM engineers are already looking at the JASPIC issues.

To summarize the good and not so good points:

Good

  • Runs on all operating systems (no special IBM JDK required)
  • Monthly betas of EE 7 server
  • Liberty to support Java EE 7 full profile
  • Possibly on its way to become the first production ready EE 7 server
  • Public download page without required registration
  • Very good file size for full profile (100MB)
  • Extremely easy "download - unzip - ./server start" experience

Not (yet) so good

  • Download page lists totally unnecessary step asking to "create a server" (update: now fixed by IBM)
  • Wrong file permissions in archive for usage on Linux; executable attribute missing on bin/server
  • Wrong configuration of server.xml; both web and full profile by default configured as JSP/Servlet only
  • "javaee-7.0" feature in server.xml doesn't imply JASPIC and JACC, while both are part of Java EE
  • JASPIC runtime tries to validate usernames/groups in internal identity store (violation of JASPIC spec)
  • Mandatory group to role mapping, even when this is not needed
  • Mandatory usage of EAR archive when group to role mapping has to be provided by the application
  • Not all JASPIC features implemented yet (but remember that we looked at a beta version)

Arjan Tijms

OmniFaces 2.1-RC1 has been released!

We are proud to announce that OmniFaces 2.1 release candidate 1 has been made available for testing.

OmniFaces 2.1 is the second release that will depend on JSF 2.2 and CDI 1.1 from Java EE 7. Since Java EE 7 availability remains somewhat scarce, we maintain a no-frills 1.x branch for JSF 2.0 (without CDI). For this branch we've simultaneously released a release candidate as well: 1.11-RC1.

A full list of what's new and changed is available here.

OmniFaces 2.1 RC1 can be tested by adding the following dependency to your pom.xml:


<dependency>
    <groupId>org.omnifaces</groupId>
    <artifactId>omnifaces</artifactId>
    <version>2.1-RC1</version>
</dependency>

Alternatively the jar files can be downloaded directly.

For the 1.x branch the coordinates are:


<dependency>
    <groupId>org.omnifaces</groupId>
    <artifactId>omnifaces</artifactId>
    <version>1.11-RC1</version>
</dependency>
This one too can be downloaded directly.

If no major bugs surface we hope to release OmniFaces 2.1 final soon.

Arjan Tijms

Diving into the unknown: the JEUS application server


There are quite a number of Java EE implementations. On the open source front JBoss, GlassFish and increasingly TomEE are very well known. On the commercial side WebLogic, WebSphere and increasingly Liberty are rather well known as well.

A couple of implementations are somewhat less known such as Resin and Geronimo/WASCE, but you'd expect many advanced Java EE developers to have heard of those. Yet another implementation is JOnAS, which arguably is much less known.

Then however, there are also the truly obscure (in the sense of not widely known) ones such as TMax JEUS, Fujitsu Interstage Application Server, Hitachi uCosminexus Application Server and NEC WebOTX Application Server. These seem to be virtually unknown among general Java EE users. The only way I found out that these servers even existed was because of them being mentioned on Oracle's Java EE compatibility page.

The most famous of these unknown servers is JEUS; every ~3 years it makes it into the news as the first server after the RI (GlassFish) to be certified for the new Java EE version. Considering that JEUS is a full Java EE implementation, this is no small feat. Now, application servers have it somewhat easier these days, as they rarely have to implement any of the major sub-specs like JSF, CDI, JPA, Bean Validation, etc. themselves, but can instead choose from one of the readily available open source implementations. Still, a lot of extra work goes into building a complete AS out of them, and for some sub-specs like e.g. JASPIC there isn't a really reusable (open source) implementation available.

As I'm somewhat in the business of testing JASPIC implementations, and JEUS presumably ships with its own independent JASPIC implementation, plus the fact that we want to test OmniFaces on as many servers as possible, I found it interesting to take a deeper look at it.

The first thing I found out is that JEUS really is obscure. Besides some news articles that JEUS is the first AS to be Java EE 7 certified, there's hardly anything to be found. On StackOverflow for example when you search for "JBoss" there are about 26000 results. Searching for "GlassFish" yields about 16500 results. Searching for "JEUS" however has... 3 results, of which 2 from the same user. Searching on Google gives some results, but nearly all of them are in Korean. The very few (old) posts in English are written by someone who judging by his name is also Korean.

Clearly JEUS is thus very much a Korean thing. It might well be that JEUS enjoys enormous popularity in Korea, but for some reason this popularity doesn't really extend beyond Korea. As mentioned, JEUS doesn't really exist on StackOverflow; I've never seen it mentioned on forums or in discussions on the various Java (EE) mailing lists (although occasionally TMaxSoft experts do post), and at OmniFaces we have never received any question or bug report from a JEUS user. The Eclipse marketplace does not carry any plug-in for JEUS and it's not listed among the servers in the "download additional server adapters" dialog either. Yet logic dictates that there must be some user base that justifies the colossal investment of developing and maintaining a full Java EE implementation.

Downloading JEUS proved to be a bit troublesome. There's a company site at tmaxsoft.com, but this is the typical corporate website that contains some nice projections and customer testimonials (indeed mainly Korean companies again) without any real technical information or any kind of download link. It's frustrating that despite the site being composed of static documents, it uses JavaScript to put new content on a page, making it impossible to link to anything. Even worse is that a number of links just don't seem to work. At the JEUS section (to which I can't link because of the js issue), there's a link to download the brochure, but after clicking on it simply nothing happens. I tried to register from the JEUS section and got directed to an actual URL (http://tmaxsoft.com/jsp/popup/info_reg.jsp), but after filling out the form nothing happened. When trying to register from the index page I got redirected to another URL (http://tmaxsoft.com/member/memberRegister.do?strMode=INSERT&join_gubun=web), which presented a rather similar but still different form. After I filled this one out I was rewarded with a friendly confirmation:


Register has been completed.
Please authentication with your email address which is registerd when you signing up.
But when I did just that, I got a somewhat less friendly message:

this ID has not been certificated
with your e-mail for register yet.
Besides the not entirely correct usage of English (a sin I not rarely commit myself, despite being partially of English descent), this was all not really encouraging.

After some Googling, I stumbled upon another TMaxSoft site, a proper technical one this time: technet.tmax.co.kr :) Unfortunately it's all in Korean. You can change to English, but that gives you a rather different site instead: technet.tmaxsoft.com. Luckily Google Translate does a fairly decent job. Reading the content it's weird that for an application server that always wins the certification race for a new Java EE version, the overall description of the server is about J2EE 1.3 features! :( At the bottom of the document it goes on to describe that JEUS 4.2 is compatible with J2EE 1.3, and not a word about JEUS 7 (Java EE 6) let alone JEUS 8 (Java EE 7).

Registering again via this site using the translated Korean version however finally did work! I could now access a download page, which offered me to download JEUS 5 (J2EE 1.4) and JEUS 6 (Java EE 5). After browsing some more through the site I found a JEUS 4.x Development Guide (which should be over a decade old), and clicking on the online manual brought up some JEUS 5 docs (nearly a decade old), and when downloading the manual I would get something for JEUS 6 (still some 6 years old). I checked the JEUS section of their corporate Chinese site, but it too has JEUS 6 listed as the latest version. From here I could download the brochure, but it doesn't say much. There are links to a US site, but the domain seems to be gone. Some clever fiddling with the URL of the online manual at the Korean site revealed that the JEUS 7 online manual is in fact there, namely at technet.tmax.co.kr/kr/edocs/jeus/70/index.html. Why this isn't linked as the default (is it even linked from somewhere at all?) is beyond me.

Just to be sure I also tried the English technet.tmaxsoft.com. Curiously, I had to register again for this one (it really is a separate site). Unfortunately, here too the latest JEUS version that can be downloaded from the Trial Versions section is JEUS 6 and the latest manual is again JEUS 6. Even stranger is that the US version of the TMaxSoft site apparently did mention JEUS 7 and had some clear download links for a jeus60_unix_generic_en.bin and jeus70_linux_x86.bin, according to an archived capture of the domain when it was still up.

This all didn't exactly boost my confidence in JEUS. How can they be the first with a new Java EE version all the time, yet have a website that seems stuck years in the past and inconsistently presents me with documentation about different (ancient) versions? It just didn't make a whole lot of sense. At any rate, the JEUS 7 manual that I found seemed fairly professional and thorough, but it's mainly about the classic Java EE stuff (Servlet, JSP, EJB (including CMP!), JCA, etc). I didn't see JSF and CDI being mentioned anywhere, but it does cover JPA.

After some more Googling I finally landed on a developer preview page at the corporate site again that did contain a direct download link for JEUS 8. (Again, I wonder why this isn't directly linked from the tech site, and why it isn't on the main download page.) Via this I got a 160.5MB file called jeus80_unix_generic_ko.bin. The first few hundred lines of this file are a shell script that starts a Java based installer that extracts itself. It's a clever trick to have a one-file kind of universal installer. Unfortunately it wouldn't run on OS X:


Preparing to install...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...

Launching installer...

Preparing CONSOLE Mode Installation...

===============================================================================
JEUS8.0 (created with InstallAnywhere by Macrovision)
-------------------------------------------------------------------------------


The installer cannot run on your configuration. It will now quit.
Luckily I have an Ubuntu machine as well, on which the install did work:

Introduction
------------

InstallAnywhere will guide you through the installation of JEUS8.0.

It is strongly recommended that you quit all programs before continuing with
this installation.
I was of course not going to quit all programs (does anyone seriously do this???) and continued. After accepting the license I could choose a platform:

Choose Platform
---------------

Choose the operating system and architecture :
1)HP-UX PA-RISC
2)HP-UX Itanium
3)Solaris UltraSPARC
4)Solaris x86
5)Solaris x64
6)AIX 5.x, 6.x, 7.x PowerPC
7)Linux Itanium
8)Linux x86
9)Linux x64
Quit) Quit Installer

Choose Current System (DEFAULT: 9):
And after asking for an installation folder I could choose a type (?):

Installation type
-----------------

Please choose the Install Set to be installed by this installer.

->1- Domain Admin Server
2- Managed Server

ENTER THE NUMBER FOR THE INSTALL SET, OR PRESS <ENTER> TO ACCEPT THE DEFAULT
As I have no idea what the difference is, I went for the default. After that it asked me for an installation type (production/dev), for a JDK path (interestingly it only accepted a JDK 7 version), an administrator password (it wanted a minimal length and some numbers; I chose "admin007") and a "domain" name (I went for the default "jeus_domain"). This installed 207MB in a folder, with the following layout:

bin docs lib nodemanager setup ThirdPartyLicenses.txt
derby domains license readme.txt templates UninstallerData
Interestingly setup/lib_native had the following content:

aix5l_32 hp-ux_64 linux_ia64 linux_x86_32 sunos_32 sunos_x86
aix5l_64 hp-ux_ia64_32 linux_ppc_32 linux_x86_64 sunos_64 win32
hp-ux_32 hp-ux_ia64_64 linux_ppc_64 mac sunos_x64 win64
Note the presence of the mac folder! While setup/bin only contains "binaries" for win and unix, the actual binaries are just shell scripts. E.g. /bin/startDomainAdminServer contains this as its most important part:

#!/bin/sh

[...]

# execute jeus with echo
set -x
"${JAVA_HOME}/bin/java" $VM_OPTION $SESSION_MEM \
-Xbootclasspath/p:"${JEUS_HOME}/lib/system/extension.jar" \
-classpath "${LAUNCHER_CLASSPATH}" \
-Dsun.rmi.dgc.client.gcInterval=3600000 \
-Dsun.rmi.dgc.server.gcInterval=3600000 \
-Djava.library.path="${JEUS_LIBPATH}" \
-Djava.endorsed.dirs="${JEUS_HOME}/lib/endorsed" \
-Djava.naming.factory.initial=jeus.jndi.JNSContextFactory \
-Djava.naming.factory.url.pkgs=jeus.jndi.jns.url \
-Djava.net.preferIPv4Stack=true \
-Djava.util.logging.manager=jeus.util.logging.JeusLogManager \
-Djava.util.logging.config.file="${JEUS_HOME}/bin/logging.properties" \
-Djeus.home="${JEUS_HOME}" \
-Djeus.jvm.version=${VM_TYPE} \
-Djeus.tm.checkReg=true \
-Djeus.properties.replicate=jeus,sun.rmi,java.util,java.net \
${JAVA_ARGS} \
This all looks like it could possibly run on OS X anyway but I didn't try this for now.

Most of the libraries that make up the server are in lib/system:


activation.jar http.jar jeus-omgapi.jar resolver.jar
appcompiler.jar jasper.jar jeus-servlet.jar saaj-impl.jar
bootstrap.jar javaee.jar jeus-store.jar sasl.jar
classmate-0.8.0.jar javax.json.jar jeus-tm.jar serializer.jar
com.ibm.jbatch-ri-spi.jar jaxb1-impl.jar jeus-toplink-essentials.jar shoal.jar
com.ibm.jbatch-runtime-all.jar jaxb2-basics-runtime.jar jeusutil.jar sigar-1.6.4.jar
commons-cli.jar jaxb-impl.jar jeus-websocket.jar sjsxp.jar
commons.jar jaxb-xjc.jar jeus-ws.jar snmp_agent.jar
corba-asm.jar jaxrpc-impl.jar jline.jar stax-ex.jar
corba-codegen.jar jaxrpc-spi.jar jms.jar streambuffer.jar
corba-csiv2-idl.jar jaxws-rt.jar jmx-description.jar tmaxjce_jdk15x.jar
corba-internal-api.jar jaxws-tools.jar jmxremote.jar TMAX-JEUS7.0-MIB.mib
corba-newtimer.jar jboss-logging-3.1.1.GA.jar jmxtools.jar toplink-essentials-agent.jar
corba-omgapi.jar jeus-ant-util.jar jsse14_repack.jar toplink-essentials.jar
corba-orbgeneric.jar jeusapi.jar jxerces.jar trilead-ssh2.jar
corba-orb.jar jeusasm.jar libCUtility.so weld-api.jar
deploy.jar jeus-concurrent.jar libJeusNet.so weld-core.jar
derby.jar jeus-config.jar libjtiagent.so weld-spi.jar
derbynet.jar jeus-console2.jar libNSStream.so woodstox.jar
ecj.jar jeus-console-executor.jar libRunner.so wsit.jar
eclipselink.jar jeus-eclipselink.jar libsigar-amd64-linux.so xalan.jar
el-impl.jar jeus-gms.jar libsigar-universal64-macosx.dylib xercesImpl.jar
extension.jar jeus-hotswap.jar libWebtoBAdmin.so xml-apis.jar
FastInfoset.jar jeus.jar mail.jar xml_resource.jar
hibernate-validator-5.0.1.Final.jar jeusjaxb.jar message-bridge.jar xmlsec.jar
hibernate-validator-annotation-processor-5.0.1.Final.jar jeus-launcher.jar mimepull.jar xsltc.jar
hibernate-validator-cdi-5.0.1.Final.jar jeus-network.jar Module-Version-Info.txt
We see the usual suspects here, like Hibernate Validator as the implementation for Bean Validation, the IBM JBatch implementation (it's Java EE 7, remember ;)), EclipseLink for JPA, Weld for CDI etc. Here we again see a Mac OS X artifact: libsigar-universal64-macosx.dylib. It might be that TMaxSoft is working on OS X support, but just hasn't finished it (this is a developer preview after all).

Some additional "system" dependencies are in lib/shared:


jax-rs-ri-2.0 jsf-injection-provider.jar jsf_ri_1.2 jsf_ri_2.2 jsf-weld-integration.jar jstl_1.2 libraries.xml
The directories contain jars like jax-rs-ri-2.0.jar (Jersey), jsf-ri.jar (Mojarra) and jstl-impl.jar. (I wonder though why these aren't also in the lib/system directory with the other Java EE dependencies.) While JEUS is definitely not GlassFish it's clear that it uses the exact same set of dependencies.

The readme.txt is luckily in English and explains how to start the server. I used the following command from the JEUS installation directory:


./bin/startDomainAdminServer -domain jeus_domain -u administrator -p admin007
And lo and behold, it started at the first attempt:

***************************************************************
- JEUS Home : /home/arjan/jeus8
- Java Vendor : Sun
- Added Java Option :
***************************************************************
+ /opt/jdk/bin/java -server -Xmx512m -Xbootclasspath/p:/home/arjan/jeus8/lib/system/extension.jar -classpath /home/arjan/jeus8/lib/system/jeus-launcher.jar:/home/arjan/jeus8/lib/system/xalan.jar:/home/arjan/jeus8/lib/system/xsltc.jar:/home/arjan/jeus8/lib/system/jaxb-impl.jar:/home/arjan/jeus8/lib/system/woodstox.jar:/home/arjan/jeus8/lib/system/xml_resource.jar:/home/arjan/jeus8/lib/system/commons-cli.jar:/home/arjan/jeus8/lib/system/jaxb2-basics-runtime.jar:/home/arjan/jeus8/lib/system/javaee.jar:/home/arjan/jeus8/lib/system/tmaxjce_jdk15x.jar -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djava.library.path=/home/arjan/jeus8/lib/system -Djava.endorsed.dirs=/home/arjan/jeus8/lib/endorsed -Djava.naming.factory.initial=jeus.jndi.JNSContextFactory -Djava.naming.factory.url.pkgs=jeus.jndi.jns.url -Djava.net.preferIPv4Stack=true -Djava.util.logging.manager=jeus.util.logging.JeusLogManager -Djava.util.logging.config.file=/home/arjan/jeus8/bin/logging.properties -Djeus.home=/home/arjan/jeus8 -Djeus.jvm.version=hotspot -Djeus.tm.checkReg=true -Djeus.properties.replicate=jeus,sun.rmi,java.util,java.net jeus.launcher.Launcher -domain jeus_domain -u administrator -p admin007

================ JEUS LICENSE INFORMATION ================
=== VERSION : JEUS 8.0 (Fix#0) (8.0.0.0-b1)
=== EDITION: Enterprise (Trial License)
=== NOTICE: This license restricts the number of allowed clients.
=== Max. Number of Clients: 5
==========================================================
[2013.09.24 22:48:11][2] [launcher-1] [Launcher-0012] Starting the server [adminServer] with the command
/opt/jdk1.7.0_40/jre/bin/java -DadminServer -Xmx1024m -XX:MaxPermSize=128m -server -Xbootclasspath/p:/home/arjan/jeus8/lib/system/extension.jar -classpath /home/arjan/jeus8/lib/system/bootstrap.jar -Djava.security.policy=/home/arjan/jeus8/domains/jeus_domain/config/security/policy -Djava.library.path=/home/arjan/jeus8/lib/system -Djava.endorsed.dirs=/home/arjan/jeus8/lib/endorsed -Djeus.properties.replicate=jeus,sun.rmi,java.util,java.net -Djeus.jvm.version=hotspot -Djava.util.logging.config.file=/home/arjan/jeus8/bin/logging.properties -Dsun.rmi.dgc.server.gcInterval=3600000 -Djava.util.logging.manager=jeus.util.logging.JeusLogManager -Djeus.home=/home/arjan/jeus8 -Djava.net.preferIPv4Stack=true -Djeus.tm.checkReg=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Djeus.domain.name=jeus_domain -Djava.naming.factory.initial=jeus.jndi.JNSContextFactory -Djava.naming.factory.url.pkgs=jeus.jndi.jns.url -Djeus.server.protectmode=false -XX:+UnlockDiagnosticVMOptions -XX:+LogVMOutput -XX:LogFile=/home/arjan/jeus8/domains/jeus_domain/servers/adminServer/logs/jvm.log jeus.server.admin.DomainAdminServerBootstrapper -domain jeus_domain -u administrator -server adminServer .
[2013.09.24 22:48:11][2] [launcher-1] [Launcher-0014] The server[adminServer] is being started ...
[2013.09.24 22:48:17][2] [adminServer-1] [SERVER-0248] The JEUS server is STARTING.
[2013.09.24 22:48:17][0] [adminServer-1] [SERVER-0000] Version information - JEUS 8.0 (Fix#0) (8.0.0.0-b1).
[2013.09.24 22:48:17][0] [adminServer-1] [SERVER-0001] java.specification.version=[1.7], java.runtime.version=[1.7.0_40-b43], vendor=[Oracle Corporation]
[2013.09.24 22:48:17][2] [adminServer-1] [SERVER-0002] Domain=[jeus_domain], Server=[adminServer], baseport=[9736], pid=[7151]
[2013.09.24 22:48:17][2] [adminServer-1] [SERVER-0004] The current system time zone : sun.util.calendar.ZoneInfo[id="Europe/Amsterdam",offset=3600000,dstSavings=3600000,useDaylight=true,transitions=180,lastRule=java.util.SimpleTimeZone[id=Europe/Amsterdam,offset=3600000,dstSavings=3600000,useDaylight=true,startYear=0,startMode=2,startMonth=2,startDay=-1,startDayOfWeek=1,startTime=3600000,startTimeMode=2,endMode=2,endMonth=9,endDay=-1,endDayOfWeek=1,endTime=3600000,endTimeMode=2]]
[2013.09.24 22:48:17][2] [adminServer-1] [SERVER-0571] All JEUS system properties have been confirmed.
[2013.09.24 22:48:17][2] [adminServer-1] [SERVER-0568] [Network] service ip:port = [0.0.0.0 : 9736], representation ip = [0.0.0.0], hostname = [java.local], inetAddress = [java.local/0.0.0.0]
[2013.09.24 22:48:17][2] [adminServer-1] [SERVER-0561] The default RMI export port = 9743.
[2013.09.24 22:48:17][2] [adminServer-1] [UNIFY-0102] There's no selectors count setting for Non-blocking listener 'BASE', applying available processors count 1 instead of it.
[2013.09.24 22:48:17][2] [adminServer-1] [NET-0002] Beginning to listen to NonBlockingChannelAcceptor: /0.0.0.0:9736.
[2013.09.24 22:48:17][2] [adminServer-1] [UNIFY-0102] There's no selectors count setting for Non-blocking listener 'http-server', applying available processors count 1 instead of it.
[2013.09.24 22:48:17][2] [adminServer-1] [NET-0002] Beginning to listen to NonBlockingChannelAcceptor: /0.0.0.0:8808.
[2013.09.24 22:48:17][2] [adminServer-1] [UNIFY-0102] There's no selectors count setting for Non-blocking listener 'jms-internal', applying available processors count 1 instead of it.
[2013.09.24 22:48:17][2] [adminServer-1] [NET-0002] Beginning to listen to NonBlockingChannelAcceptor: /0.0.0.0:9941.
[2013.09.24 22:48:17][2] [adminServer-1] [GMS-0100] The GMS instance for the cluster group jeus_domain_1111111111 has been successfully initialized.
[2013.09.24 22:48:18][2] [adminServer-1] [JNSS-0009] The JNDI naming server has been successfully initialized.
[2013.09.24 22:48:18][2] [adminServer-1] [JNDI.Local-0001] Starting JNDI Local Client...
[2013.09.24 22:48:18][2] [adminServer-1] [JMXR-0138] JMXConnector service URL : service:jmx:jmxmp://0.0.0.0:9736/JeusMBeanServer
[2013.09.24 22:48:18][2] [adminServer-1] [JMXR-0138] JMXConnector service URL : service:jmx:jmxmp://0.0.0.0:9736/JEUSMP_adminServer
[2013.09.24 22:48:18][2] [adminServer-1] [JMX-0051] JMXConnector started with the JNDI name [mgmt/rmbs/adminServer].
[2013.09.24 22:48:18][2] [adminServer-1] [GMS-1000] The member adminServer has joined the cluster group jeus_domain_1111111111.
[2013.09.24 22:48:21][2] [adminServer-1] [CORBA-1002] ORB(SE-ORB) started.
[2013.09.24 22:48:22][2] [adminServer-1] [JMS-7374] The persistence store manager for the JMS broker 'adminServer' has been started.
[2013.09.24 22:48:22][2] [adminServer-1] [JMS-6822] The JMS engine with the broker named adminServer has started.
[2013.09.24 22:48:22][2] [adminServer-1] [WEB-1003] Socket send buffer size of this operating system = [8192]
[2013.09.24 22:48:24][2] [adminServer-1] [WEB-1030] The web engine has started successfully.
[2013.09.24 22:48:24][2] [adminServer-1] [Deploy-0095] Distributing the application[webadmin].
[2013.09.24 22:48:24][2] [adminServer-1] [WEB-3857]
- session descriptor -
- timeout : 30(min)
- shared : false
- reload-persistent : false
- session tracking mode -
- Cookie : true
- URL Rewrite: false
- SSL : false

- session cookie config -
- cookie-name : JSESSIONID
- version : 0
- domain : null
- path : null
- max-age : -1 (browser-lifetime)
- secure : false
- http-only : true

[2013.09.24 22:48:24][2] [adminServer-1] [WEB-1032] Distributed the web context [webadmin] information
- Virtual host : DEFAULT_HOST
- Context path : /webadmin
- Document base : /home/arjan/jeus8/lib/systemapps/fakeWebadmin_war

[2013.09.24 22:48:25][2] [adminServer-1] [WEB-3480] The web module [webadmin] has been successfully distributed.
[2013.09.24 22:48:25][2] [adminServer-1] [Deploy-0096] Successfully distributed the appliacion[webadmin].
[2013.09.24 22:48:25][2] [adminServer-1] [WEB-3484] ServletContext[name=webadmin, path=/webadmin, ctime=Tue Sep 24 22:48:24 CEST 2013] started successfully.
[2013.09.24 22:48:25][2] [adminServer-1] [SERVER-0248] The JEUS server is STANDBY.
[2013.09.24 22:48:25][2] [adminServer-1] [SERVER-0248] The JEUS server is STARTING.
[2013.09.24 22:48:25][2] [adminServer-1] [WEB-3413] The web engine is ready to receive requests.
[2013.09.24 22:48:25][2] [adminServer-1] [SERVER-0602] Successfully sent the JoinedAndReady event. JEUS GMS=[Group=jeus_domain_1111111111,ServerToken=adminServer]
[2013.09.24 22:48:25][2] [adminServer-1] [UNIFY-0100] Listener information
BASE (plain, 0.0.0.0 : 9736) - VIRTUAL - SecurityServer
- FileTransfer
- BootTimeFileTransfer
- ClassFTP
- JNDI
- JMXConnectionServer/JeusMBeanServer
- JMXConnectionServer/JEUSMP_adminServer
- GMS-NetworkManager
- TransactionManager
- HTTP Listener
http-server (plain, 0.0.0.0 : 8808) - VIRTUAL
- HTTP Listener
jms-internal (plain, 0.0.0.0 : 9941) - VIRTUAL - JMSServiceChannel-internal

[2013.09.24 22:48:25][0] [adminServer-1] [SERVER-0242] Successfully started the server.
[2013.09.24 22:48:25][2] [adminServer-1] [SERVER-0248] The JEUS server is RUNNING.
[2013.09.24 22:48:25][2] [adminServer-1] [SERVER-0401] The elapsed time to start: 13779ms.
[2013.09.24 22:48:25][2] [launcher-10] [Launcher-0034] The server[adminServer] initialization completed successfully[pid : 7151].
[2013.09.24 22:48:25][0] [launcher-1] [Launcher-0040] Successfully started the server. The server state is now RUNNING.
The server listens on port 8808 by default. While 8080 seems to be sort of the universal default, there are certainly more servers that deviate from this (e.g. WebLogic defaults to 7001). There doesn't seem to be a web app configured on the root, as requesting localhost:8808 yields the following result:

You can enable a web console by adding <enable-webadmin>true</enable-webadmin> to [jeus install dir]/domains/jeus_domain/config/domain.xml as a child of the domain node, e.g. :




<domain>
    ...
    <enable-webadmin>true</enable-webadmin>
    ...
</domain>
After this I should have been able to request http://localhost:9736/webadmin, but unfortunately this didn't work:

After some investigation it looks like this web app should have been in [jeus install dir]/jeus8/lib/systemapps/webadmin, but in my case this directory only contained a WEB-INF folder with empty lib and classes folders. When I created an a.jsp file with just the content "hello" in the [jeus install dir]/jeus8/lib/systemapps/webadmin folder and requested the URL again, I got a result :)

Most likely TMaxSoft is doing an overhaul of their web console and it's not added to this developer preview yet. I assume that on the current production version JEUS 7 this would just have worked.

Finally with the command ./stopServer -host localhost -u administrator -p admin007 JEUS can be stopped. Luckily, it indeed stopped correctly.

Conclusion

For a seasoned Java EE developer and library writer it's rather intriguing that there's a complete Java EE server out there that's virtually unknown outside Korea, or at the very least seems to be completely unknown in the West. JEUS may be a very capable server going by the impressive list of testimonials, but there are very high barriers for the casual Java EE developer to come into contact with JEUS.

The most direct issue is that people outside Korea just don't know about JEUS. It's as simple as that. News postings about JEUS being certified are probably the only thing people ever hear about it. JEUS should really have an Eclipse plug-in in the Eclipse marketplace or in the list of "download additional server adapters", if only to raise awareness that it exists. A few (English) blog posts now and then wouldn't hurt either (a very good example is how David Blevins is raising awareness of TomEE's existence).

For the curious developer who is interested in discovering what JEUS is, the current TmaxSoft websites are a VERY high barrier as well. The USA domain, which is just gone, the mandatory login to see documentation or access the download page, and above all the absence of links to the most recent version of JEUS (JEUS 7) are HUGE barriers that will probably shoo away even the more enthusiastic developers.

But JEUS -always- winning the Java EE certification race and the fact that they have been working on their server since at least 2001 (the year JEUS 3 was released) must mean they're doing something right. In a next article I hope to actually run some code on JEUS. My main goal is to run the OmniFaces showcase application (which intensively tests JSF, CDI and some Bean Validation), my JASPIC test suite (which obviously tests JASPIC) and our Java EE kickstart application (which tests JSF, EJB, JPA, and the default datasource among others).

Arjan Tijms




NEC's WebOTX - a commercial GlassFish derivative

In a previous article we took a look at an obscure Java EE application server that's only known in Korea and virtually unknown everywhere else. Korea is not the only country that has a national application server though. Japan is the other country. In fact, it has not one, but three obscure application servers.

These Japanese servers, the so-called obscure 3, are so unknown outside of Japan that major news events like a Java EE 7 certification simply do not make it out here.

Those servers are the following:

  1. NEC WebOTX
  2. Hitachi Application Server
  3. Fujitsu Interstage AS

In this article we're going to take a quick look at the first one of this list: NEC WebOTX.

While NEC does have an international English page where a trial can be downloaded, it only contains a very old version of WebOTX: 8.4, which implements Java EE 5. This file is called otx84_win32bitE.exe and is about 92MB in size.

As with pretty much all of the Asian application servers, the native language pages contain much more and much newer versions. In this case the Japanese page contains a recent version of WebOTX: 9.2, which implements Java EE 6. This file is called OTXEXP92.exe and is about 111MB in size. A bit of research revealed that an OTXEXP91.exe also once existed, but no other versions were found.

The file is a Windows installer that presents several dialogs in Japanese. If you can't read Japanese it's a bit difficult to follow. Luckily, there are English instructions for the older WebOTX 8.4 available that still apply to the WebOTX 9.2 installer process as well. Installation takes a while, several scripts seem to start running, and it even wants to reboot the computer (a far cry from download & unzip, start server), but after a while WebOTX was installed in e:\webotx.

Jar and file comparison

One of the first things I often do after installing a new server is browse a little through the folders of the installation. This gives me some general idea about how the server is structured, and quite often will reveal what implementation components a particular server is using.

Surprisingly, the folder structure somewhat resembled that of GlassFish, but with some extra directories. E.g.

GlassFish 3.1.2.2 main dir vs. WebOTX 9.2 main dir

 

Looking at the modules directory made it clear that WebOTX is in fact strongly based on GlassFish:

GlassFish 3.1.2.2 modules dir vs. WebOTX 9.2 modules dir

 

The jar files are largely identical in the part shown, although WebOTX does have an extra jar here and there. It's a somewhat different story when it comes to the glassfish-* and gf-* jars. None of these are present in WebOTX, although for many of them similar ones are present, just prefixed with webotx- as shown below:

glassfish- prefixed jars vs. webotx- prefixed jars

 

When actually looking inside one of the jars with a matching name except for the prefix, e.g. glassfish.jar vs webotx.jar, it becomes clear that at least the file names are largely the same again, except for the package being renamed. See below:

glassfish.jar vs. webotx.jar

 

Curiously a few jars with similar names have internally renamed package names. This is for instance the case for the well known Jersey (JAX-RS) jar, but for some reason not for Mojarra (JSF). See below:

glassfish jersey-core.jar vs. webotx jersey-core.jar
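Such a filename-level comparison, with the renamed package prefix normalised away, can be scripted as well. The following is just a sketch using java.util.zip; the jar paths and the package prefixes passed in are assumptions, not something taken from the actual jars:

```java
import java.io.File;
import java.io.IOException;
import java.util.Enumeration;
import java.util.Set;
import java.util.TreeSet;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class JarCompare {

    // Lists the entry names in a jar/zip, rewriting an assumed renamed package
    // prefix (e.g. the WebOTX one) back to the original (e.g. the GlassFish one),
    // so that two jars that differ only in that rename compare as equal.
    static Set<String> entries(File jar, String from, String to) throws IOException {
        Set<String> names = new TreeSet<>();
        try (ZipFile zip = new ZipFile(jar)) {
            for (Enumeration<? extends ZipEntry> e = zip.entries(); e.hasMoreElements();) {
                String name = e.nextElement().getName();
                names.add(name.startsWith(from) ? to + name.substring(from.length()) : name);
            }
        }
        return names;
    }

    public static void main(String[] args) throws IOException {
        // args: <left jar> <right jar> <prefix to rewrite> <replacement prefix>
        Set<String> left = entries(new File(args[0]), args[2], args[3]);
        Set<String> right = entries(new File(args[1]), "", "");
        left.removeAll(right); // what remains is unique to the first jar
        left.forEach(System.out::println);
    }
}
```

With two jars that are identical apart from the rename, the diff comes out empty.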

 

Besides the differences shown above, name changes occur at a number of other places. For instance, well known GlassFish environment variables have been renamed to corresponding WebOTX ones, and pom.xml as well as MANIFEST.MF files inside jar files have some renamed elements as well. For example, the embedded pom.xml for the mojarra jar contains this:


<project>
<modelVersion>4.0.0</modelVersion>
<!-- upds start 20121122 org.glassfish to com.nec.webotx.as -->
<groupId>com.nec.webotx.as</groupId>
<!-- upds end 20121122 org.glassfish to com.nec.webotx.as -->
<artifactId>javax.faces</artifactId>
<version>9.2.1</version>
<packaging>jar</packaging>
<name>
Oracle's implementation of the JSF 2.1 specification.
</name>
<description>
This is the master POM file for Oracle's Implementation of the JSF 2.1 Specification.
</description>
With the MANIFEST.MF containing this:

Implementation-Title: Mojarra
Implementation-Version: 9.2.1
Tool: Bnd-0.0.249
DSTAMP: 20131217
TODAY: December 17 2013
Bundle-Name: Mojarra JSF Implementation 9.2.1 (20131217-1350) https://
swf0200036.swf.nec.co.jp/app/svn/WebOTX-SWFactory/dev/mojarra/branche
s/mojarra2.1.26@96979
TSTAMP: 1350
DocName: Mojarra Implementation Javadoc
Implementation-Vendor: Oracle America, Inc.

 

Trying out the server

Rather peculiar, to say the least, for a workstation is that WebOTX is automatically started when the computer is rebooted. Unlike most other Java EE servers, the default HTTP port after installation is 80. There's no default application installed, and requesting http://localhost results in the following screen:

The admin interface is present on port 5858. For some reason the initial login screen asks for very specific browser versions though:

After logging in with username "admin", password "adminadmin", we're presented with a colorful admin console:

As is not rarely the case with admin consoles for Java EE servers there's a lot of ancient J2EE stuff there. Options for generating stubs for EJB CMP beans are happily being shown to the user. In a way this is not so strange. Modern Java EE doesn't mandate a whole lot of things to be configured via a console, thanks to the ongoing standardization and simplification efforts, so what's left is not rarely old J2EE stuff.

I tried to upload a .war file of the OmniFaces showcase, but unfortunately this part of the admin console was still really stuck in ancient J2EE times as it politely told me it only accepted .ear files:

After zipping the .war file into a second zip file and then renaming it to .ear (a rather senseless exercise), the result was accepted and after requesting http://localhost again the OmniFaces showcase home screen was displayed:
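That zip-and-rename step can also be done with a few lines of java.util.zip. This is a sketch of the same exercise (the file names are just examples, and like the manual version it adds no application.xml):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class WarToEar {

    // Wraps a single .war file into an .ear archive, mirroring the
    // "rather senseless" zip-and-rename exercise described above.
    static void wrap(Path war, Path ear) throws IOException {
        try (ZipOutputStream out = new ZipOutputStream(Files.newOutputStream(ear))) {
            out.putNextEntry(new ZipEntry(war.getFileName().toString()));
            Files.copy(war, out); // the .war becomes the sole entry of the .ear
            out.closeEntry();
        }
    }

    public static void main(String[] args) throws IOException {
        wrap(Paths.get("showcase.war"), Paths.get("showcase.ear"));
    }
}
```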

As we can see, it's powered by Mojarra 9.2.1. Now we all know that Mojarra moves at an amazing pace, but last time I looked it was still at 2.3 m2. Either NEC travelled some time into the future and got its Mojarra version there, or the renaming in MANIFEST.MF as shown above was done a little too eagerly ;)

At any rate, all of the functionality in the showcase seemed to work, but as it was tested on GlassFish 3 before this wasn't really surprising.

Conclusion

We took a short look at NEC's WebOTX and discovered it's a GlassFish derivative. This is perhaps a rather interesting thing. Since Oracle stopped commercial support for GlassFish a while ago, many wondered if the code base wouldn't wither at least a little when potentially fewer people would use it in production. However, if a large and well known company such as NEC offers a commercial product based on GlassFish, then this means that next to Payara there remains interest in the GlassFish code beyond being "merely" an example for other vendors.

While we mainly looked at the similarities with respect to the jar files in the installed product we didn't look at what value NEC exactly added to GlassFish. From a very quick glance it seems that at least some of it is related to management and monitoring, but to be really sure a more in depth study would be needed.

It remains remarkable though that while the company NEC is well known outside Japan for many products, it has its own certified Java EE server that's virtually unheard of outside of Japan.

Arjan Tijms

OmniFaces 2.1 released!

We're proud to announce that today we've released OmniFaces 2.1. OmniFaces is a utility library for JSF that provides a lot of utilities to make working with JSF much easier.

OmniFaces 2.1 is the second release that depends on JSF 2.2 and CDI 1.1 from Java EE 7. Since Java EE 7 availability remains somewhat scarce, we maintain a no-frills 1.x branch for JSF 2.0 (without CDI) as well.

The easiest way to use OmniFaces 2.1 is via Maven by adding the following to pom.xml:


<dependency>
<groupId>org.omnifaces</groupId>
<artifactId>omnifaces</artifactId>
<version>2.1</version>
</dependency>

Alternatively the jar files can be downloaded directly.

A complete overview of all that's new can be found on the what's new page, and some more details can be found in BalusC's blogpost about this release.

As usual the release contains an assortment of new features, some changes and a bunch of fixes. One particular fix that took some time to get right is getting a CDI availability check to work correctly with Tomcat + OpenWebBeans (OWB). After a long discussion we finally got this to work, with special thanks to Mark Struberg and Ludovic Pénet.

One point worth noting is that since we joined the JSF EG, our time has to be shared between that and working on OmniFaces. In addition some code that's now in OmniFaces might move to JSF core (such as already happened for the IterableDataModel in order to support the Iterable interface in UIData and UIRepeat). For the OmniFaces 2.x line this will have no effect though, but for OmniFaces 3.x (which will focus on JSF 2.3) it may.

We will start planning soon for OmniFaces 2.2. Feature requests are always welcome ;)

Arjan Tijms


JSF 2.3 new feature: registrable DataModels

Iterating components in JSF such as h:dataTable and ui:repeat have the DataModel class as their native input type. Other data types such as List are supported, but these are handled by built-in wrappers; e.g. an application provided List is wrapped into a ListDataModel.

While JSF has steadily expanded the number of built-in wrappers and JSF 2.3 has provided new ones for Map and Iterable, a long-standing request is for users (or libraries) to be able to register their own wrappers.

JSF 2.3 will now (finally) let users do this. The way this is done is by creating a wrapper DataModel for a specific type, just as one may have done years ago when returning data from a backing bean, and then annotating it with the new @FacesDataModel annotation. A “forClass” attribute has to be specified on this annotation that designates the type this wrapper is able to handle.

The following gives an abbreviated example of this:


@FacesDataModel(forClass = MyCollection.class)
public class MyCollectionModel<E> extends DataModel<E> {

@Override
public E getRowData() {
// access MyCollection here
}

@Override
public void setWrappedData(Object myCollection) {
// likely just store myCollection
}

// Other methods omitted for brevity
}

Note that there are two types involved here. The “forClass” attribute is the collection or container type that the DataModel wraps, while the generic parameter E concerns the data this collection contains. E.g. suppose we have a MyCollection<User>, then “forClass” would correspond to MyCollection, and E would correspond to User. If set/getWrappedData were generic the “forClass” attribute might not have been needed, as generic parameters can be read from class definitions, but alas.

With a class definition as given above present, a backing bean can now return a MyCollection as in the following example:


@Named
public class MyBacking {
public MyCollection<User> getUsers() {
// return myCollection
}
}
h:dataTable will be able to work with this directly, as shown in the example below:

<h:dataTable value="#{myBacking.users}" var="user">
<h:column>#{user.name}</h:column>
</h:dataTable>

There are a few things noteworthy here.

Traditionally JSF artefacts like e.g. ViewHandlers are registered using a JSF specific mechanism, kept internally in a JSF data structure and are looked up using a JSF factory. @FacesDataModel however has none of this and instead fully delegates to CDI for all these concerns. The registration is done automatically by CDI by the simple fact that @FacesDataModel is a CDI qualifier, and lookup happens via the CDI BeanManager (although with a small catch, as explained below).

This is a new direction that JSF is going in. It has already effectively deprecated its own managed bean facility in favour of CDI named beans, but is now also favouring CDI for registration and lookup of the pluggable artefacts it supports. New artefacts will henceforth very likely exclusively use CDI for this, while some existing ones are retrofitted (like e.g. Converters and Validators). Because of the large number of artefacts involved and the subtle changes in behaviour that can occur, not all existing JSF artefacts will however change overnight to registration/lookup via CDI.

Another thing to note concerns the small catch with the CDI lookup that was mentioned above. The thing is that with a direct lookup using the BeanManager we’d get a very specific wrapper type. E.g. suppose there was no built-in wrapper for List and one was provided via @FacesDataModel. Now also suppose the actual data type encountered at runtime is an ArrayList. Clearly, a direct lookup for ArrayList will do us no good, as there’s no wrapper available for exactly this type.

This problem is handled via a CDI extension that observes all definitions of @FacesDataModel that are found by CDI during startup and stores the types they handle in a collection. This is afterwards sorted such that for any 2 classes X and Y from this collection, if an object of X is an instanceof an object of Y, X appears in the collection before Y. The collection's sorting is otherwise arbitrary.

With this collection available, the logic behind @FacesDataModel scans this collection of types from beginning to end to find the first match which is assignable from the type that we encountered at runtime. Although it’s an implementation detail, the following shows an example of how the RI implements this:


getDataModelClassesMap(cdi).entrySet().stream()
.filter(e -> e.getKey().isAssignableFrom(forClass))
.findFirst()
.ifPresent(
e -> dataModel.add(
cdi.select(
e.getValue(),
new FacesDataModelAnnotationLiteral(e.getKey())
).get())
);

In effect this means we either look up the wrapper for our exact runtime type, or the one for the closest super type. I.e. following the example above, the wrapper for List is found and used when the runtime type is ArrayList.
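The sorting and closest-supertype matching described above can be illustrated with a small standalone sketch. This is not the actual Mojarra code (which works against the CDI BeanManager), just plain collections of Class objects to show the ordering and lookup logic:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Optional;

public class DataModelLookup {

    // Sorts classes such that for any X and Y, if X is assignable to Y
    // (X is the more specific type), X appears before Y. The order of
    // unrelated types is arbitrary, as in the text above.
    static List<Class<?>> sortMostSpecificFirst(Collection<Class<?>> types) {
        List<Class<?>> sorted = new ArrayList<>(types);
        sorted.sort((x, y) -> {
            if (x.equals(y)) return 0;
            if (y.isAssignableFrom(x)) return -1; // x more specific: before y
            if (x.isAssignableFrom(y)) return 1;  // y more specific: before x
            return 0;                             // unrelated: arbitrary order
        });
        return sorted;
    }

    // Scans the sorted types from beginning to end and takes the first one
    // that is assignable from the runtime type, i.e. the closest super type.
    static Optional<Class<?>> findWrapperType(List<Class<?>> sorted, Class<?> runtimeType) {
        return sorted.stream()
            .filter(type -> type.isAssignableFrom(runtimeType))
            .findFirst();
    }

    public static void main(String[] args) {
        // Suppose wrappers were registered for Iterable and List, but not ArrayList
        List<Class<?>> sorted = sortMostSpecificFirst(List.of(Iterable.class, List.class));

        // For a runtime ArrayList the closest super type wins: List, not Iterable
        System.out.println(findWrapperType(sorted, ArrayList.class).get());
    }
}
```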

Before JSF 2.3 is finalised there are a couple of things that may still change. For instance, Map and Iterable have been added earlier as built-in wrappers, but could be refactored to be based on @FacesDataModel as well. The advantage is that the runtime would be a client of the new API as well, which in turn means it’s easier for the user to comprehend and override.

A more difficult and controversial change is to allow @FacesDataModel wrappers to override built-in wrappers. Currently it’s not possible to provide one’s own List wrapper, since List is built in and takes precedence. If @FacesDataModel would take precedence, then a user or library would be able to override this. This by itself is not that bad, since JSF lives and breathes by its ability to let users or libraries override or extend core functionality. However, the fear is that via this particular way of overriding, a user may update one of its libraries that happens to ship with an @FacesDataModel implementation for List, which would then take that user by surprise.

Things get even more complicated when both the new Iterable and Map wrappers would be implemented as @FacesDataModel AND @FacesDataModel would take precedence over the built-in types. In that case the Iterable wrapper would always match before the built-in List wrapper, making the latter unreachable. Now logically this would not matter, as Iterable handles lists just as well, but in practice this may be a problem for applications that in some subtle way depend on the specific behaviour of a given List wrapper (in all honesty, such applications will likely fail too when switching JSF implementations).

Finally, doing away with the built-in wrappers entirely and depending solely on @FacesDataModel is arguably the best option, but problematic too for reasons of backwards compatibility. This thus poses an interesting challenge between two opposite concerns: “Nothing can ever change, ever” and “Modernise to stay relevant and competitive”.

Conclusion

With @FacesDataModel custom DataModel wrappers can be registered, but those wrappers cannot (yet) override any of the built-in types.

Arjan Tijms

Activating JASPIC in JBoss WildFly

JBoss WildFly has a rather good implementation of JASPIC, the Java EE standard API to build authentication modules.

Unfortunately there's one big hurdle for using JASPIC on JBoss WildFly: it has to be activated. This activation is somewhat of a hack itself, and is done by putting the following XML in a file called standalone.xml that resides within the installed server:


<security-domain name="jaspitest" cache-type="default">
    <authentication-jaspi>
        <login-module-stack name="dummy">
            <login-module code="Dummy" flag="optional"/>
        </login-module-stack>
        <auth-module code="Dummy"/>
    </authentication-jaspi>
</security-domain>

Subsequently in the application a file called WEB-INF/jboss-web.xml needs to be created that references this (dummy) domain:


<?xml version="1.0"?>
<jboss-web>
    <security-domain>jaspitest</security-domain>
</jboss-web>

While this works it requires the installed server to be modified. For a universal Java EE application that has to run on multiple servers this is a troublesome requirement. While not difficult, it's something that's frequently forgotten and can take weeks if not months to resolve. And when it finally is resolved the entire process of getting someone to add the above XML fragment may have to be repeated all over again when a new version of JBoss is installed.

Clearly having to activate JASPIC using a server configuration file is less than ideal. The best solution would be to not require any kind of activation at all (like is the case for e.g. GlassFish, Geronimo and WebLogic). But this is currently not implemented for JBoss WildFly.

The next best thing is doing this activation from within the application. As it appears this is indeed possible using some reflective magic and the usage of JBoss (Undertow) internal APIs. Here's where the OmniSecurity JASPIC Undertow project comes in. With this project JASPIC can be activated by putting the following in the pom.xml of a Maven project:


<dependency>
    <groupId>org.omnifaces</groupId>
    <artifactId>omnifaces-security-jaspic-undertow</artifactId>
    <version>1.0</version>
</dependency>

The above causes JBoss WildFly/Undertow to load an extension that uses a number of internal APIs. It's not entirely clear why, but some of those are directly available, while others have to be declared as available. Luckily this can be done from within the application as well, by creating a META-INF/jboss-deployment-structure.xml file with the following content:


<?xml version='1.0' encoding='UTF-8'?>
<jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.2">
    <deployment>
        <dependencies>
            <module name="org.wildfly.extension.undertow" services="export" export="true" />
        </dependencies>
    </deployment>
</jboss-deployment-structure>

So how does the extension exactly work?

The most important code consists of two parts. A reflective part to retrieve what JBoss calls the "security domain" (the default is "other") and another part that uses the Undertow internal APIs to activate JASPIC. This is basically the same code Undertow would execute if the dummy domain is put in standalone.xml.

For completeness, the reflective part to retrieve the domain is:


String securityDomain = "other";

IdentityManager identityManager = deploymentInfo.getIdentityManager();
if (identityManager instanceof JAASIdentityManagerImpl) {
    try {
        Field securityDomainContextField = JAASIdentityManagerImpl.class.getDeclaredField("securityDomainContext");
        securityDomainContextField.setAccessible(true);
        SecurityDomainContext securityDomainContext = (SecurityDomainContext) securityDomainContextField.get(identityManager);

        securityDomain = securityDomainContext.getAuthenticationManager().getSecurityDomain();

    } catch (NoSuchFieldException | SecurityException | IllegalArgumentException | IllegalAccessException e) {
        logger.log(Level.SEVERE, "Can't obtain name of security domain, using 'other' now", e);
    }
}

The part that uses Undertow APIs to activate JASPIC is:


ApplicationPolicy applicationPolicy = new ApplicationPolicy(securityDomain);
JASPIAuthenticationInfo authenticationInfo = new JASPIAuthenticationInfo(securityDomain);
applicationPolicy.setAuthenticationInfo(authenticationInfo);
SecurityConfiguration.addApplicationPolicy(applicationPolicy);

deploymentInfo.setJaspiAuthenticationMechanism(new JASPIAuthenticationMechanism(securityDomain, null));
deploymentInfo.setSecurityContextFactory(new JASPICSecurityContextFactory(securityDomain));

The full source can be found on GitHub.

Conclusion

For JBoss WildFly it's needed to activate JASPIC. There are two hacks available to do this. One requires a modification to standalone.xml and a jboss-web.xml, while the other requires a jar on the classpath of the application and a jboss-deployment-structure.xml file.

It would be best if such activation was not required at all. Hopefully this will indeed be the case in a future JBoss.

Arjan Tijms

How Servlet containers all implement identity stores differently

In Java EE security two artefacts play a major role, the authentication mechanism and the identity store.

The authentication mechanism is responsible for interacting with the caller and the environment. E.g. it causes a UI to be rendered that asks for details such as a username and password, and after a postback retrieves these from the request. As such it's roughly equivalent to a controller in the MVC architecture.

Java EE has standardised 4 authentication mechanisms for a Servlet container, as well as a JASPIC API profile to provide a custom authentication mechanism for Servlet (and one for SOAP, but let's ignore that for now). Unfortunately standard custom mechanisms are only required to be supported by a full Java EE server, which means the popular web profile and standalone servlet containers are left in the dark.

Servlet vendors can adopt the standard API if they want and the Servlet spec even encourages this, but in practice few do, so developers can't depend on this. (Spec text is typically quite black and white. *Must support* means it's there, anything else like *should*, *is encouraged*, *may*, etc. simply means it's not there)

The following list enumerates the standard options:

  1. Basic
  2. Digest (encouraged to be supported, not required)
  3. Client-cert
  4. Form
  5. Custom/JASPIC (encouraged for standalone/web profile Servlet containers, required for full profile Servlet containers)

The identity store in turn is responsible for providing access to a storage system where caller data and credentials are stored. E.g. when given a valid caller name and password as input it returns a (possibly different) caller name and zero or more groups associated with the caller. As such it's roughly equivalent to a model in the MVC architecture; the identity store knows nothing about its environment and does not interact with the caller. It only performs the {credentials in, caller data out} function.
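To make this {credentials in, caller data out} function concrete, here's a minimal self-contained sketch in Java. All names (IdentityStore, CallerData, InMemoryIdentityStore) are made up for illustration and are not part of any existing API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

// Illustrative contract only; names are invented for this sketch.
interface IdentityStore {
    Optional<CallerData> validate(String callerName, String password);
}

// What a store returns: a (possibly different) caller name and zero or
// more groups. No HTTP, no session, no interaction with the caller.
class CallerData {
    private final String callerName;
    private final List<String> groups;

    CallerData(String callerName, List<String> groups) {
        this.callerName = callerName;
        this.groups = groups;
    }

    String getCallerName() { return callerName; }
    List<String> getGroups() { return groups; }
}

// Trivial in-memory implementation with one hardcoded caller.
class InMemoryIdentityStore implements IdentityStore {

    @Override
    public Optional<CallerData> validate(String callerName, String password) {
        if ("jan".equals(callerName) && "secret1".equals(password)) {
            return Optional.of(new CallerData("jan", Arrays.asList("user", "admin")));
        }
        return Optional.empty();
    }
}
```

Every container discussed below implements essentially this function, just behind a differently named and differently shaped interface.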

Identity stores are somewhat shrouded in mystery, and not without reason. Java EE has not standardised any identity store, nor has it really standardised any API or interface for them. There is a bridge profile for JAAS LoginModules, which are arguably the closest thing to a standard interface, but JAAS LoginModules can not be used in a portable way in Java EE since essential elements of them are not standardised. Furthermore, this bridge profile can only be used for custom authentication mechanisms (using JASPIC), which is itself only guaranteed to be available for Servlet containers that reside within a full Java EE server as mentioned above.

What happens now is that every Servlet container provides a proprietary interface and lookup method for identity stores. Nearly all of them ship with a couple of default implementations for common storage systems that the developer can choose to use. The most common ones are listed below:

  • In-memory (properties file/xml file based)
  • Database (JDBC/DataSource based)
  • LDAP

As a direct result of not being standardised, not only do Servlet containers provide their own implementations, they also each came up with their own names. Up till now no fewer than 16(!) terms have been discovered for essentially the same thing:

  1. authenticator
  2. authentication provider
  3. authentication repository
  4. authentication realm
  5. authentication store
  6. identity manager
  7. identity provider
  8. identity store
  9. login module
  10. login service
  11. realm
  12. relying party
  13. security policy domain
  14. security domain
  15. service provider
  16. user registry

Following a vote in the EG for the new Java EE security JSR, it was decided to use the term "identity store" going forward. This is therefore also the term used in this article.

To give an impression of how a variety of servlet containers have each implemented the identity store concept we analysed a couple of them. For each one we list the main interface one has to implement for a custom identity store, and if possible an overview of how the container actually uses this interface in an authentication mechanism.

The servlet containers and application servers containing such containers that we've looked at are given in the following list. Each one is described in greater detail below.

  1. Tomcat
  2. Jetty
  3. Undertow
  4. JBoss EAP/WildFly
  5. Resin
  6. GlassFish
  7. Liberty
  8. WebLogic

 

Tomcat

Tomcat calls its identity store "Realm". It's represented by the interface shown below:


public interface Realm {

    Principal authenticate(String username);
    Principal authenticate(String username, String credentials);
    Principal authenticate(String username, String digest, String nonce, String nc, String cnonce, String qop, String realm, String md5a2);
    Principal authenticate(GSSContext gssContext, boolean storeCreds);
    Principal authenticate(X509Certificate certs[]);

    void backgroundProcess();
    SecurityConstraint[] findSecurityConstraints(Request request, Context context);
    boolean hasResourcePermission(Request request, Response response, SecurityConstraint[] constraint, Context context) throws IOException;
    boolean hasRole(Wrapper wrapper, Principal principal, String role);
    boolean hasUserDataPermission(Request request, Response response, SecurityConstraint[] constraint) throws IOException;

    void addPropertyChangeListener(PropertyChangeListener listener);
    void removePropertyChangeListener(PropertyChangeListener listener);

    Container getContainer();
    void setContainer(Container container);
    CredentialHandler getCredentialHandler();
    void setCredentialHandler(CredentialHandler credentialHandler);
}

According to the documentation, "A Realm [identity store] is a "database" of usernames and passwords that identify valid users of a web application (or set of web applications), plus an enumeration of the list of roles associated with each valid user."

Tomcat's bare identity store interface is rather big as can be seen. In practice though implementations inherit from RealmBase, which is a base class (as its name implies). Somewhat confusingly its JavaDoc says that it's a realm "that reads an XML file to configure the valid users, passwords, and roles".

The only methods that most of Tomcat's identity stores implement are authenticate(String username, String credentials) for the actual authentication, String getName() to return the identity store's name (this would perhaps have been an annotation if this was designed today), and startInternal() to do initialisation (would likely be done via an @PostConstruct annotation today).
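As a rough sketch of that pattern, a custom realm could look as follows. To keep the example self-contained, `SimpleRealmBase` here is a made-up stand-in for Tomcat's RealmBase (which has a far larger contract), so the code compiles without Tomcat on the classpath:

```java
import java.security.Principal;

// Made-up stand-in for Tomcat's RealmBase, just enough to show the
// shape of a custom realm; the real base class does far more.
abstract class SimpleRealmBase {
    protected abstract String getName();   // names the identity store
    protected void startInternal() {}      // initialisation hook
    public abstract Principal authenticate(String username, String credentials);
}

// A custom "realm" backed by a hardcoded in-memory user.
class InMemoryRealm extends SimpleRealmBase {

    @Override
    protected String getName() {
        return "InMemoryRealm";
    }

    @Override
    protected void startInternal() {
        // a real store would load or connect to its backing system here
    }

    @Override
    public Principal authenticate(String username, String credentials) {
        if ("jan".equals(username) && "secret1".equals(credentials)) {
            return () -> username; // java.security.Principal as a lambda
        }
        return null; // authentication failed
    }
}
```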

Example of usage

The code below shows an example of how Tomcat actually uses its identity store. The following shortened fragment is taken from the implementation of the Servlet FORM authentication mechanism in Tomcat.


// Obtain reference to identity store
Realm realm = context.getRealm();

if (characterEncoding != null) {
    request.setCharacterEncoding(characterEncoding);
}

String username = request.getParameter(FORM_USERNAME);
String password = request.getParameter(FORM_PASSWORD);

// Delegating of authentication mechanism to identity store
principal = realm.authenticate(username, password);

if (principal == null) {
    forwardToErrorPage(request, response, config);
    return false;
}

if (session == null) {
    session = request.getSessionInternal(false);
}

// Save the authenticated Principal in our session
session.setNote(FORM_PRINCIPAL_NOTE, principal);

What sets Tomcat apart from most other systems is that the authenticate() call in most cases directly goes to the custom identity store implementation instead of through many levels of wrappers, bridges, delegators and what have you. This is even true when the provided base class RealmBase is used.

 

Jetty

Jetty calls its identity store LoginService. It's represented by the interface shown below:


public interface LoginService {

    String getName();
    UserIdentity login(String username, Object credentials, ServletRequest request);
    boolean validate(UserIdentity user);

    IdentityService getIdentityService();
    void setIdentityService(IdentityService service);

    void logout(UserIdentity user);
}

According to its JavaDoc, a "Login service [identity store] provides an abstract mechanism for an [authentication mechanism] to check credentials and to create a UserIdentity using the set [injected] IdentityService".

There are a few things to remark here. The getName() method names the identity store. This would likely be done via an annotation had this interface been designed today.

The essential method of the Jetty identity store is login(). It's username/credentials based, where the credentials are an opaque Object. The ServletRequest is not often used, but a JAAS bridge uses it to provide a RequestParameterCallback to Jetty specific JAAS LoginModules.

validate() is essentially a kind of shortcut method for login() != null, albeit without using the credentials.

A distinguishing aspect of Jetty is that its identity stores get injected with an IdentityService, which the store has to use to create user identities (users) based on a Subject, (caller) Principal and a set of roles. It's not 100% clear what this was intended to accomplish, since the only implementation of this service just returns new DefaultUserIdentity(subject, userPrincipal, roles), where DefaultUserIdentity is mostly just a simple POJO that encapsulates those three data items.

Another remarkable method is logout(). This is remarkable since the identity store typically just returns authentication data and doesn't hold state per user. It's the authentication mechanism that knows about the environment in which this authentication data is used (e.g. knows about the HTTP request and session). Indeed, almost no identity stores make use of this. The only one that does is the special identity store that bridges to JAAS LoginModules. This one isn't stateful, but provides an operation on the passed in user identity. As it appears, the principal returned by this bridge identity store encapsulates the JAAS LoginContext, on which the logout() method is called at this point.

Example of usage

The code below shows an example of how Jetty uses its identity store. The following shortened and 'unfolded' fragment is taken from the implementation of the Servlet FORM authentication mechanism in Jetty.


if (isJSecurityCheck(uri)) {
    String username = request.getParameter(__J_USERNAME);
    String password = request.getParameter(__J_PASSWORD);

    // Delegating of authentication mechanism to identity store
    UserIdentity user = _loginService.login(username, password, request);
    if (user != null) {
        renewSession(request, (request instanceof Request ? ((Request) request).getResponse() : null));

        HttpSession session = request.getSession(true);
        session.setAttribute(__J_AUTHENTICATED, new SessionAuthentication(getAuthMethod(), user, password));

        // ...

        base_response.sendRedirect(redirectCode, response.encodeRedirectURL(nuri));
        return form_auth;
    }
    // ...
}

In Jetty a call to the identity store's login() method will in most cases directly call the installed identity store, and will not go through many layers of delegation, bridges, etc. There is a convenience base class that identity store implementations can use, but this is not required.

If the base class is used, two abstract methods have to be implemented: UserIdentity loadUser(String username) and void loadUsers(), where typically only the former really does something. When this base class is indeed used, the above call to login() goes to the implementation in the base class. This first checks a cache, and if the user is not there calls the sub class via the mentioned loadUser() method.


public UserIdentity login(String username, Object credentials, ServletRequest request) {

    UserIdentity user = _users.get(username);

    if (user == null)
        user = loadUser(username);

    if (user != null) {
        UserPrincipal principal = (UserPrincipal) user.getUserPrincipal();
        if (principal.authenticate(credentials))
            return user;
    }

    return null;
}

The user returned from the sub class has a feature that's a little different from most other servers; it contains a Jetty specific principal that knows how to process the opaque credentials. It delegates this however to a Credential implementation as shown below:


public boolean authenticate(Object credentials) {
    return credential != null && credential.check(credentials);
}

The credential used here is put into the user instance and represents the *expected* credential, and can be of a multitude of types, e.g. Crypt, MD5 or Password. MD5 means the expected password is MD5 hashed, while just Password means the expected password is plain text. The check for the latter looks as follows:


public boolean check(Object credentials) {
    if (this == credentials)
        return true;
    if (credentials instanceof Password)
        return credentials.equals(_pw);
    if (credentials instanceof String)
        return credentials.equals(_pw);
    if (credentials instanceof char[])
        return Arrays.equals(_pw.toCharArray(), (char[]) credentials);
    if (credentials instanceof Credential)
        return ((Credential) credentials).check(_pw);
    return false;
}
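The idea behind a hashed credential type such as MD5 can be sketched in a self-contained way. The following illustrates only the general principle (digest the submitted plain text and compare it to the stored digest); it is not Jetty's actual Credential.MD5 code:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Illustrates a hashed credential type: the store holds only a digest
// of the password, and check() digests the submitted plain text before
// comparing. Not Jetty's actual Credential.MD5 implementation.
class Md5StyleCredential {

    private final String expectedDigestHex;

    Md5StyleCredential(String expectedDigestHex) {
        this.expectedDigestHex = expectedDigestHex;
    }

    boolean check(Object credentials) {
        if (!(credentials instanceof String)) {
            return false;
        }
        return expectedDigestHex.equalsIgnoreCase(md5Hex((String) credentials));
    }

    static String md5Hex(String input) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                .digest(input.getBytes(StandardCharsets.UTF_8));
            // %032x pads the hex representation to the full 16 bytes
            return String.format("%032x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```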

 

Undertow

Undertow is one of the newest Servlet containers. It's created by Red Hat to replace Tomcat (JBossWeb) in JBoss EAP, and can already be used in WildFly 8/9/10 which are the unsupported precursors for JBoss EAP 7. Undertow can also be used standalone.

The native identity store interface of Undertow is the IdentityManager, which is shown below:


public interface IdentityManager {

    Account verify(Credential credential);
    Account verify(String id, Credential credential);
    Account verify(Account account);
}

Peculiarly enough, there are no direct implementations for actual identity stores shipped with Undertow.

Example of usage

The code below shows an example of how Undertow actually uses its identity store. The following shortened fragment is taken from the implementation of the Servlet FORM authentication mechanism in Undertow.


FormData data = parser.parseBlocking();
FormData.FormValue jUsername = data.getFirst("j_username");
FormData.FormValue jPassword = data.getFirst("j_password");

if (jUsername == null || jPassword == null) {
    return NOT_AUTHENTICATED;
}

String userName = jUsername.getValue();
String password = jPassword.getValue();
AuthenticationMechanismOutcome outcome = null;
PasswordCredential credential = new PasswordCredential(password.toCharArray());

// Obtain reference to identity store
IdentityManager identityManager = securityContext.getIdentityManager();

// Delegating of authentication mechanism to identity store
Account account = identityManager.verify(userName, credential);

if (account != null) {
    securityContext.authenticationComplete(account, name, true);
    outcome = AUTHENTICATED;
} else {
    securityContext.authenticationFailed(MESSAGES.authenticationFailed(userName), name);
}

if (outcome == AUTHENTICATED) {
    handleRedirectBack(exchange);
    exchange.endExchange();
}

return outcome != null ? outcome : NOT_AUTHENTICATED;

 

JBoss EAP/WildFly

JBoss identity stores are based on the JAAS LoginModule, which is shown below:


public interface LoginModule {

    void initialize(Subject subject, CallbackHandler callbackHandler, Map<String,?> sharedState, Map<String,?> options);
    boolean login() throws LoginException;
    boolean commit() throws LoginException;
    boolean abort() throws LoginException;
    boolean logout() throws LoginException;
}

As with most application servers, the JAAS LoginModule interface is used in a highly application server specific way.

It's a big question why this interface is used at all, since you can't just implement that interface. Instead you have to inherit from a credential specific base class. Therefore the LoginModule interface is practically an internal implementation detail here, not something the user actually uses. Despite that, it's not uncommon for users to think "plain" JAAS is being used and that JAAS login modules are universal and portable, but they are anything but.

For the username/password credential the base class to inherit from is UsernamePasswordLoginModule. As per the JavaDoc of this class, there are two methods that need to be implemented: getUsersPassword() and getRoleSets().

getUsersPassword() has to return the actual password for the provided username, so the base code can compare it against the provided password. If those passwords match getRoleSets() is called to retrieve the roles associated with the username. Note that JBoss typically does not map groups to roles, so it returns roles here which are then later on passed into APIs that normally would expect groups. In both methods the username is available via a call to getUsername().

The "real" contract as *hypothetical* interface could be thought of to look as follows:


public interface JBossIdentityStore {
    String getUsersPassword(String username);
    Group[] getRoleSets(String username) throws LoginException;
}
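A trivial in-memory take on this hypothetical contract could look as follows. Group[] is simplified to String[] here, since java.security.acl.Group is deprecated (and gone from recent JDKs), and the authenticate() method merely mimics what the base class does with the two methods; this is purely illustrative, not a real JBoss API:

```java
import javax.security.auth.login.LoginException;

// In-memory sketch of the hypothetical contract above. Group[] is
// simplified to String[]; this is illustrative, not a JBoss API.
class InMemoryJBossStyleStore {

    String getUsersPassword(String username) {
        return "jan".equals(username) ? "secret1" : null;
    }

    String[] getRoleSets(String username) throws LoginException {
        if ("jan".equals(username)) {
            return new String[] { "user", "admin" };
        }
        throw new LoginException("Unknown user: " + username);
    }

    // Mimics what the real base class does with the two methods: compare
    // the stored password against the provided one, then fetch the roles.
    String[] authenticate(String username, String password) throws LoginException {
        String expected = getUsersPassword(username);
        if (expected == null || !expected.equals(password)) {
            throw new LoginException("Authentication failed for " + username);
        }
        return getRoleSets(username);
    }
}
```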

Example of usage

There's no direct usage of the LoginModule in JBoss. JBoss EAP 7/WildFly 8-9-10 directly uses Undertow as its Servlet container, which means the authentication mechanisms shipped with it use the IdentityManager interface exactly as shown above in the Undertow section.

For usage in JBoss there's a bridge implementation of the IdentityManager to the JBoss specific JAAS LoginModule available.

The "identityManager.verify(userName, credential)" call shown above ends up at JAASIdentityManagerImpl#verify. This first wraps the username, but extracts the password from PasswordCredential. Abbreviated it looks as follows:


public Account verify(String id, Credential credential) {
    if (credential instanceof DigestCredential) {
        // ..
    } else if (credential instanceof PasswordCredential) {
        return verifyCredential(
            new AccountImpl(id),
            copyOf(((PasswordCredential) credential).getPassword())
        );
    }
    return verifyCredential(new AccountImpl(id), credential);
}

The next method called in the "password chain" is somewhat troublesome, as it doesn't just return the account details, but as an unavoidable side-effect also puts the result of authentication in TLS. It takes a credential as an Object and delegates further to an isValid() method. This one uses a Subject as an output parameter (meaning it doesn't return the authentication data but puts it inside the Subject that's passed in). The calling method then extracts this authentication data from the subject and puts it into its own type instead.

Abbreviated again this looks as follows:


private Account verifyCredential(AccountImpl account, Object credential) {
    Subject subject = new Subject();
    boolean isValid = securityDomainContext
        .getAuthenticationManager()
        .isValid(account.getOriginalPrincipal(), credential, subject);

    if (isValid) {

        // Stores details in TLS
        getSecurityContext()
            .getUtil()
            .createSubjectInfo(account.getOriginalPrincipal(), credential, subject);

        return new AccountImpl(
            getPrincipal(subject), getRoles(subject),
            credential, account.getOriginalPrincipal()
        );
    }

    return null;
}

The next method being called is isValid() on a type called AuthenticationManager. Via two intermediate methods this ends up calling proceedWithJaasLogin.

This method obtains a LoginContext, which wraps a Subject, which wraps the Principal and roles shown above (yes, there's a lot of wrapping going on). Abbreviated the method looks as follows:


private boolean proceedWithJaasLogin(Principal principal, Object credential, Subject theSubject) {
    try {
        copySubject(defaultLogin(principal, credential).getSubject(), theSubject);
        return true;
    } catch (LoginException e) {
        return false;
    }
}

The defaultLogin() method finally just calls plain Java SE JAAS code, although just before doing that it uses reflection to call a setSecurityInfo() method on the CallbackHandler. It's remarkable that even though this method seems to be required and known in advance, there's no interface used for this. The handler being used here is often of the type JBossCallbackHandler.

Brought back to its essence the method looks like this:


private LoginContext defaultLogin(Principal principal, Object credential) throws LoginException {

    CallbackHandler theHandler = (CallbackHandler) handler.getClass().newInstance();
    setSecurityInfo.invoke(theHandler, new Object[] {principal, credential});

    LoginContext lc = new LoginContext(securityDomain, subject, handler);
    lc.login();

    return lc;
}

Via some reflective magic the JAAS code shown here will locate, instantiate and at long last call our custom LoginModule's initialize(), login() and commit() methods, which in turn will call the two methods that we needed to implement in our subclass.

 

Resin

Resin calls its identity store "Authenticator". It's represented by a single interface shown below:


public interface Authenticator {

    String getAlgorithm(Principal uid);
    Principal authenticate(Principal user, Credentials credentials, Object details);
    boolean isUserInRole(Principal user, String role);
    void logout(Principal user);
}

There are a few things to remark here. The logout() method doesn't seem to make much sense, since it's the authentication mechanism that keeps track of the login state in the overarching server. Indeed, the method does not seem to be called by Resin, and there are no identity stores implementing it except for the AbstractAuthenticator that does nothing there.

isUserInRole() is somewhat remarkable as well. This method is not intended to check for the roles of any given user, such as you could for instance use in an admin UI. Instead, it's intended to be used by the HttpServletRequest#isUserInRole call, and therefore only for the *current* user. This is indeed how it's used by Resin. This is remarkable, since most other systems keep the roles in memory. Retrieving them from the identity store every time can be rather heavyweight. To combat this, Resin uses a CachingPrincipal, but an identity store implementation has to opt-in to actually use this.

Example of usage

The code below shows an example of how Resin actually uses its identity store. The following shortened fragment is taken from the implementation of the Servlet FORM authentication mechanism in Resin.


// Obtain reference to identity store
Authenticator auth = getAuthenticator();

// ..

String userName = request.getParameter("j_username");
String passwordString = request.getParameter("j_password");

if (userName == null || passwordString == null)
    return null;

char[] password = passwordString.toCharArray();
BasicPrincipal basicUser = new BasicPrincipal(userName);
Credentials credentials = new PasswordCredentials(password);

// Delegating of authentication mechanism to identity store
user = auth.authenticate(basicUser, credentials, request);

return user;

A nice touch here is that Resin obtains the identity store via CDI injection. A somewhat unknown fact is that Resin has its own CDI implementation, CanDI, and uses it internally for a lot of things. Unlike some other servers, the call to authenticate() here goes straight to the identity store. There are no layers of lookup or bridge code in between.

That said, Resin does encourage (but not require) the usage of an abstract base class it provides: AbstractAuthenticator. IFF this base class is indeed used (again, this is not required), then there are a few levels of indirection the flow goes through before reaching one's own code. In that case, the authenticate() call shown above will start with delegating to one of three methods for known credential types. This is shown below:


public Principal authenticate(Principal user, Credentials credentials, Object details) {
    if (credentials instanceof PasswordCredentials)
        return authenticate(user, (PasswordCredentials) credentials, details);
    if (credentials instanceof HttpDigestCredentials)
        return authenticate(user, (HttpDigestCredentials) credentials, details);
    if (credentials instanceof DigestCredentials)
        return authenticate(user, (DigestCredentials) credentials, details);
    return null;
}

Following the password trail, the next level will merely extract the password string:


protected Principal authenticate(Principal principal, PasswordCredentials cred, Object details) {
    return authenticate(principal, cred.getPassword());
}

The next authenticate method will call into a more specialized method that only obtains a User instance from the store. This instance has the expected password embedded, which is then verified against the provided password. Abbreviated it looks as follows:


protected Principal authenticate(Principal principal, char[] password) {
    PasswordUser user = getPasswordUser(principal);

    if (user == null || user.isDisabled() || (!isMatch(principal, password, user.getPassword()) && !user.isAnonymous()))
        return null;

    return user.getPrincipal();
}

The getPasswordUser() method goes through one more level of convenience, where it extracts the caller name that was wrapped by the Principal:


protected PasswordUser getPasswordUser(Principal principal) {
    return getPasswordUser(principal.getName());
}

This last call to getPasswordUser(String) is what typically ends up in our own custom identity store.

Finally, it's interesting to see what data PasswordUser contains. Abbreviated again this is shown below:


public class PasswordUser {

    Principal principal;
    char[] password;

    boolean disabled;
    boolean anonymous;
    String[] roles;
}
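Putting the above together, a custom store in this style essentially boils down to a single name-based lookup. The sketch below is self-contained: `SimplePasswordUser` is a made-up stand-in mirroring the fields of Resin's PasswordUser, not the actual class:

```java
import java.security.Principal;

// Made-up stand-in mirroring the fields of Resin's PasswordUser shown
// above; not the actual Resin class.
class SimplePasswordUser {
    final Principal principal;
    final char[] password;
    final boolean disabled;
    final String[] roles;

    SimplePasswordUser(String name, char[] password, boolean disabled, String... roles) {
        this.principal = () -> name; // java.security.Principal as a lambda
        this.password = password;
        this.disabled = disabled;
        this.roles = roles;
    }
}

// A custom store in this style only needs the name based lookup; the
// abstract base class has already unwrapped the Principal by this point.
class InMemoryResinStyleStore {

    SimplePasswordUser getPasswordUser(String username) {
        if ("jan".equals(username)) {
            return new SimplePasswordUser("jan", "secret1".toCharArray(), false, "user");
        }
        return null; // unknown user
    }
}
```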

 

Glassfish

GlassFish identity stores are based on the JAAS LoginModule, which is shown below:


public interface LoginModule {
void initialize(Subject subject, CallbackHandler callbackHandler, Map<String,?> sharedState, Map<String,?> options);
boolean login() throws LoginException;
boolean commit() throws LoginException;
boolean abort() throws LoginException;
boolean logout() throws LoginException;
}

Just as we saw with JBoss above, the LoginModule interface is again used in a very application server specific way. In practice, you don't just implement a LoginModule but inherit from com.sun.enterprise.security.BasePasswordLoginModule or its empty subclass com.sun.appserv.security.AppservPasswordLoginModule for password based logins, or com.sun.appserv.security.AppservCertificateLoginModule/com.sun.enterprise.security.BaseCertificateLoginModule for certificate ones.

As per the JavaDoc of those classes, the only method that needs to be implemented is authenticateUser(). Inside that method the username is available via the protected variable(!) "_username", while the password can be obtained via getPasswordChar(). When a custom identity store is done with its work, commitUserAuthentication() has to be called with an array of groups when authentication succeeded, and a LoginException thrown when it failed. So essentially that's the "real" contract for a custom login module. The fact that the other functionality is in the same class is more a case of using inheritance where aggregation might have made more sense. As we saw with JBoss, the LoginModule interface itself seems more like an implementation detail instead of something a client can really take advantage of.

The "real" contract as *hypothetical* interface looks as follows:


public interface GlassFishIdentityStore {
    String[] authenticateUser(String username, char[] password) throws LoginException;
}
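As a sketch, a trivial in-memory implementation of this hypothetical contract (using only the JDK's own LoginException) could look like:

```java
import java.util.Arrays;
import javax.security.auth.login.LoginException;

// Trivial in-memory implementation of the hypothetical contract above:
// return the caller's groups on success, throw LoginException on failure.
class InMemoryGlassFishStyleStore {

    String[] authenticateUser(String username, char[] password) throws LoginException {
        if ("jan".equals(username) && Arrays.equals(password, "secret1".toCharArray())) {
            return new String[] { "user", "admin" };
        }
        throw new LoginException("Login failed for " + username);
    }
}
```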

Even though a LoginModule is specific for a type of identity store (e.g. File, JDBC/database, LDAP, etc), LoginModules in GlassFish are mandated to be paired with another construct called a Realm. While having the same name as the Tomcat equivalent and even a nearly identical description, the type is completely different. In GlassFish it's actually a kind of DAO, albeit one with a rather heavyweight contract.

Most of the methods of this DAO are not actually called by the runtime during authentication, nor are they used by applications themselves. They're likely intended to be used by the GlassFish admin console, so that a GlassFish administrator can add and delete users. However, very few actual realms support this, and with good reason: it just doesn't make much sense for many realms. E.g. LDAP and Solaris have their own management UIs already, and a JDBC/database realm is typically application specific, so there the application already has its own DAOs and services to manage users, and exposes its own UI as well.

A custom LoginModule is not forced to use this Realm, but the base class code will try to instantiate one and grab its name, so one must still be paired to the LoginModule.

The following lists the public and protected methods of this Realm class. Note that the body is left out for the non-abstract methods.


public abstract class Realm implements Comparable {

public static synchronized Realm getDefaultInstance();
public static synchronized String getDefaultRealm();
public static synchronized Enumeration getRealmNames();
public static synchronized void getRealmStatsProvier();
public static synchronized Realm getInstance(String);
public static synchronized Realm instantiate(String, File);
public static synchronized Realm instantiate(String, String, Properties);
public static synchronized void setDefaultRealm(String);
public static synchronized void unloadInstance(String);
public static boolean isValidRealm(String);
protected static synchronized void updateInstance(Realm, String);

public abstract void addUser(String, String, String[]);
public abstract User getUser(String);
public abstract void updateUser(String, String, String, String[]);
public abstract void removeUser(String);

public abstract Enumeration getUserNames();
public abstract Enumeration getGroupNames();
public abstract Enumeration getGroupNames(String);

public abstract void persist();
public abstract void refresh();

public abstract AuthenticationHandler getAuthenticationHandler();
public abstract boolean supportsUserManagement();
public abstract String getAuthType();

public int compareTo(Object);
public final String getName();
public synchronized String getJAASContext();
public synchronized String getProperty(String);
public synchronized void setProperty(String, String);

protected void init(Properties);
protected ArrayList<String> getMappedGroupNames(String);
protected String[] addAssignGroups(String[]);
protected final void setName(String);
protected synchronized Properties getProperties();
}

Example of usage

To make matters a bit more complicated, there's no direct usage of the LoginModule in GlassFish either. GlassFish' Servlet container is internally based on Tomcat, and therefore the implementation of the FORM authentication mechanism is a Tomcat class (which strongly resembles the class in Tomcat itself, but has small differences here and there). Confusingly, this uses a class named Realm again, but it's a totally different Realm than the one shown above. This is shown below:


// Obtain reference to identity store
Realm realm = context.getRealm();

String username = hreq.getParameter(FORM_USERNAME);
String pwd = hreq.getParameter(FORM_PASSWORD);
char[] password = ((pwd != null)? pwd.toCharArray() : null);

// Delegating of authentication mechanism to identity store
principal = realm.authenticate(username, password);

if (principal == null) {
forwardToErrorPage(request, response, config);
return (false);
}

if (session == null)
session = getSession(request, true);

session.setNote(FORM_PRINCIPAL_NOTE, principal);

This code is largely identical to the Tomcat version shown above. The Tomcat Realm in this case is not the identity store directly, but an adapter called RealmAdapter. It first calls the following slightly abbreviated method for the password credential:


public Principal authenticate(String username, char[] password) {
if (authenticate(username, password, null)) {
return new WebPrincipal(username, password, SecurityContext.getCurrent());
}
return null;
}
This in turn calls the following abbreviated method, which handles the two supported types of credentials:

protected boolean authenticate(String username, char[] password, X509Certificate[] certs) {
try {
if (certs != null) {
// ... create subject
LoginContextDriver.doX500Login(subject, moduleID);
} else {
LoginContextDriver.login(username, password, _realmName);
}
return true;
} catch (Exception le) {}

return false;
}
The login method that gets called looks, again strongly abbreviated, as follows:

public static void login(String username, char[] password, String realmName){
Subject subject = new Subject();
subject.getPrivateCredentials().add(new PasswordCredential(username, password, realmName));

LoginContextDriver.login(subject, PasswordCredential.class);
}

This new login method checks for several credential types, which abbreviated looks as follows:


public static void login(Subject subject, Class cls) throws LoginException {
if (cls.equals(PasswordCredential.class))
doPasswordLogin(subject);
else if (cls.equals(X509CertificateCredential.class))
doCertificateLogin(subject);
else if (cls.equals(AnonCredential.class))
doAnonLogin();
else if (cls.equals(GSSUPName.class))
doGSSUPLogin(subject);
else if (cls.equals(X500Name.class))
doX500Login(subject, null);
else
throw new LoginException("Unknown credential type, cannot login.");
}

As we're following the password trail, we're going to look at the doPasswordLogin() method here, which strongly abbreviated looks as follows:


private static void doPasswordLogin(Subject subject) throws LoginException {
try {
new LoginContext(
Realm.getInstance(
getPrivateCredentials(subject, PasswordCredential.class).getRealm()
).getJAASContext(),
subject,
dummyCallback
).login();
} catch (Exception e) {
throw (LoginException) new LoginException("Login failed: " + e.getMessage()).initCause(e);
}
}

We're now 5 levels deep, and we're about to see our custom login module being called.

At this point it's down to plain Java SE JAAS code. First, the name of the realm that was stuffed into a PasswordCredential (which in turn was stuffed into a Subject) is used to obtain a Realm instance of the type that was shown way above; the GlassFish DAO-like type. Via this instance the realm name is mapped to another name: the "JAAS context". This JAAS context name is the name under which our LoginModule has to be registered. The LoginContext does some magic to obtain this LoginModule from a configuration file and initializes it with, among other things, the Subject. The login(), commit() and logout() methods can then make use of this Subject later on.
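Stripped of all GlassFish specifics, the plain Java SE JAAS mechanics at work here can be sketched as follows. All names are invented, and a programmatic Configuration stands in for the configuration file:

```java
import java.util.HashMap;
import java.util.Map;
import javax.security.auth.Subject;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.Configuration;
import javax.security.auth.login.LoginContext;
import javax.security.auth.login.LoginException;
import javax.security.auth.spi.LoginModule;

public class JaasSketch {

    // A trivial LoginModule; a real one would validate credentials in login()
    public static class DemoLoginModule implements LoginModule {
        private Subject subject;

        @Override
        public void initialize(Subject subject, CallbackHandler handler,
                Map<String, ?> sharedState, Map<String, ?> options) {
            this.subject = subject;
        }

        @Override
        public boolean login() throws LoginException {
            return true; // credential validation would go here
        }

        @Override
        public boolean commit() throws LoginException {
            // Success: populate the Subject that was handed to us in initialize()
            subject.getPrincipals().add(() -> "test");
            return true;
        }

        @Override
        public boolean abort() { return true; }

        @Override
        public boolean logout() { return true; }
    }

    public static Subject login(String jaasContext) throws LoginException {
        // Programmatic stand-in for the JAAS configuration file; it maps
        // every context name to our single demo module
        Configuration.setConfiguration(new Configuration() {
            @Override
            public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
                return new AppConfigurationEntry[] {
                    new AppConfigurationEntry(
                        DemoLoginModule.class.getName(),
                        AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
                        new HashMap<String, Object>())
                };
            }
        });

        Subject subject = new Subject();

        // LoginContext reflectively instantiates the configured module and
        // calls initialize(), login() and commit() on it, just like
        // GlassFish's LoginContextDriver eventually does
        new LoginContext(jaasContext, subject, callbacks -> {}).login();
        return subject;
    }
}
```

This is essentially what remains when the five levels of GlassFish bridging and delegating are peeled away.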

At long last, the login() method call (via 2 further private helper methods, not shown here) will at 7 levels deep cause the login() method of our LoginModule to be called. This happens via reflective code which looks as follows:


// methodName == "login" here

// find the requested method in the LoginModule
for (mIndex = 0; mIndex < methods.length; mIndex++) {
if (methods[mIndex].getName().equals(methodName))
break;
}

// set up the arguments to be passed to the LoginModule method
Object[] args = { };

// invoke the LoginModule method
boolean status = ((Boolean) methods[mIndex].invoke(moduleStack[i].module, args)).booleanValue();
But remember that in GlassFish we didn't directly implement LoginModule#login() but the abstract authenticateUser() method of the BasePasswordLoginModule, so we still have one more level to go. The final call at level 8 that causes our very own custom method to be called can be seen below:

final public boolean login() throws LoginException {

// Extract the username, password and realm name from the Subject
extractCredentials();

// Delegate the actual authentication to subclass (finally!)
authenticateUser();

return true;
}

 

Liberty

Liberty calls its identity stores "user registry". It's shown below:


public interface UserRegistry {
void initialize(Properties props) throws CustomRegistryException, RemoteException;

String checkPassword(String userSecurityName, String password) throws PasswordCheckFailedException, CustomRegistryException, RemoteException;
String mapCertificate(X509Certificate[] certs) throws CertificateMapNotSupportedException, CertificateMapFailedException, CustomRegistryException, RemoteException;
String getRealm() throws CustomRegistryException, RemoteException;

Result getUsers(String pattern, int limit) throws CustomRegistryException, RemoteException;
String getUserDisplayName(String userSecurityName) throws EntryNotFoundException, CustomRegistryException, RemoteException;
String getUniqueUserId(String userSecurityName) throws EntryNotFoundException, CustomRegistryException, RemoteException;
String getUserSecurityName(String uniqueUserId) throws EntryNotFoundException, CustomRegistryException, RemoteException;
boolean isValidUser(String userSecurityName) throws CustomRegistryException, RemoteException;

Result getGroups(String pattern, int limit) throws CustomRegistryException, RemoteException;
String getGroupDisplayName(String groupSecurityName) throws EntryNotFoundException, CustomRegistryException, RemoteException;
String getUniqueGroupId(String groupSecurityName) throws EntryNotFoundException, CustomRegistryException, RemoteException;
List getUniqueGroupIds(String uniqueUserId) throws EntryNotFoundException, CustomRegistryException, RemoteException;
String getGroupSecurityName(String uniqueGroupId) throws EntryNotFoundException, CustomRegistryException, RemoteException;
boolean isValidGroup(String groupSecurityName) throws CustomRegistryException, RemoteException;

List getGroupsForUser(String groupSecurityName) throws EntryNotFoundException, CustomRegistryException, RemoteException;
WSCredential createCredential(String userSecurityName) throws NotImplementedException, EntryNotFoundException, CustomRegistryException, RemoteException;    
}

As can be seen, it's clearly one of the most heavyweight interfaces for an identity store that we've seen thus far. As Liberty is closed source we can't see exactly what the server uses all these methods for.

As can be seen though, it has methods to list all users and groups that the identity store manages (getUsers(), getGroups()), as well as methods to get what IBM calls a "display name", "unique ID" and "security name", which are apparently associated with both user and group names. According to the published JavaDoc, display names are optional. It's perhaps worth asking whether the richness that these name mappings potentially allow for is worth the extra complexity seen here.

createCredential() stands out, as the JavaDoc mentions it's never called, at least for the 8.5.5 release of Liberty.

The main method that does the actual authentication is checkPassword(). It's clearly username/password based. Failure has to be indicated by throwing an exception, while success returns the passed-in username again (or optionally any other valid name, which is a bit unlike what most other systems do). There's support for certificates via a separate method, mapCertificate(), which seemingly has to be called first, with the resulting username then passed into checkPassword() again.
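A sketch of just this authentication core, with a simplified stand-in for the real Liberty types (those live in com.ibm.websphere.security, and the user data here is invented):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for Liberty's exception type, which really lives
// in com.ibm.websphere.security
class PasswordCheckFailedException extends Exception {
    PasswordCheckFailedException(String message) { super(message); }
}

// Sketch of only the authentication-relevant core of a custom user
// registry; the user "test" with password "secret" is invented
public class InMemoryUserRegistry {

    private final Map<String, String> passwords = new HashMap<>();

    public InMemoryUserRegistry() {
        passwords.put("test", "secret");
    }

    // Success returns the (possibly mapped) security name;
    // failure is signalled by throwing
    public String checkPassword(String userSecurityName, String password)
            throws PasswordCheckFailedException {
        if (!password.equals(passwords.get(userSecurityName))) {
            throw new PasswordCheckFailedException("Login failed for " + userSecurityName);
        }
        return userSecurityName;
    }
}
```

The contrast with the full UserRegistry interface above makes the point: one method carries the actual authentication, the other twenty-odd carry the management and mapping extras.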

Example of usage

Since Liberty is closed source we can't actually see how the server uses its identity store. Some implementation examples are given by IBM and myself.

 

WebLogic

It's not entirely clear what an identity store in WebLogic is really called. There are many moving parts. The overall term seems to be "security provider", but these are subdivided in authentication providers, identity assertion providers, principal validation providers, authorization providers, adjudication providers and many more providers.

One of the entry points seems to be an "Authentication Provider V2", which is given below:


public interface AuthenticationProviderV2 extends SecurityProvider {

AppConfigurationEntry getAssertionModuleConfiguration();
IdentityAsserterV2 getIdentityAsserter();
AppConfigurationEntry getLoginModuleConfiguration();
PrincipalValidator getPrincipalValidator();
}

Here it looks like the getLoginModuleConfiguration() has to return an AppConfigurationEntry that holds the fully qualified class name of a JAAS LoginModule, which is given below:


public interface LoginModule {
void initialize(Subject subject, CallbackHandler callbackHandler, Map<String,?> sharedState, Map<String,?> options);
boolean login() throws LoginException;
boolean commit() throws LoginException;
boolean abort() throws LoginException;
boolean logout() throws LoginException;
}
It seems WebLogic's usage of the LoginModule is not as highly specific to the application server as we saw was the case for JBoss and GlassFish. The user can implement the interface directly, but has to put WebLogic specific principals in the Subject, as these are not standardized.
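A sketch of what the getLoginModuleConfiguration() part might return. AppConfigurationEntry is plain Java SE; the login module class name is invented and the rest of the WebLogic provider interface is left out:

```java
import java.util.HashMap;
import javax.security.auth.login.AppConfigurationEntry;
import javax.security.auth.login.AppConfigurationEntry.LoginModuleControlFlag;

public class ProviderSketch {

    // Sketch of what an AuthenticationProviderV2#getLoginModuleConfiguration()
    // implementation might return; "org.example.MyWebLogicLoginModule" is an
    // invented class name
    public static AppConfigurationEntry getLoginModuleConfiguration() {
        return new AppConfigurationEntry(
            "org.example.MyWebLogicLoginModule",
            LoginModuleControlFlag.REQUIRED,
            new HashMap<String, Object>());
    }
}
```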

Example of usage

Since WebLogic is closed source it's not possible to see how it actually uses the Authentication Provider V2 and its associated Login Module.

 

Conclusion

We took a look at how a number of different servlet containers implemented the identity store concept. The variety of ways to accomplish essentially the same thing is nearly endless. Some containers pass two strings for the username and password, others pass a String for the username but a dedicated Credential type, a char[] or even an opaque Object for the password. Two containers pass in a third parameter: the http servlet request.

The return type varies as well. A (custom) Principal was used a couple of times, but several other representations of "caller data" were seen too, like an "Account" and a "UserIdentity". In one case the container deemed it necessary to modify thread-local storage (TLS) to set the result.

The number of levels (call depth) needed to go through before reaching the identity store was different as well between containers. In some cases the identity store was called immediately with absolutely nothing in between, while in other cases up to 10 levels of bridging, adapting and delegating was done before the actual identity store was called.

Taking those intermediate levels into account revealed even more variety. We saw complete LoginContext instances being returned, we saw Subjects being used as output parameters, etc. Likewise, the mechanism to indicate success or failure ranged from an exception being thrown, via a boolean being returned, to null being returned for the groups.

One thing that all containers had in common though was that there's always an authentication mechanism that interacts with the caller and environment and delegates to the identity store. Then, no matter how different the identity store interfaces looked, every one of them had a method to perform the {credentials in, caller data out} function.
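Such a bare minimum {credentials in, caller data out} contract could hypothetically look as simple as the following. This is purely a sketch; none of these types exist in Java EE, and all names are invented:

```java
import java.util.Arrays;
import java.util.List;

// Purely hypothetical sketch of a standardised identity store contract
interface Credential { }

class UsernamePasswordCredential implements Credential {
    final String username;
    final char[] password;

    UsernamePasswordCredential(String username, char[] password) {
        this.username = username;
        this.password = password;
    }
}

class CallerData {
    final String callerName;
    final List<String> groups;

    CallerData(String callerName, List<String> groups) {
        this.callerName = callerName;
        this.groups = groups;
    }
}

interface IdentityStore {
    // Credentials in, caller data out; null signals failed authentication
    // (a dedicated "invalid" result or an exception would work just as well)
    CallerData validate(Credential credential);
}

// Trivial implementation against invented in-memory data
class InMemoryStore implements IdentityStore {
    @Override
    public CallerData validate(Credential credential) {
        if (credential instanceof UsernamePasswordCredential) {
            UsernamePasswordCredential upc = (UsernamePasswordCredential) credential;
            if ("test".equals(upc.username) && Arrays.equals("secret".toCharArray(), upc.password)) {
                return new CallerData("test", Arrays.asList("architect"));
            }
        }
        return null;
    }
}
```

Every identity store interface shown in this article can be seen as an elaborate variation on this one-method shape.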

It's exactly this bare minimum of functionality that is arguably in the most dire need of being standardised in Java EE. As it happens, this is indeed what we're currently looking at in the security EG.

Arjan Tijms

The state of portable authentication for GlassFish, Payara, JBoss/WildFly, WebLogic and Liberty

Almost exactly 3 years ago I took an initial look at custom container authentication in Java EE. Java EE has a dedicated API for this called JASPIC. Even though JASPIC was a mandatory part of Java EE, support at the time was not really good. In this article we'll take a look at where things were and how things are in the current crop of servers in 2015.

To begin with, there were a number of spec omissions in JASPIC 1.0 (Java EE 6). The biggest one was that in order to register a server authentication module (SAM) an application ID had to be provided. This ID could not be obtained in a portable way. The JASPIC 1.1 MR rectified this.

Other spec omissions concerned JASPIC being silent about what would need to happen with respect to HttpServletRequest#login and HttpServletRequest#logout, and with forwards and includes done from a SAM. The JASPIC 1.1 MR rectified these omissions too.

With respect to the actual behaviour there were a large number of very serious problems. Most concerned the very basic stateless nature of JASPIC. A JASPIC SAM is like a Servlet Filter; it's called for every request to both public and protected resources, and doesn't automatically create a session when a caller is authenticated. What actually happened differed per server back then. Some only called the SAM for protected resources, some automatically created a session and never called the SAM again, etc.

Another class of problems concerned the life cycle. A SAM has two seemingly simple methods; "validateRequest" that has to be called before Filters and Servlets are invoked, and "secureResponse" that has to be called after. Especially this "after" was ill understood. Some servers called "validateRequest" and "secureResponse" both before the Filters right after each other, while others called "secureResponse" every time data was written to the response.
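The intended ordering can be illustrated with simplified stand-in types. The real contract lives in javax.security.auth.message and also involves MessageInfo, Subjects and AuthStatus; only the before/after call sequence is shown here:

```java
// Simplified stand-ins for the JASPIC types; only the call ordering
// of validateRequest and secureResponse is illustrated
interface ServerAuthModule {
    boolean validateRequest(StringBuilder trace); // true stands in for AuthStatus.SUCCESS
    void secureResponse(StringBuilder trace);
}

class TracingSam implements ServerAuthModule {
    @Override
    public boolean validateRequest(StringBuilder trace) {
        trace.append("validateRequest;");
        return true;
    }

    @Override
    public void secureResponse(StringBuilder trace) {
        trace.append("secureResponse;");
    }
}

class SamLifecycle {
    // A (heavily simplified) container: validateRequest is called before
    // the filter/servlet chain and secureResponse after it, per request
    static String handleRequest(ServerAuthModule sam) {
        StringBuilder trace = new StringBuilder();
        if (sam.validateRequest(trace)) {
            trace.append("filters/servlet;");
            sam.secureResponse(trace);
        }
        return trace.toString();
    }
}
```

The ill-understood part was precisely that secureResponse belongs after the filter/servlet chain, not immediately after validateRequest and not once per write to the response.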

A specifically peculiar thing was that no server back then was able to wrap the request and response, even though the JASPIC spec clearly states that this is required. Accessing resources from a SAM, such as EJB beans or datasources via JNDI, or CDI beans via the bean manager was a hit or miss as well. Basically every server behaved differently there.

Finally there were big issues with interpreting how portable a SAM should exactly be, and whether the technology should "just be there", or whether some server specific configuration had to be done first. One vendor seemingly interpreted the JASPIC spec as a portable "authentication mechanism" (the artefact that interacts with the user, such as Servlet's FORM), that then delegated to a proprietary (server specific) "identity store" (the artefact that stores the user data and groups, such as LDAP or a database).

In response to this I created a series of tests, that were later donated to the Java EE 7 samples project. Subsequently I worked with all vendors and asked them to improve their JASPIC implementations. With the exception of Geronimo all vendors were very cooperative, so I'd like to take the opportunity here to give them all a big thanks for their hard work.

So after 3 years of creating tests and reporting issues, what's the current situation like? To find out I executed the JASPIC tests against the current crop of servers. The result is shown below:

Running the Java EE 7 samples JASPIC tests
| Module | Test | GlassFish 4.1.1 | Payara 4.1.1.154 | JBoss EAP 7 alpha1 / WildFly 10rc4 | WebLogic 12.2.1 | Liberty 8.5.5.7 / 9 beta 2015.10 |
|---|---|---|---|---|---|---|
| lifecycle | testBasicSAMMethodsCalled | Passed | Passed | Passed | Passed | Passed |
| lifecycle | testLogout | Passed | Passed | Passed | Passed | Passed |
| basic-authentication | testProtectedPageNotLoggedin | Passed | Passed | Passed | Passed | Passed |
| basic-authentication | testProtectedPageLoggedin | Passed | Passed | Passed | Failure | Passed |
| basic-authentication | testPublicPageLoggedin | Passed | Passed | Passed | Failure | Passed |
| basic-authentication | testPublicPageNotLoggedin | Passed | Passed | Passed | Passed | Passed |
| basic-authentication | testPublicAccessIsStateless | Passed | Passed | Passed | Failure | Passed |
| basic-authentication | testProtectedAccessIsStateless | Passed | Passed | Passed | Passed | Passed |
| basic-authentication | testProtectedAccessIsStateless2 | Passed | Passed | Passed | Passed | Passed |
| basic-authentication | testProtectedThenPublicAccessIsStateless | Passed | Passed | Passed | Passed | Passed |
| dispatching-jsf-cdi | testJSFwithCDIForwardViaPublicResource | Passed | Passed | Passed | Passed | Passed |
| dispatching-jsf-cdi | testJSFwithCDIForwardViaProtectedResource | Passed | Passed | Passed | Passed | Passed |
| dispatching-jsf-cdi | testJSFIncludeViaPublicResource | Failure | Failure | Failure | Failure | Failure |
| dispatching-jsf-cdi | testJSFForwardViaPublicResource | Passed | Passed | Passed | Passed | Passed |
| dispatching-jsf-cdi | testJSFForwardViaProtectedResource | Passed | Passed | Passed | Passed | Passed |
| dispatching-jsf-cdi | testCDIForwardViaProtectedResource | Passed | Passed | Passed | Passed | Passed |
| dispatching-jsf-cdi | testCDIForwardViaPublicResource | Passed | Passed | Passed | Passed | Passed |
| dispatching-jsf-cdi | testCDIIncludeViaPublicResource | Passed | Passed | Passed | Passed | Failure |
| dispatching-jsf-cdi | testJSFwithCDIIncludeViaPublicResource | Failure | Failure | Failure | Failure | Failure |
| dispatching | testBasicIncludeViaPublicResource | Passed | Passed | Passed | Passed | Failure |
| dispatching | testBasicForwardViaProtectedResource | Passed | Passed | Passed | Passed | Passed |
| dispatching | testBasicForwardViaPublicResource | Passed | Passed | Passed | Passed | Passed |
| custom-principal | testPublicPageLoggedin | Failure | Passed | Passed | Failure | Passed |
| custom-principal | testPublicAccessIsStateless | Passed | Passed | Passed | Failure | Passed |
| custom-principal | testProtectedAccessIsStateless | Passed | Passed | Passed | Passed | Passed |
| custom-principal | testProtectedAccessIsStateless2 | Passed | Passed | Passed | Passed | Passed |
| custom-principal | testProtectedThenPublicAccessIsStateless | Passed | Passed | Passed | Passed | Passed |
| custom-principal | testProtectedPageLoggedin | Failure | Passed | Passed | Failure | Passed |
| invoke-ejb-cdi | protectedInvokeCDIFromSecureResponse | Passed | Passed | Passed | Failure | Failure |
| invoke-ejb-cdi | protectedInvokeCDIFromCleanSubject | Passed | Passed | Passed | Passed | Failure |
| invoke-ejb-cdi | protectedInvokeCDIFromValidateRequest | Passed | Passed | Passed | Failure | Failure |
| invoke-ejb-cdi | protectedInvokeEJBFromSecureResponse | Failure | Failure | Passed | Passed | Passed |
| invoke-ejb-cdi | protectedInvokeEJBFromCleanSubject | Passed | Passed | Passed | Passed | Passed |
| invoke-ejb-cdi | protectedInvokeEJBFromValidateRequest | Failure | Failure | Passed | Passed | Passed |
| invoke-ejb-cdi | publicInvokeEJBFromSecureResponse | Failure | Failure | Passed | Passed | Passed |
| invoke-ejb-cdi | publicInvokeEJBFromValidateRequest | Failure | Failure | Passed | Passed | Passed |
| invoke-ejb-cdi | publicInvokeEJBFromCleanSubject | Passed | Passed | Passed | Passed | Passed |
| invoke-ejb-cdi | publicInvokeCDIFromSecureResponse | Passed | Passed | Passed | Failure | Failure |
| invoke-ejb-cdi | publicInvokeCDIFromValidateRequest | Passed | Passed | Passed | Failure | Failure |
| invoke-ejb-cdi | publicInvokeCDIFromCleanSubject | Passed | Passed | Passed | Passed | Passed |
| register-session | testJoinSessionIsOptional | Passed | Passed | Passed | Failure | Passed |
| register-session | testRemembersSession | Passed | Passed | Passed | Failure | Passed |
| status-codes | test404inResponse | Passed | Passed | Failure | Passed | Passed |
| status-codes | test404inResponse | Passed | Passed | Failure | Passed | Passed |
| async-authentication | testBasicAsync | Passed | Passed | Passed | Passed | Passed |
| ejb-propagation | publicServletCallingPublicEJBThenLogout | Passed | Passed | Passed | Failure | Passed |
| ejb-propagation | protectedServletCallingProtectedEJB | Passed | Passed | Passed | Failure | Passed |
| ejb-propagation | protectedServletCallingPublicEJB | Passed | Passed | Passed | Failure | Passed |
| ejb-propagation | publicServletCallingProtectedEJB | Passed | Passed | Passed | Failure | Passed |
| wrapping | testResponseWrapping | Passed | Passed | Passed | Passed | Passed |
| wrapping | testRequestWrapping | Passed | Passed | Passed | Passed | Passed |

 

As can be seen the situation has greatly improved. With the unfortunate exception of WebLogic 12.2.1, the basics now work everywhere. WebLogic 12.2.1 is perhaps a special case, as it seems to be hit by a major bug where the most basic version of authentication doesn't work anymore, while it did work in the previous version 12.1.3. The fact that "testProtectedPageLoggedin" and "testPublicPageLoggedin" fail means that actual authentication doesn't work properly. In this specific case it appears that when a caller authenticates with name "test" and gets the role "architect", those are not available to the application. E.g. request#getUserPrincipal() still returns null and request#isUserInRole() returns false. This unfortunately means that, until this bug is fixed, JASPIC cannot really be used on WebLogic at all.

Looking further at the results we see that the seemingly difficult to understand "secureResponse" method is now always called at the correct moment, and wrapping the request and response that once no server was able to do is now working well in all servers.

Forwards are now supported by all servers, as are logouts. Includes are supported by most servers; only Liberty seems to have some issues with these. Curiously, no server is able to include a resource that uses JSF. This is likely a JSF issue (as a JSF EG member and Mojarra committer this is something I probably have to fix myself ;))

Invoking resources has improved somewhat, but remains troublesome. Neither EJB beans nor CDI beans can be obtained and invoked on every server. EJB beans (especially those in the application scoped namespaces such as java:comp, java:app, etc.) work on JBoss EAP/WildFly, WebLogic and Liberty, but not on GlassFish and its derivative Payara. CDI beans work in GlassFish, Payara and WildFly, but not in WebLogic and Liberty. WildFly is the one server where both work.

The resources situation is still a spec issue as well and JASPIC 1.1 remains silent on whether this should work or not. The spec lead has clarified that even though the spec is silent on accessing EJB beans and other resources from the web component's JNDI namespaces, this is something that ought to work and GlassFish' current behaviour is just a bug. A next revision of the JASPIC spec should clarify this though. For the CDI beans no such clarification has been given, so vendors can't be asked to support this based on what the spec requires. However, accessing CDI from a SAM is very likely going to be a requirement coming from JSR 375 (Java EE security). So even though JASPIC doesn't mandate this now, it would be good if vendors already supported this in order to be prepared for Java EE 8.

Another case worth looking at is providing a custom principal from a SAM. This is a feature of JASPIC where a SAM can provide its own custom principal, e.g. org.example.MyPrincipal, which then has to be returned from request#getUserPrincipal(). This works on most servers, except on GlassFish. It currently also doesn't work on WebLogic, but without further investigation it's hard to say whether WebLogic doesn't support this at all, or whether it fails just because of the earlier failure of making the principal (custom or not) available.

Setting a response status code from a SAM (like e.g. a 404 - NOT FOUND) is something that is supported by all Servers, except for JBoss EAP/WildFly. This is currently the only unique failure for WildFly. Sort of, since it actually has already been fixed, but a build containing that fix has not yet been released.

From the outcome of the tests shown above it would seem JBoss EAP/WildFly clearly has the best JASPIC implementation, but there's one small yet very important detail not shown in that table: the question whether JASPIC needs to be activated in a proprietary way. Unfortunately, JBoss EAP/WildFly indeed needs such activation. If this activation would entail placing a special configuration file in the application archive it wouldn't be so bad, but JBoss EAP/WildFly actually requires the container to be modified before JASPIC can be used. This therefore means a SAM cannot be deployed to a stock JBoss EAP/WildFly, which is very unfortunate indeed. There's a programmatic workaround available that doesn't require the container to be modified (see the activation link), but it's rather hacky and may break with every new release of JBoss EAP/WildFly.

The other server that needs server specific configuration is Liberty. Earlier versions of Liberty required all users and groups that a JASPIC SAM handles to be known to Liberty's proprietary user registry. An often downright impossible requirement in general, and specifically for fully portable SAMs, and one that even violates the JASPIC spec. The current versions of Liberty have somewhat improved the situation by only requiring groups to be made known to Liberty. While still a very unfortunate requirement, it's at least possible to do. Still, listing all the groups that an application uses in a proprietary file inside the container runs counter to one of the major use cases for which JASPIC is used: portable and application managed custom authentication. Instead of listing all the groups there's a workaround available where a NOOP user registry is installed and configured.

Conclusion

JBoss EAP 7/WildFly 10rc4 are almost perfect, if only JASPIC worked out of the box or could be activated from within the application archive using a configuration file. Payara 4.1.1.154 is another very good server for JASPIC. Here JASPIC works out of the box, but it suffers from a somewhat nasty bug that prevents it from using application scoped JNDI namespaces. GlassFish 4.1.1 is almost as good, but suffers from an extra bug that prevents it from using custom principals.

Liberty is quite good as well. It has slightly more bugs to fix than JBoss and Payara, but about the same as GlassFish. GlassFish can't use custom principals, Liberty can't do includes. Both can't obtain and invoke a specific bean type (for GlassFish this is EJB, for Liberty it's CDI). But above all Liberty suffers from its conflicting user registry requirement, although by far not as badly as before.

WebLogic 12.2.1 cannot at the moment be recommended for JASPIC. It suffers from a severe bug that prevents an application from using the authenticated identity, which is the core of what JASPIC does. Hopefully the WebLogic team is able to squash this particular bug soon.

All in all we've seen there's a steady and definite improvement going on for the various JASPIC implementations, but as can be seen there's still room left for improvement.

Arjan Tijms

Latest versions Payara and WildFly improve Java EE 7 authentication compliance

Two months ago we looked at the state of portable authentication for GlassFish, Payara, JBoss/WildFly, WebLogic and Liberty in Java EE 7. With the exception of WebLogic 12.2.1, most servers performed pretty well, but there were still a number of bugs present.

Since then, both Payara and WildFly have seen bug fixes that further reduce the number of bugs present where it concerns portable Java EE authentication. Do note that neither updated server has had an official (supported) release yet, but pre-release resp. rc/cr builds containing those fixes can be downloaded from the vendors.

In anticipation of the final versions of those Java EE 7 servers we already took a look at how they improved. The results are shown in the table below. For reference we show several older versions as well. For Payara we took the GlassFish release upon which Payara based its additional fixes, while for WildFly it's a selection of older builds (no fewer than 29 builds were released for WildFly 8, 9, 10/EAP 7 alpha and beta).

Running the Java EE 7 samples JASPIC tests
| Module | Test | Payara 4.1.1.161-pre | GlassFish 4.1.1 | WildFly 10rc5 | WildFly 10rc4 | WildFly 9.0.1 | WildFly 8.0.0 |
|---|---|---|---|---|---|---|---|
| async-authentication | testBasicAsync | Passed | Passed | Passed | Passed | Passed | Failed |
| basic-authentication | testProtectedPageNotLoggedin | Passed | Passed | Passed | Passed | Passed | Passed |
| basic-authentication | testProtectedPageLoggedin | Passed | Passed | Passed | Passed | Passed | Passed |
| basic-authentication | testPublicPageLoggedin | Passed | Passed | Passed | Passed | Passed | Passed |
| basic-authentication | testPublicPageNotLoggedin | Passed | Passed | Passed | Passed | Passed | Passed |
| basic-authentication | testPublicAccessIsStateless | Passed | Passed | Passed | Passed | Passed | Passed |
| basic-authentication | testProtectedAccessIsStateless | Passed | Passed | Passed | Passed | Passed | Passed |
| basic-authentication | testProtectedAccessIsStateless2 | Passed | Passed | Passed | Passed | Passed | Passed |
| basic-authentication | testProtectedThenPublicAccessIsStateless | Passed | Passed | Passed | Passed | Passed | Passed |
| custom-principal | testProtectedPageLoggedin | Passed | Failure | Passed | Passed | Passed | Passed |
| custom-principal | testPublicPageLoggedin | Passed | Failure | Passed | Passed | Passed | Passed |
| custom-principal | testPublicAccessIsStateless | Passed | Passed | Passed | Passed | Passed | Passed |
| custom-principal | testProtectedAccessIsStateless | Passed | Passed | Passed | Passed | Passed | Passed |
| custom-principal | testProtectedAccessIsStateless2 | Passed | Passed | Passed | Passed | Passed | Passed |
| custom-principal | testProtectedThenPublicAccessIsStateless | Passed | Passed | Passed | Passed | Passed | Passed |
| dispatching | testBasicForwardViaProtectedResource | Passed | Passed | Passed | Passed | Passed | Passed |
| dispatching | testBasicForwardViaPublicResource | Passed | Passed | Passed | Passed | Passed | Passed |
| dispatching | testBasicIncludeViaPublicResource | Passed | Passed | Passed | Passed | Passed | Failure |
| dispatching-jsf-cdi | testCDIForwardViaProtectedResource | Passed | Passed | Passed | Passed | Passed | Passed |
| dispatching-jsf-cdi | testCDIForwardViaPublicResource | Passed | Passed | Passed | Passed | Passed | Passed |
| dispatching-jsf-cdi | testCDIIncludeViaPublicResource | Passed | Passed | Passed | Passed | Passed | Failure |
| dispatching-jsf-cdi | testJSFwithCDIForwardViaPublicResource | Passed | Passed | Passed | Passed | Passed | Passed |
| dispatching-jsf-cdi | testJSFwithCDIForwardViaProtectedResource | Passed | Passed | Passed | Passed | Passed | Passed |
| dispatching-jsf-cdi | testJSFwithCDIIncludeViaPublicResource | Failure | Failure | Failure | Failure | Failure | Failure |
| dispatching-jsf-cdi | testJSFForwardViaPublicResource | Passed | Passed | Passed | Passed | Passed | Passed |
| dispatching-jsf-cdi | testJSFForwardViaProtectedResource | Passed | Passed | Passed | Passed | Passed | Passed |
| dispatching-jsf-cdi | testJSFIncludeViaPublicResource | Failure | Failure | Failure | Failure | Failure | Failure |
| ejb-propagation | publicServletCallingProtectedEJB | Passed | Passed | Passed | Passed | Passed | Failure |
| ejb-propagation | protectedServletCallingProtectedEJB | Passed | Passed | Passed | Passed | Passed | Failure |
| ejb-propagation | publicServletCallingPublicEJBThenLogout | Passed | Passed | Passed | Passed | Passed | Failure |
| ejb-propagation | protectedServletCallingPublicEJB | Passed | Passed | Passed | Passed | Passed | Passed |
| invoke-ejb-cdi | protectedInvokeCDIFromSecureResponse | Passed | Passed | Passed | Passed | Failure | Failure |
| invoke-ejb-cdi | protectedInvokeCDIFromCleanSubject | Passed | Passed | Passed | Passed | Passed | Passed |
| invoke-ejb-cdi | protectedInvokeCDIFromValidateRequest | Passed | Passed | Passed | Passed | Passed | Passed |
| invoke-ejb-cdi | publicInvokeCDIFromSecureResponse | Passed | Passed | Passed | Passed | Failure | Failure |
| invoke-ejb-cdi | publicInvokeCDIFromValidateRequest | Passed | Passed | Passed | Passed | Passed | Passed |
| invoke-ejb-cdi | publicInvokeCDIFromCleanSubject | Passed | Passed | Passed | Passed | Passed | Passed |
| invoke-ejb-cdi | protectedInvokeEJBFromSecureResponse | Passed | Failure | Passed | Passed | Failure | Passed |
| invoke-ejb-cdi | protectedInvokeEJBFromCleanSubject | Passed | Passed | Passed | Passed | Passed | Passed |
| invoke-ejb-cdi | protectedInvokeEJBFromValidateRequest | Passed | Failure | Passed | Passed | Passed | Passed |
| invoke-ejb-cdi | publicInvokeEJBFromSecureResponse | Passed | Failure | Passed | Passed | Failure | Passed |
| invoke-ejb-cdi | publicInvokeEJBFromValidateRequest | Passed | Failure | Passed | Passed | Passed | Passed |
| invoke-ejb-cdi | publicInvokeEJBFromCleanSubject | Passed | Passed | Passed | Passed | Passed | Passed |
| jacc-propagation | callingJACCWhenAuthenticated | Passed | Passed | Failure | Failure | Failure | Failure |
| jacc-propagation | callingJACCWhenAuthenticated | Passed | Passed | Failure | Failure | Failure | Failure |
| jacc-propagation | callingJACCWhenNotAuthenticated | Passed | Passed | Passed | Passed | Passed | Passed |
| lifecycle | testBasicSAMMethodsCalled | Passed | Passed | Passed | Passed | Failure | Passed |
| lifecycle | testLogout | Passed | Passed | Passed | Passed | Passed | Passed |
| register-session | testJoinSessionIsOptional | Passed | Passed | Passed | Passed | Passed | Passed |
| register-session | testRemembersSession | Passed | Passed | Passed | Passed | Passed | Passed |
| status-codes | test404inResponse | Passed | Passed | Passed | Failure | Failure | Passed |
| status-codes | test404inResponse | Passed | Passed | Passed | Failure | Failure | Passed |
| wrapping | testResponseWrapping | Passed | Passed | Passed | Passed | Passed | Passed |
| wrapping | testRequestWrapping | Passed | Passed | Passed | Passed | Passed | Passed |

Not shown in the table, but the absolute greatest improvement since JBoss switched to its new JASPIC implementation all the way back in WildFly 8.0.0.Alpha1 is the fact that JASPIC now finally works without the need to modify WildFly by putting a dummy fragment in its standalone.xml file. It's not 100% perfect yet, as the application archive (.war) still needs what is effectively a marker file to activate JASPIC, but this is much, much preferred over having to modify a server in order to activate a standard Java EE API that should just be there. Kudos to the JBoss team and a special thanks to Jason Greene for finally making this happen!
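For illustration, the "marker file" mentioned above is, at the time of writing, a `WEB-INF/jboss-web.xml` in the application archive that points the deployment at a JASPIC-enabled security domain. A minimal sketch is shown below; the domain name `jaspitest` is the one WildFly uses in its default configuration, but the exact name depends on the server setup:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- WEB-INF/jboss-web.xml: WildFly-specific marker that activates
     JASPIC processing for this .war. The referenced security domain
     must be one configured for JASPIC ("jaspitest" in the default
     WildFly 10 configuration). -->
<jboss-web>
    <security-domain>jaspitest</security-domain>
</jboss-web>
```

Note that this file is still WildFly-specific; a fully portable application would need no such marker at all, which is exactly the remaining imperfection discussed above.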

As can be seen, WildFly has seen many improvements over the years. Along the way a few regressions were introduced, but they were fixed again, and WildFly 10rc5 is now almost perfect with respect to the known bugs. Role propagation to JACC, however, still doesn't work. Although usage of custom JACC providers is not that high, the test in question uses the default provider for a rather useful query: "Can the authenticated user access a given resource?", e.g. "Can Pete access http://example.com/assets/someresource?".
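To make the nature of that query concrete, the sketch below shows the common JACC idiom for asking it: obtain the current caller's Subject from the policy context, and ask the policy whether its principals imply a `WebResourcePermission` for the resource. The class and method names are illustrative (not from the test suite), and this code only works when run inside a container with JACC and an authenticated caller available:

```java
import java.security.CodeSource;
import java.security.Policy;
import java.security.Principal;
import java.security.ProtectionDomain;
import java.security.cert.Certificate;

import javax.security.auth.Subject;
import javax.security.jacc.PolicyContext;
import javax.security.jacc.WebResourcePermission;

public class JaccAccessCheck {

    /**
     * Asks the default JACC policy whether the currently authenticated
     * caller may issue a GET request for the given URI (relative to the
     * context root), e.g. hasAccess("/assets/someresource").
     */
    public static boolean hasAccess(String relativeUri) throws Exception {

        // The container-provided Subject of the current caller
        Subject subject = (Subject) PolicyContext.getContext(
            "javax.security.auth.Subject.container");

        Principal[] principals = subject != null
            ? subject.getPrincipals().toArray(new Principal[0])
            : new Principal[0];

        // Does a domain holding the caller's principals imply the
        // permission to GET the resource?
        return Policy.getPolicy().implies(
            new ProtectionDomain(
                new CodeSource(null, (Certificate[]) null),
                null, null, principals),
            new WebResourcePermission(relativeUri, "GET"));
    }
}
```

It is precisely this propagation of the caller's roles into the policy decision that still fails on the WildFly builds tested above.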

The top performer as of now is Payara, which passes all tests except one of minor importance, where a JSF based resource is included by an authentication module. As mentioned in the previous report, this likely has to be fixed on the JSF side of things.

If all goes well we'll see a new beta of Liberty 9 this month, which should also contain a number of fixes. The most problematic server at the moment is still WebLogic, which introduced a major regression between 12.1.3 and 12.2.1. Hopefully WebLogic will fix this regression soon. We'll repeat this test when either of those servers publishes its latest version.

Arjan Tijms
