## Dynamic Validation with Spring Boot Validation

After it has been quiet here for a while, I have written a new blog post about dynamic validation with Spring Boot validation. The post has been published on the codecentric blog. You can find it here. Enjoy!

## Gatling Load Testing Part 1 – Using Gatling

This time, it’s not the usual blog entry. I want to point out that my blog post about Gatling load testing has been published on my employer’s blog. If you want to read it, you can find it here. Feel free to leave comments!

## camunda BPM platform OSGi 2.0.0 released

It has been a while since the last release of camunda BPM platform OSGi, and I am glad to be able to announce a new major version today.

The new version includes one new feature, some dependency adjustments, and a restructuring of the whole project.

The new feature is the OSGi Event Bridge, which I already explained here. So now you'll be able to receive camunda process events in an OSGi way.

The most notable change in the dependencies is the move from OSGi 4.2 to version 4.3. This version enables, for example, the usage of generics and of the Require-Capability and Provide-Capability headers (one example of how you could use them is explained in another blog post).

Finally, the whole project is now more modularized. With the 1.x.x versions, many features were included in the camunda-bpm-osgi module, which you always needed. That way, the classes for Fileinstall, process application, or Blueprint support were always present, whether you used them or not. With the new structure you can choose more precisely which features you want to use and which not.

Configadmin, Fileinstall and Processapplication are now separate bundles and no longer contained in camunda-bpm-osgi. What is left in the "main" module are the capabilities to find process definitions in your bundles, EL resolving, locating scripting engines, and utility classes, e.g. for classloading. Also, all integration tests (except for the Karaf ones) are now located in a central itest module.

I hope all those changes make it easier for you to combine the powers of OSGi and camunda BPM. If you have any feedback or would like to request a new feature, feel free to leave a comment, open an issue on GitHub or open a pull request.

## Extension/Service/Plugin mechanisms in Java

Since I started diving deep into OSGi, I have been wondering more and more how frameworks with some kind of extension mechanism, e.g. Apache Camel, where you can define your own endpoint, or the Eclipse IDE with its plugins, handle finding and instantiating extensions. I remember very well a presentation by Kai Tödter at JAX 2013, where he showed the combination of Vaadin and OSGi. While the web app was running, he could add and remove menu entries just by starting and stopping bundles.
For a while now I have been looking at several approaches to creating an extensible application, and you can find resources for every single method. I want to give a medium-sized (not short ;)) overview of the different ways I know to make a Java application extensible. For each method, I will also add a list of advantages and disadvantages from my point of view, and I will try to give a simple example.
To avoid confusion: when I write about the advantages and disadvantages, I write from the point of view of someone who wants to provide this extension mechanism in a framework, not from the API consumer's point of view.

## Passing the object

This is the most obvious method. The framework defines a method which takes the SPI interface and you simply pass the object. Camel, among other options, makes use of this (example taken from the Camel FAQ):
CamelContext context = new DefaultCamelContext();
// pass your component instance to the framework
context.addComponent("activemq", activeMQComponent("vm://localhost?broker.persistent=false"));

Internally, Camel doesn't do much magic (code taken from Camel on GitHub).
public void addComponent(String componentName, final Component component) {
    ObjectHelper.notNull(component, "component");
    synchronized (components) {
        if (components.containsKey(componentName)) {
            throw new IllegalArgumentException("Cannot add component as its already previously added: " + componentName);
        }
        component.setCamelContext(this);
        components.put(componentName, component);
        for (LifecycleStrategy strategy : lifecycleStrategies) {
            strategy.onComponentAdd(componentName, component);
        }

        // keep reference to properties component up to date
        if (component instanceof PropertiesComponent && "properties".equals(componentName)) {
            propertiesComponent = (PropertiesComponent) component;
        }
    }
}

Every component has to have a unique name and is somehow bound to a lifecycle. Removing a component is also possible, but has to be done from user code.
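Stripped of the Camel specifics, the pattern boils down to a registry keyed by name. Here is a minimal, self-contained sketch (PluginRegistry and Plugin are made-up names for illustration, not part of any framework):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical SPI interface an extension has to implement.
interface Plugin {
    String name();
}

// Minimal registry in the style of Camel's addComponent: the framework
// exposes a method that takes the SPI object, and the caller passes it in.
public class PluginRegistry {
    private final Map<String, Plugin> plugins = new ConcurrentHashMap<>();

    public void addPlugin(String name, Plugin plugin) {
        // reject duplicate names, like Camel does for components
        if (plugins.putIfAbsent(name, plugin) != null) {
            throw new IllegalArgumentException("Plugin already registered: " + name);
        }
    }

    // Removal has to be triggered from user code, as noted above.
    public Plugin removePlugin(String name) {
        return plugins.remove(name);
    }

    public Plugin getPlugin(String name) {
        return plugins.get(name);
    }

    public static void main(String[] args) {
        PluginRegistry registry = new PluginRegistry();
        registry.addPlugin("foo", () -> "foo-plugin");
        System.out.println(registry.getPlugin("foo").name()); // prints "foo-plugin"
    }
}
```

The compiler checks the interface for you, but everything else (duplicate detection, removal, lifecycle) is the framework's job.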

• Easy and straightforward
• No need for an additional framework
• Compiler checks for the correct interface

• Allowing changes at runtime is possible but complicated, since it must be ensured that the component is removed everywhere
• Your framework has to take care of the whole component lifecycle and any additional requirements it enforces

## Interface and Reflection

This method is used quite often (it is basically also how the ServiceLoader works, see the next section) and you can find it with small variations. The differences lie in where and how exactly the interface and implementation names reach the application. Placing them somewhere inside a properties file or passing them to the framework during startup are the most common options. The implementation is then instantiated using reflection. Creating a context with an InitialContextFactory, for example, works like this:
Properties env = new Properties();
env.put(Context.INITIAL_CONTEXT_FACTORY,
    "org.jboss.naming.remote.client.InitialContextFactory");
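Stripped of the JNDI specifics, the mechanism is just reading a class name and instantiating it reflectively. A minimal, self-contained sketch (the property key and all class names here are invented for illustration):

```java
import java.util.Properties;

// Hypothetical SPI interface defined by the framework.
interface GreetingService {
    String greet();
}

// One possible implementation the user ships.
class EnglishGreetingService implements GreetingService {
    public String greet() {
        return "Hello";
    }
}

public class ReflectionWiring {
    // Reads the implementation class name from a properties object and
    // instantiates it via reflection, like the InitialContextFactory lookup.
    static GreetingService load(Properties props) {
        try {
            String className = props.getProperty("greeting.service.impl");
            // No compile-time check: the cast fails at runtime if the class
            // does not implement the expected interface.
            return (GreetingService) Class.forName(className)
                    .getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Could not instantiate implementation", e);
        }
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("greeting.service.impl", "EnglishGreetingService");
        System.out.println(load(props).greet()); // prints "Hello"
    }
}
```

Note how a typo in the class name only blows up at the moment load() is called, which is exactly the disadvantage listed below.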


• Easy and straightforward
• No need for an additional framework
• No need to provide central class (in properties file approach)

• No type safety (if text based)
• Your framework has to take care of the whole lifecycle and any additional requirements it enforces
• Check for correct wiring happens only at runtime (if text based, either at startup or when the code is called, where the former is better than the latter)

Frameworks using the java.util.ServiceLoader can also be found quite often. What the ServiceLoader does is: at runtime it uses a ClassLoader to look in the META-INF/services directory for a text file whose name equals the passed interface (SPI) name, reads the class name inside that file, and then instantiates the class via reflection. All the magic happens in the LazyIterator inside the ServiceLoader class (see OpenJDK). Basically, it is just reading a file and instantiating an object. Camel and HiveMQ, for example, use this method.
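Consuming such services looks like this (MyService is a placeholder interface; since this self-contained sketch ships no META-INF/services file, the iterator simply stays empty):

```java
import java.util.ServiceLoader;

// Placeholder SPI interface; a provider would implement it and list its
// class name in a file named META-INF/services/<fully qualified interface name>.
interface MyService {
    void execute();
}

public class ServiceLoaderDemo {
    static int countProviders() {
        int count = 0;
        // ServiceLoader lazily reads the service files on the classpath and
        // instantiates each listed class via its no-arg constructor.
        for (MyService service : ServiceLoader.load(MyService.class)) {
            service.execute();
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // No provider file is on the classpath in this sketch, so nothing is found.
        System.out.println("providers found: " + countProviders());
    }
}
```

This also shows the "no standard constructor, no service" restriction: the ServiceLoader can only call a public no-arg constructor.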

• Easy and straightforward
• ServiceLoader is part of JDK
• No need for an additional framework

• No lifecycle
• Class has to provide standard constructor
• Support for runtime changes must be implemented (as mentioned here)
• Check for correct wiring only during runtime (the filename or the string inside the file could be wrong)

## (Eclipse) Extension Points

As far as I know, the concept of extension points never got popular outside Eclipse, although it is possible to include them in any application. To achieve loose coupling, the definition of the places where you can add your plugin, as well as the plugins themselves, is extracted into XML files.
To define an extension point you need something like this:
<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.4"?>
<extension-point
      id="de.blogspot.wrongtracks.FooService"
      name="FooService"
      schema="schema/de.blogspot.wrongtracks.FooService.exsd"/>

The extension provider then has to define an appropriate extension for that point:
<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.4"?>
<plugin>
   <extension
         point="de.blogspot.wrongtracks.FooService">
      <implementation
            class="com.example.impl.FooServiceImpl"
            id="com.example.impl.FooServiceImpl"
            name="FooServiceImpl">
      </implementation>
   </extension>
</plugin>

I have to admit that I am not completely sure how exactly you integrate the extension points, but I guess you will need quite a lot of the basic Eclipse runtime. There is a blog post which explains how you can use extension points without depending on OSGi.
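To illustrate the underlying idea without pulling in the Eclipse runtime: a descriptor names an implementation class, and a reflective loader instantiates it. A toy sketch (all names invented; this is emphatically not the real Eclipse extension registry API):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Hypothetical SPI interface the extension point expects.
interface FooService {
    String foo();
}

// An implementation a plugin would contribute via its descriptor.
class FooServiceImpl implements FooService {
    public String foo() {
        return "foo";
    }
}

public class ExtensionPointDemo {
    // Parses an extension descriptor and instantiates the declared class,
    // roughly what the Eclipse runtime does with the plugin.xml files.
    static FooService loadExtension(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            String className = doc.getDocumentElement().getAttribute("class");
            return (FooService) Class.forName(className)
                    .getDeclaredConstructor().newInstance();
        } catch (Exception e) {
            // a wrong class name in the XML only surfaces here, at runtime
            throw new IllegalStateException("Could not load extension", e);
        }
    }

    public static void main(String[] args) {
        String xml = "<implementation class=\"FooServiceImpl\"/>";
        System.out.println(loadExtension(xml).foo()); // prints "foo"
    }
}
```

A broken descriptor only affects the one extension being loaded, which matches the "wrong wiring only affects a single extension" advantage below.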

• Extensions can be added during runtime
• Good tool support inside Eclipse
• Wrong wiring only affects single extension
• Loose coupling (more or less, since the extensions depend on the extension point id)

• Dependencies to Eclipse
• Overhead from the Eclipse platform (I actually cannot prove this point, but I assume there must be considerable overhead compared to the previous methods)
• Check for correct wiring only during runtime

## Spring XML

The Spring framework tried to find a way to loosely couple components long before CDI as we know it today appeared. Their solution was an XML file in which the different classes are wired together (I am well aware that nowadays there are other ways, too, but since those are also based on annotations, they don't differ enough from CDI to warrant their own paragraph). In the basic XML file you define all your beans and Spring takes care of the instantiation. It is also possible to distribute the configuration among several XML files. A very simple example (taken and modified from the Spring documentation) looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="accountDao"
          class="org.springframework.samples.jpetstore.dao.jpa.JpaAccountDao">
    </bean>

    <bean id="petStore" class="org.springframework.samples.jpetstore.services.PetStoreServiceImpl">
        <property name="accountDao" ref="accountDao"/>
    </bean>
</beans>

If you want to provide your users with a way to add their services/plugins to the framework, you'll have to provide a setter where users can add their objects, e.g. like this (taken from the camunda documentation):
<bean id="processEngineConfiguration" class="org.camunda.bpm.engine.spring.SpringProcessEngineConfiguration">
    ...
    <property name="processEnginePlugins">
        <list>
            <bean id="spinPlugin" class="org.camunda.spin.plugin.impl.SpinProcessEnginePlugin" />
        </list>
    </property>
</bean>


• Spring is lightweight
• Lifecycle support from Spring

• XML needs to be maintained
• No auto detection, users have to write the XML when they want to add something
• The Spring IoC container is needed
• Correct wiring is only checked at startup

## OSGi Services

OSGi was created embracing runtime changes, with bundles dynamically providing and removing their services. With this in mind, OSGi strongly supports extending applications with services provided by different bundles. The simplest approach is to implement a ServiceListener or a ServiceTracker. Both should be created on bundle start, and they will react when a new implementation of the service appears. A ServiceListener can be as simple as this (taken from the Knopflerfish tutorial):
ServiceListener sl = new ServiceListener() {
    public void serviceChanged(ServiceEvent ev) {
        ServiceReference sr = ev.getServiceReference();
        switch (ev.getType()) {
            case ServiceEvent.REGISTERED: {
                HttpService http = (HttpService) bc.getService(sr);
                http.registerServlet(...);
            }
            break;
            default:
                break;
        }
    }
};

String filter = "(objectclass=" + HttpService.class.getName() + ")";
bc.addServiceListener(sl, filter);

Here, bc is a BundleContext object. A ServiceTracker can be used like this:
ServiceTracker<HttpService, HttpService> serviceTracker =
    new ServiceTracker<HttpService, HttpService>(bc, HttpService.class, null);
serviceTracker.open();
// the tracked service (or null if none is present) can then be fetched
HttpService http = serviceTracker.getService();

There are more elegant ways to get hold of an OSGi service using Blueprint, Declarative Services or the Apache Felix Dependency Manager but the ServiceListener is the basic way.

• OSGi lifecycle support
• Changes during runtime "encouraged" ;)
• Compiler checks wiring (not for the ServiceListener but for the rest)
• Problems with services are restricted to single bundle

• You have to buy the whole OSGi package: imports, exports, bundles and everything
• Having the full OSGi lifecycle makes the world more complicated, since every service can disappear at any moment

Since the biggest disadvantage of OSGi is that you have to buy the whole package, I want to mention another approach here, called PojoSR or "OSGi Light". Its goal is to give you the OSGi service concept without the rest that comes with OSGi. Unfortunately, I could not find much documentation about it, and activity around the project seems to be very low at the moment. There is an article here and the PojoSR framework itself. Also, it looks like PojoSR is now part of Apache Felix under the name "Connect", but its version is 0.1.0. So if any of you know more about it, please let me know.

## CDI

Contexts and Dependency Injection was a big step for Java EE, allowing developers to write more loosely coupled code. The CDI container takes care of automagically wiring the different parts together; the developer only has to use the correct annotations. Depending on which CDI beans are present at runtime, concrete implementations can be swapped without changing the code that uses them. The basic injection of a class looks like this:
@Inject
private MyServiceInterface service;

If there is a need to get all of the implementations (which is what we actually want here), the class Instance must be used:
@Inject @Any
private Instance<MyServiceInterface> services;

Since Instance is an Iterable, a simple for-each loop can be used to access all the objects. Alternatively, the select() method can be used to specify further requirements.

• Compiler checks for correct type
• CDI container checks correct wiring at startup
• Part of the Java EE standard, but can also be used without an application server (use a JSR-330 implementation like Guice or HK2)
• CDI lifecycle support

• A CDI container is needed
• Changes during runtime are not possible
• Annotatiomania (at least if you don't watch out)

## Summary

As you can see, many different frameworks/methods have evolved in the Java ecosystem, every single one with its specific advantages and disadvantages. I think we can summarize the different extension mechanisms as three types (with their members):
1. String and well-known location ("Interface and Reflection", "ServiceLoader", "(Eclipse) Extension Points", "Spring XML")
2. Programmatic wiring ("Passing the object", "Interface and Reflection", "OSGi Services")
3. Classpath scanning ("CDI")
Of course, the three types are not exclusive. You may provide your users more than one way and let them choose. Also, CDI is not the only framework that uses classpath scanning: Spring, with its two other ways of configuring the IoC container, relies on that method, too.

I hope this article provides a good and sufficient overview of the different methods for creating an extensible framework. Choosing the right one will surely make your users happy. If you know another method that I forgot, please let me know and I will gladly add it here.

Please note that the lists of advantages and disadvantages are based on my own reasoning. I tried to be objective, but like every programmer I have my favorites, and my experience with the frameworks may make me a little bit biased.

## What can capabilities do for your processes?

Before we release camunda BPM OSGi 2.0 I want to do a little more advertising for it and show what is possible with the new version. One change is that it depends on OSGi 4.3 instead of 4.2. Besides the fact that I can now use generics in the code (yay!), this means the capability headers will work. So, what's so impressive about them?

The capability headers are the two headers Provide-Capability and Require-Capability. They are a further abstraction of the Import-Package and Export-Package headers we all (should ;)) know. But with the capability headers you are not as limited as with the package headers. Arbitrary things can be defined, e.g.
Provide-Capability: sensor; type=gyro

would be a valid statement. But you are not limited to one attribute:
Provide-Capability: sensor; type=heat; minTemp=0; maxTemp=100

is also possible. And the bundle that requires such capabilities can use an LDAP filter expression:
Require-Capability: sensor; filter:="(&(type=heat)(minTemp=0)(maxTemp=100))"

That way it is possible to find exactly what is needed, in a way that lets you specify more than just packages and versions.

One use case that quickly came to my mind is process definitions that depend on each other, e.g. if you have a process with a call activity (please excuse that I didn't prepare an exhaustive example). Imagine a calling process, the "Hunger process", which contains a call activity that invokes a second, very simple process, the "Phone process".

The last time I checked, there was nothing stopping you from trying to start the Hunger process even though the Phone process hasn't been deployed yet. If the Hunger process were something you want to start automatically, you would run into a nasty exception. Here, the headers can help: you could simply declare in your MANIFEST that you require the Phone process before your bundle can be started:
Require-Capability: process; filter:="(key=Phone_process)"

You could also add a version number or whatever seems useful. The bundle containing the Phone process should then, of course, contain the matching part:
Provide-Capability: process; key=Phone_process

So, when you deploy the bundle with the Hunger process, it cannot be started without the bundle containing the Phone process. That way you can manage your process interdependencies without running into exceptions.
Finally, I want to give you a short example in case you use the maven-bundle-plugin.

## Setting the headers with the maven-bundle-plugin

With the maven-bundle-plugin it is really easy to set the headers. I'll assume that you use <packaging>bundle</packaging> in your POM. Here's how you can set the headers:
<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
        <instructions>
            <Provide-Capability>process; key=Phone_process</Provide-Capability>
        </instructions>
    </configuration>
</plugin>

See, piece of cake ;)
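For completeness, the bundle containing the Hunger process would declare the matching requirement the same way (a sketch assuming the same plugin setup as above):

```xml
<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
        <instructions>
            <!-- refuse to resolve unless a bundle provides the Phone process -->
            <Require-Capability>process; filter:="(key=Phone_process)"</Require-Capability>
        </instructions>
    </configuration>
</plugin>
```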

I hope I could give you some idea of how you could use the capability headers that OSGi 4.3 introduced. This was just a quick example, but I think it nicely shows how OSGi can support your BPMN processes.

Copyright @ 2013 Wrong tracks of a developer.
