Tuesday, November 18, 2014

camunda BPM engine: use custom VariableType to resist the urge to flush

Introduction

I hope all of you are aware of the fact that you can provide a ProcessEngine with your own VariableTypes. If not, I'll give you a short introduction. Please note that my descriptions are based on camunda-engine 7.1.0. There will be some changes in version 7.2.0 and I am not sure if my observations will still be true.

VariableType

 

VariableTypes help the ProcessEngine store your process variables in the table ACT_RU_VARIABLE. I would call them a mediator between the possible variables and the database schema. There are VariableType implementations for
  • Boolean
  • Serializable
  • Date
  • Double
  • Integer
  • JPA Entities
  • Long
  • Null
  • Short
  • String
  • and CustomObjects (about which I'll talk later)

If you try to add an object as a process variable that doesn't belong to one of those types, you'll see this exception:

org.camunda.bpm.engine.ProcessEngineException: couldn't find a variable type that is able to serialize <object>
    at org.camunda.bpm.engine.impl.variable.DefaultVariableTypes.findVariableType(DefaultVariableTypes.java:62)
    at org.camunda.bpm.engine.impl.persistence.entity.VariableScopeImpl.getNewVariableType(VariableScopeImpl.java:315)
    at org.camunda.bpm.engine.impl.persistence.entity.VariableScopeImpl.createVariableInstance(VariableScopeImpl.java:395)
    at org.camunda.bpm.engine.impl.persistence.entity.VariableScopeImpl.createVariableLocal(VariableScopeImpl.java:332)
    at org.camunda.bpm.engine.impl.persistence.entity.VariableScopeImpl.setVariable(VariableScopeImpl.java:259)
    at org.camunda.bpm.engine.impl.persistence.entity.VariableScopeImpl.setVariable(VariableScopeImpl.java:242)
    at de.blogspot.wrongtracks.StoreDataDelegate.execute(StoreDataDelegate.java:9)
    at org.camunda.bpm.engine.impl.delegate.JavaDelegateInvocation.invoke(JavaDelegateInvocation.java:34)
    at org.camunda.bpm.engine.impl.delegate.DelegateInvocation.proceed(DelegateInvocation.java:39)
    at org.camunda.bpm.engine.impl.delegate.DefaultDelegateInterceptor.handleInvocation(DefaultDelegateInterceptor.java:42)
    at org.camunda.bpm.engine.impl.bpmn.behavior.ServiceTaskJavaDelegateActivityBehavior.execute(ServiceTaskJavaDelegateActivityBehavior.java:49)


Provide your variable type

 

Every ProcessEngineConfiguration should have the methods setCustomPostVariableTypes(List<VariableType>) and setCustomPreVariableTypes(List<VariableType>) so you can add your variable types when configuring the engine.
But wait, why are there two methods, pre and post?
When searching for the VariableType that can handle the object you want to store as a process variable, the engine iterates over the list of VariableTypes and the first one that can handle the object wins. Maybe you want your own types to have precedence over the default types.

Flushing

 

Now that you know about VariableTypes I want to present to you my use case.

The case

 

Imagine a process that's supposed to run synchronously (i.e. without a wait state) within a JTA transaction, where every task needs a result from the preceding one. Additionally, the results are JPA entities, so by default the JPAEntityVariableType would take care of them.
The implementation shows that every time setValue() is called, the JPAEntityVariableType calls flush() on the EntityManager. Since the process runs synchronously within a transaction, the flush results in unnecessary queries on my database during process execution.

The solution

 

Here comes the CustomObjectType class. The CustomObjectType only needs a name and a class to work; the class is used to determine whether it can handle a certain object. The CustomObjectType stores all objects in the cache of the ValueFields. To get rid of the flush I instantiated a CustomObjectType with the class of my result and passed it to the configuration. Now, every time I put an entity into the process variables, the CustomObjectType places it in the cache and no flush is called.
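
Put into code, the registration could look like this minimal sketch. It assumes the 7.1.0 API (a CustomObjectType(String, Class) constructor and the setters living on ProcessEngineConfigurationImpl); MyResultEntity is a stand-in for your entity class, not a real one:

import java.util.Collections;
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.impl.cfg.ProcessEngineConfigurationImpl;
import org.camunda.bpm.engine.impl.cfg.StandaloneInMemProcessEngineConfiguration;
import org.camunda.bpm.engine.impl.variable.CustomObjectType;
import org.camunda.bpm.engine.impl.variable.VariableType;

public class EngineFactory {

    public ProcessEngine createEngine() {
        ProcessEngineConfigurationImpl configuration = new StandaloneInMemProcessEngineConfiguration();
        // "pre" types are asked first, so this one wins over the JPAEntityVariableType
        VariableType resultType = new CustomObjectType("myResult", MyResultEntity.class);
        configuration.setCustomPreVariableTypes(Collections.singletonList(resultType));
        return configuration.buildProcessEngine();
    }
}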

 

 The downside

 

Well, nothing comes without a price: if I ever need a wait state my approach won't work and I'll have to find another way or live with the flush.

 

Alternatives

 

I am not sure if my solution is the best way to solve my problem. If anyone knows a better way please let me know.

 

Small example

 

I also created a small example showing the use of the CustomObjectType, here on GitHub.

Tuesday, November 11, 2014

camunda BPM engine: How many tasks can you execute without wait state

Today I got quite curious about this topic and I don't know if anyone has ever wondered about or tried it.
At work I introduced the camunda BPM engine a few months ago. One requirement was not to reach any wait state during execution. That way we make sure the process ends synchronously and we don't show stale data.

As you can imagine, the stack traces got pretty big when an exception occurred near the end of the process (>1000 lines). So I wondered how many tasks could be executed before the Java stack is full (or anything else unexpected happens).
To try this I wrote a simple Java class:

import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngineConfiguration;
import org.camunda.bpm.engine.RuntimeService;
import org.camunda.bpm.model.bpmn.Bpmn;
import org.camunda.bpm.model.bpmn.BpmnModelInstance;
import org.camunda.bpm.model.bpmn.builder.AbstractFlowNodeBuilder;

public class Main {

    private static final int NUMBER_TASKS = 100;

    public static void main(String[] args) {
        ProcessEngineConfiguration configuration = ProcessEngineConfiguration
                .createStandaloneProcessEngineConfiguration();
        configuration.setJdbcUrl("jdbc:h2:mem:test;DB_CLOSE_DELAY=-1");
        configuration
                .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_CREATE_DROP);
        configuration.setJdbcUsername("sa");
        configuration.setJdbcPassword("");
        configuration.setHistory(ProcessEngineConfiguration.HISTORY_NONE);

        ProcessEngine engine = configuration.buildProcessEngine();
        BpmnModelInstance bpmn = erzeugeBpmn();
        engine.getRepositoryService().createDeployment()
                .addModelInstance("manyTasks.bpmn", bpmn).deploy();
        RuntimeService runtimeService = engine.getRuntimeService();
        runtimeService.startProcessInstanceByKey("manyTasks");
        System.out.println("Done");
    }

    // builds a process with NUMBER_TASKS service tasks in a row ("erzeuge" = "create")
    private static BpmnModelInstance erzeugeBpmn() {
        AbstractFlowNodeBuilder<?, ?> builder = Bpmn.createProcess().id("manyTasks").executable().startEvent();
        for (int i = 0; i < NUMBER_TASKS; i++) {
            builder = builder.serviceTask().camundaClass(EmptyDelegate.class.getName());
        }
        return builder.endEvent().done();
    }
}

As you can see, nothing fancy (and I am still happy that there is a Java API for generating BPMN).
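
The EmptyDelegate isn't shown in the post; here is a sketch of what it and the ExceptionDelegate (used a bit further down) presumably look like, as trivial JavaDelegate implementations:

import org.camunda.bpm.engine.delegate.DelegateExecution;
import org.camunda.bpm.engine.delegate.JavaDelegate;

// a do-nothing task: only the engine's own call overhead grows the stack
class EmptyDelegate implements JavaDelegate {
    public void execute(DelegateExecution execution) throws Exception {
        // intentionally empty
    }
}

// appended as the last task to provoke a stack trace at the end of the process
class ExceptionDelegate implements JavaDelegate {
    public void execute(DelegateExecution execution) throws Exception {
        throw new RuntimeException("reached the end of the process");
    }
}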
My computer has an Intel Core i7 with two cores at 2.9 GHz and 16 GB of RAM, and runs Java 1.7.0_72 64-bit. To run this I used the Eclipse defaults (Kepler SR2 x64):

--launcher.XXMaxPermSize
256M
--launcher.XXMaxPermSize
256m
-Xms40m
-Xmx512m


Ten and 100 tasks are no problem.
To get an overview of the size of the stack traces I added a JavaDelegate which throws an exception at the end.
So the method now looks like this:

// same as before, but the last task now delegates to the ExceptionDelegate
private static BpmnModelInstance erzeugeBpmn() {
    AbstractFlowNodeBuilder<?, ?> builder = Bpmn.createProcess().id("manyTasks").executable().startEvent();
    for (int i = 0; i < NUMBER_TASKS; i++) {
        builder = builder.serviceTask().camundaClass(EmptyDelegate.class.getName());
    }
    return builder.serviceTask().camundaClass(ExceptionDelegate.class.getName()).endEvent().done();
}

Also, I'd like to see how the log output grows, so here are the numbers. The additional task is always the exception task, so the numbers are 11, 101, 1001...
  • 11 tasks: 392 lines log
  • 101 tasks: 1025 lines log
  • 1001 tasks: SOF
Yay, I reached the limit ;-)

Exception in thread "main" java.lang.StackOverflowError
    at java.lang.ThreadLocal$ThreadLocalMap.getEntry(ThreadLocal.java:376)
    at java.lang.ThreadLocal$ThreadLocalMap.access$000(ThreadLocal.java:261)
    at java.lang.ThreadLocal.get(ThreadLocal.java:146)
    at org.camunda.bpm.engine.impl.context.Context.getStack(Context.java:95)
    at org.camunda.bpm.engine.impl.context.Context.getCommandContext(Context.java:46)
    at org.camunda.bpm.engine.impl.persistence.entity.ExecutionEntity.performOperationSync(ExecutionEntity.java:728)
    at org.camunda.bpm.engine.impl.persistence.entity.ExecutionEntity.performOperation(ExecutionEntity.java:719)
...

It seems like I gotta take some smaller steps:
  • 501 tasks: SOF
  • 401 tasks: SOF
  • 301 tasks: SOF
  • 201 tasks: 1025 lines log
  • 151 tasks: 1025 lines log
Wait, what?
Yes, strangely Eclipse always shows me the same number of lines for the exception after reaching a certain threshold. And no, I didn't limit the console output in Eclipse. If anyone knows why this limit exists, please let me know.

The task limit I reached was 268 tasks (267 "normal" ones and one exception task).
I am not sure about the practical implications of my "research" but as I said, I was just curious.
Maybe we can agree that processes of a certain size should reach a wait state due to organizational and technical reasons ;-)

EDIT: Please note that when executing a task as a multi-instance, every loop iteration counts as one task (I learned that the hard way ;-) )

Thursday, October 9, 2014

Assemble your custom Apache Karaf with the karaf-maven-plugin

I was quite happy to find out there is a Maven plugin with which you can assemble a full Apache Karaf and include your own features/bundles.
From time to time I like to test my bundles in a real environment. Because of that the plugin is a great way to save the steps of unzipping a new Karaf, adding my feature and installing it.
So the plugin basically serves my laziness ;-)
But before the lazy part starts (for me and you) we have to do some work to get the plugin running.

I will start to describe the things I figured out. Then I will show you my final configuration and at the end I will talk about the problems I encountered.
Of course you can take a look at the documentation (here and here), too.

Karaf-assembly

 

To start your assembly project you just need an empty Maven project with the packaging "karaf-assembly" and the plugin, of course.

To configure the features for the plugin (so the features will end up in the Karaf) there are three options:
  1. startupFeature
  2. bootFeature
  3. installedFeature
Here is an example:

<configuration>
  <bootFeatures>
    <feature>standard</feature>
    <feature>management</feature>

    <feature>camunda-bpm-karaf-feature-minimal</feature> 
  </bootFeatures>
</configuration>

All three types result in a different configuration. Since I don't want to copy the documentation I'll give a very brief explanation.

startupFeatures

All the bundles from your feature will appear in startup.properties, be copied to system/ and be started with the Karaf.

bootFeatures

All the bundles from your feature will be copied to system/. The features you listed will appear in org.apache.karaf.features.cfg and will be installed when starting Karaf. The path to your feature.xml will be added to org.apache.karaf.features.cfg as a feature repository.

installedFeatures

All the bundles from your feature will be copied to system/. The path to your feature.xml will be added to org.apache.karaf.features.cfg as a feature repository.

You can see that each kind of *Features is treated a little less eagerly than the one before. Please note that "compile" dependencies in your POM will be treated like a startupFeature.

All the dependencies you want to include either have to be of type "kar" or have to have the classifier "features" and type "xml", e.g.:

<dependency>
  <groupId>org.apache.karaf.features</groupId>
  <artifactId>standard</artifactId>
  <version>3.0.2-SNAPSHOT</version>
  <classifier>features</classifier>
  <type>xml</type>
  <scope>runtime</scope>
</dependency>

Other dependencies will be ignored.

That was all I could figure out about the configuration of the plugin. Now let's have a look at my project.

My project

 

As mentioned before my project contains no classes or anything under src/resources. It just has the pom.xml, which looks like this (Google Drive link).
I added a small shell script because the Karaf start file wasn't executable and because I didn't want to move to target/assembly/... every time. Also, I had a small problem with Java (see the "Java 8" heading below).
The script looks like this:

export JAVA_HOME=$(/usr/libexec/java_home -v 1.6)
chmod 777 ./target/assembly/bin/karaf
./target/assembly/bin/karaf start


Nothing fancy ;-) So, that's already all about my project. Finally, I want to tell you about the problems I faced.

Issues


Plugin version

I had the problem that when a feature contained nested features the nested ones wouldn't be resolved. It took me a while and some remote debugging to find the problem. After I asked in the mailing list I was told that the problem existed in my version (3.0.1) and is fixed in the next one.
So you should definitely use the 3.0.2-SNAPSHOT version despite the fact that it's a snapshot. Jean-Baptiste made some great improvements in that version. The logging is way better and you can have nested features.

Ordering of dependencies

After upgrading my version I could see that all of my bundles were successfully installed into the system/ directory. But after starting my Karaf they weren't deployed. The "mvn:" URL for my feature was missing in the org.apache.karaf.features.cfg "featuresRepositories" property.
I found out that the problem was in the order of my dependencies.

My feature was the first dependency, followed by the Apache Karaf dependencies. Like this:
<dependencies>
  <dependency>
    <groupId>org.camunda.bpm.extension.osgi</groupId>
    <artifactId>camunda-bpm-karaf-feature</artifactId>
    <version>1.1.0-SNAPSHOT</version>
    <classifier>features</classifier>
    <type>xml</type>
    <scope>runtime</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.karaf.features</groupId>
    <artifactId>framework</artifactId>
    <version>3.0.2-SNAPSHOT</version>
    <type>kar</type>
  </dependency>
  <dependency>
    <groupId>org.apache.karaf.features</groupId>
    <artifactId>standard</artifactId>
    <version>3.0.2-SNAPSHOT</version>
    <classifier>features</classifier>
    <type>xml</type>
    <scope>runtime</scope>
  </dependency>
</dependencies>

The problem is that the framework kar contains all the configuration files. So when the plugin tries to update the config file with my feature, the file is not present yet. So be careful that the framework kar is your first dependency.

Java 8

 

Edit: As Jean-Baptiste told me (thank you again), the Java 8 problem is only related to version 3.0.1, which I can hereby confirm. So if you have followed my advice and use 3.0.2 you can skip this part.

Being the young and hip person I am ;-) my MacBook was already running Java 8. When I assembled and started a Karaf it would start without a problem (at least it seemed so). But hitting tab only showed a small number of commands.


Every command, even help, would answer with a NullPointerException. The NPE itself looked like this:

2014-10-07 16:55:55,232 | ERROR | Local user karaf | ShellUtil                        | 37 - org.apache.karaf.shell.console - 3.0.1 | Exception caught while executing command
java.lang.NullPointerException
    at org.apache.felix.gogo.runtime.Reflective.invoke(Reflective.java:61)[37:org.apache.karaf.shell.console:3.0.1]
    at org.apache.felix.gogo.runtime.CommandProxy.execute(CommandProxy.java:82)[37:org.apache.karaf.shell.console:3.0.1]
    at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:477)[37:org.apache.karaf.shell.console:3.0.1]
    at org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:403)[37:org.apache.karaf.shell.console:3.0.1]
    at org.apache.felix.gogo.runtime.Pipe.run(Pipe.java:108)[37:org.apache.karaf.shell.console:3.0.1]


At first I thought something was missing. But checking the logs again, looking at what happened during startup, revealed some IllegalArgumentExceptions:

2014-10-07 16:53:23,402 | INFO  | FelixStartLevel  | ServiceRecipe                    | 19 - org.apache.aries.blueprint.core - 1.4.0 | Unable to create a proxy object for the service .component-1 defined in bundle org.apache.karaf.deployer.features at version 3.0.1 with id 25. Returning the original object instead.
java.lang.IllegalArgumentException
    at org.objectweb.asm.ClassReader

I found out (thank you internet) that this is a Java 8 related problem. The command

export JAVA_HOME=$(/usr/libexec/java_home -v 1.6)

solved my problem. To always start my Karaf with Java 6 I added this line to my start script (see previous heading).

That was all about my karaf-maven-plugin experience. I am sure there are some more hidden things I couldn't figure out. I hope my experience will be useful for someone else.
Have fun with your own custom Karaf!

Monday, September 29, 2014

Create a ProcessEngine with the ConfigurationAdminService

There is a new feature in the camunda BPM OSGi extension and I would like to introduce it to you. So, let's start with the news.

What's new?

 

The OSGi extension now exports a ManagedServiceFactory to provide another way to configure and automatically share a ProcessEngine. The factory is automatically exported when the OSGi compendium classes are present. You can then provide your configuration and the engine will be created and exported.

If you've never heard of the ConfigurationAdminService I would like to give you a short introduction.

What is the ConfigurationAdminService?

 

The ConfigurationAdminService is supposed to make provisioning and changing configuration at runtime easier. When you provide a configuration object (a dictionary), the service will find the according ManagedService or ManagedServiceFactory based on a PID (persistent identifier) and pass the configuration to it.

There are (way ;-) ) better descriptions in the OSGi Alliance blog and the Apache Felix documentation if you want to learn a little bit more about it. Let's see how we can use the service.

How to use it?

 

As I mentioned before, the configuration is just a dictionary. The keys have to correspond to the fields of a ProcessEngineConfiguration object. Simply create a Hashtable and put everything in it you need to run your engine:
    Hashtable<String, Object> props = new Hashtable<String, Object>();
    props.put("databaseSchemaUpdate", ProcessEngineConfiguration.DB_SCHEMA_UPDATE_CREATE_DROP);
    props.put("jdbcUrl", "jdbc:h2:mem:camunda;DB_CLOSE_DELAY=-1");
    props.put("jobExecutorActivate", true);
    props.put("processEngineName", "TestEngine");


Next you gotta get the ConfigurationAdminService and call createFactoryConfiguration() with the following PID:
org.camunda.bpm.extension.osgi.configadmin.ManagedProcessEngineFactory

There is also a constant for it in the ManagedProcessEngineFactory interface. After that, pass your dictionary to the Configuration object by calling the update() method. And that's it: your ProcessEngine will be created and exported.
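
Put together, the client side could look like this minimal sketch (standard OSGi Configuration Admin API; the props Hashtable is the one from above, error handling omitted):

import java.util.Hashtable;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;

public class EngineConfigurer {

    private static final String FACTORY_PID =
            "org.camunda.bpm.extension.osgi.configadmin.ManagedProcessEngineFactory";

    public void configureEngine(BundleContext context, Hashtable<String, Object> props) throws Exception {
        ServiceReference ref = context.getServiceReference(ConfigurationAdmin.class.getName());
        ConfigurationAdmin configAdmin = (ConfigurationAdmin) context.getService(ref);
        // null location: the configuration is not bound to a specific bundle
        Configuration config = configAdmin.createFactoryConfiguration(FACTORY_PID, null);
        config.update(props); // the factory now creates and exports the ProcessEngine
    }
}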

Now that you know how to use the service I would like to tell you what makes it special.

Why use the ConfigurationAdminService?

 

I remember when I first read about the ConfigurationAdminService my thought was: "That's a really great idea!". By using the service you have several ways of providing the configuration for your ProcessEngine. The easiest to imagine is that you store your configuration files in separate bundles. Every time something changes you update that bundle.

Depending on your environment there are more ways. In Apache Karaf you could place a file named
org.camunda.bpm.extension.osgi.configadmin.ManagedProcessEngineFactory.cfg
in the etc directory. Karaf would find the factory and pass the configuration to it.
Apache Felix and Equinox also provide ways to read and use configuration files.

Also, the ConfigurationAdminService helps you provide different configurations for different environments. At least, text files are easier to change and provide than .class files.

Finally I want to tell you some details about the implementation.

How is it implemented?

 

I gotta admit that the implementation is not that special. The factory uses Commons BeanUtils to find the setters for the properties. Because the setters of ProcessEngineConfiguration are fluent (they return the configuration instead of void), I couldn't use the BeanUtils or PropertyUtils classes. That's why I assemble the setter name on my own and invoke the method with MethodUtils.
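
Roughly, applying a single property looks like this (a sketch of the idea, not the actual factory code):

import org.apache.commons.beanutils.MethodUtils;

// illustrative: apply one configuration entry despite the fluent setters
public static void applyProperty(Object configuration, String key, Object value) throws Exception {
    // assemble the setter name, e.g. jdbcUrl -> setJdbcUrl
    String setterName = "set" + Character.toUpperCase(key.charAt(0)) + key.substring(1);
    // MethodUtils doesn't mind that the setter returns the configuration instead of void
    MethodUtils.invokeMethod(configuration, setterName, value);
}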

Every time the configuration of a ProcessEngine changes I stop that engine, unregister it, create a new one and register it. That is the only way to "change" the configuration of a ProcessEngine. Maybe a ProcessEngine/Configuration needs an update() method.

I would appreciate any hints or recommendations on how to improve the factory, since it's my first try at implementing a ManagedServiceFactory.

So, enjoy the new service!

Saturday, September 27, 2014

camunda BPM platform OSGi presents: integration with Process Application API

I am happy to announce that there is a new way to configure a ProcessEngine and deploy processes.
You can now use the Process Application API.
Luckily, using this API in your project is quite easy.
There are three things you have to do:
  1.  provide a processes.xml file
  2.  make a subclass of org.camunda.bpm.extension.osgi.application.OSGiProcessApplication
  3. export it as OSGi service
After that the process will be deployed and the engine will be started and exported.
To show you how easy it can be I created an example project.

Please note that the feature is right now only usable when using Blueprint.
Also you'll have to build camunda-bpm-platform and camunda-bpm-platform-osgi yourself. But the next releases should be right around the corner ;-)

Unfortunately, I wasn't able to activate the process application's local scan for process definitions (see here). I couldn't figure out a way to find resources inside an embedded jar.
Neil Bartlett mentioned the BundleWiring class. Seems like I have to wait until we upgrade the project to OSGi 4.3.
If anyone knows a way please let me know.

So, enjoy the OSGiProcessApplication and give me some feedback if you want to!

Tuesday, June 17, 2014

camunda BPM Platform OSGi 1.0.0 released

Today I am happy to announce the version 1.0.0 release of the camunda BPM Platform OSGi project.
Especially because I am the maintainer of the project ;-)

The project started on the 17th of November when we moved the "old" OSGi module out of the core platform and made it a community project.
So let's start with a review.

What did we do?

 

First of all, we now have a lot more test coverage. At the beginning there were zero tests; now we should have roughly 80% test coverage across all modules. Next to the tests there was a lot of refactoring to get smaller classes with clear responsibilities (that's what refactoring is all about, right? ;-))

Secondly, we have an Apache Karaf feature.xml and a Blueprint example project.

The first contribution from "outside" was the Apache Karaf commands module, which was developed by Elek from DCP Consulting. Thank you, again!

Finally, there is the new OSGiELResolver, which was included a few weeks ago. The new ELResolver gives us some independence from Blueprint.

As you can see, we did quite a few things, but there are still some tasks left.

What's left to do?

 

The ToDo-list states the following:
  1. adapt Process Application API for OSGi
  2. camunda webapp WAB (cockpit, tasklist, admin)
  3. create example for configuring engine using PAX-CDI
Daniel, Roman and I tried to solve number one in May. All the results are in the platfrom-api-hack branch. They still need some review.
Number two and three are still open.

That's what's left on the ToDo-list, but what else is there to do? 

The future

Of course it would be great to get some feedback from "real world" users and I hope more people will use the OSGi module in the future.
Then the open ToDos have to get done and I guess we'll find some ideas for the future (maybe Apache ServiceMix with camunda BPM).

After we've taken a look at the past, the present and the future there is only one thing left:

Finally

 

A big "thank you" to Daniel and Roman for guidance, support and having time for a hackathon with me! It is a pleasure working with you!

Sunday, May 25, 2014

Consuming arbitrary remote services with the OSGiELResolver (camunda BPM OSGi)

In my last blog post I promised to give a slightly more advanced example about how to use the new OSGiELResolver. And as I promised, here it is ;-)

Prerequisites


The setup is quite simple. We have three bundles:
  1. API
  2. Service Provider
  3. Service Consumer
You can find all the sources here. (feel free to suggest improvements, possible bugs, etc.)
As runtime I used two Apache Karaf instances on my computer (version 2.3.5; I had some problems with 3.0.1).
For remoting we'll use Apache CXF 1.4 (single bundle release).
And of course we'll need camunda BPM platform OSGi, which you'll have to build yourself.
Before I tell you more about the three bundles I'd like to point out the book "Enterprise OSGi in Action". Without that great book I couldn't have provided this example. It's definitely worth reading.

So, enough advertisement, let's take a look at the bundles.

API bundle

 

The API bundle is really simple. It only contains one interface with a method. We'll need the bundle in both runtimes.

Provider bundle

 

Now we're getting a little bit more serious. The provider bundle contains the service implementation we want to use.
The context.xml contains the important parts for remoting:
<entry key="service.exported.interfaces" 
 value="de.blogspot.wrongtracks.osgielresolver.api.SomethingService"/><entry key="service.exported.configs"
       value="org.apache.cxf.ws"

<entry key="org.apache.cxf.ws.address"
       value="http://localhost:9001/somethingservice"/>


"service.exported.interfaces" should be obvious.
"service.exported.configs" tells Distributed OSGi to look for implementation specific properties.
Lastly "org.apache.cxf.ws.address" lets us define an alternative address. It is quite helpful if you don't want to type the fully qualified name of the class in your browser or other config files.

Consumer bundle

 

Let's take a look at the consumer. This bundle needs a little bit more information to work properly. To be able to consume remote services we need the OSGI-INF/remote-service/remote-services.xml. It doesn't have to have that name or be in that directory; you can specify the path inside the bundle with the "Remote-Service" header, which I set in the POM to:
      <Remote-Service>OSGI-INF/remote-service/*.xml</Remote-Service>
I won't walk you through the remote-services.xml. I'm sure you'll find better explanations somewhere else. (e.g. in Enterprise OSGi in Action ;-) )

After we've configured this we can use the reference tag in the context.xml to find the service.
To make the service work with the OSGiELResolver we have to add two things: in the remote-services.xml the property "processExpression" has to be set, and in the context.xml we have to use a filter.
As you may know, the OSGiELResolver uses the filter to search for classes. Searching only worked when both the attribute and the filter were set.

The provider Karaf

 

Like I said, I used Karaf as runtime. The "provider" Karaf needs three bundles:
  1. API
  2. Provider
  3. Apache CXF
Just drop them into the deploy directory. It worked best for me when I started them in the order API, CXF and provider. Then everything should work as expected.

The consumer Karaf

 

The "consumer" Karaf needs a little bit more bundles (and if you run it on the same machine you'll have to change three ports). You have to add:
  1. API
  2. Consumer
  3. Apache CXF
  4. camunda BPM platform OSGi and dependencies
Drop the API, consumer and CXF jars into deploy (again, starting API, CXF and then consumer works best). Adding camunda BPM platform OSGi isn't very difficult because there is a feature.xml (assuming it is installed in your local Maven repository).
To install it type:
features:addurl mvn:org.camunda.bpm.extension.osgi/camunda-bpm-karaf-feature/1.0.0-SNAPSHOT/xml/features

and then:
features:install camunda-bpm-karaf-feature-minimal

This should resolve all your bundles. Now, if you start the consumer bundle you should see the log saying "Started process". Strangely, the logger of the service implementation was quiet. But if you uncomment the exception you can see that the service was called.

So, as you can see, the new OSGiELResolver makes it possible to consume arbitrary remote services, which is quite an improvement. I hope my example is understandable and helps to see the possibilities.

Hint

 

When you encounter this exception:
java.lang.IllegalStateException: Invalid BundleContext
just start the CXF bundle again, then it should work.

Monday, May 19, 2014

camunda BPM OSGi: the new OSGiELResolver

Introduction


Some of you may know that I am the maintainer of the camunda BPM OSGi project.
Several weeks ago I started to implement a new ELResolver (EL = expression language) and because it's finished now I want to do some shameless self-advertising for my work ;-)

The problem


The "old" ELResolver had some limitations: It could only work with one kind of classes (those who implement the JavaDelegate interface) and you had to register the ELResolver as service listener.
Also, the implementation depends on Blueprint because it used the registered component id to find the classes.

The new OSGiELResolver


The new OSGiELResolver doesn't have those limitations. In theory you can use it with every class, and it doesn't depend on Blueprint. If you want to know more, please have a look at the updated README. I would be happy if you could give me some feedback or ideas for improvement.

So far for now. I'll try to put together a more advanced example, soon.

Please note: this change breaks the API because I moved some classes, so this version would get a new major version number, if it weren't for the snapshot ;-)

Saturday, May 10, 2014

First steps with Apache ACE

Introduction


"Apache ACE is a software distribution framework that allows you to centrally manage and distribute software components, configuration data and other artifacts to target systems." (from https://ace.apache.org/)
Well, that sounds good enough to try it out, at least for me.
I like the idea of centrally configuring deployments with different versions and having a way to automatically distribute them.

Starting ACE


Setting up Apache ACE was pretty easy. The Getting started guide contains all the necessary steps.
My MacBook was the ACE server and my Raspberry Pi a target.

Using the Web GUI is easy and straightforward (nice one guys ;-) ).
But that's for little children. I wanna find a way to automate everything with scripts.

The Client Shell API

Basically there are two ways to talk to the server remotely. One is the Client Shell API and the other is via the REST API. For now I'll stick with the Shell API.

1st step: connecting to the server as shell client

Before we can write a script we have to connect to the server.
With some help from the iQSpot people (see here) I figured it out.
They suggest starting the client like this:

java -Dagent.discovery.serverurls="http://server:port"
     -Dorg.apache.ace.server="server:port"
     -Dorg.apache.ace.obr="server:port"
     -Dorg.osgi.service.http.port=-1
     -jar client.jar
 
Unfortunately, that didn't work for me (even after adding some missing backslashes).
The default assumption is that you start from the directory containing client.jar, so "-jar client.jar" wasn't the problem.
The startup searches for the client/conf directory, so when you see this exception:

java.lang.IllegalArgumentException: Bad arguments; either not an existing directory or an invalid interval.
    at org.apache.ace.configurator.Configurator.<init>(Configurator.java:89)
    at org.apache.ace.configurator.Activator.init(Activator.java:33)
    at org.apache.felix.dm.DependencyActivatorBase.start(DependencyActivatorBase.java:76)
    at org.apache.felix.framework.util.SecureAction.startActivator(SecureAction.java:645)
    at org.apache.felix.framework.Felix.activateBundle(Felix.java:2146)
    at org.apache.felix.framework.Felix.startBundle(Felix.java:2064)
    at org.apache.felix.framework.Felix.setActiveStartLevel(Felix.java:1291)
    at org.apache.felix.framework.FrameworkStartLevelImpl.run(FrameworkStartLevelImpl.java:304)
    at java.lang.Thread.run(Thread.java:722)

You're probably starting the client from a different directory.
To get rid of that exception we have to set a property:
-Dorg.apache.ace.configurator.CONFIG_DIR=
All in all the command to start the client looks like this:

java -Dagent.discovery.serverurls="http://server:port"\
     -Dorg.apache.ace.server="server:port"\
     -Dorg.apache.ace.obr="server:port"\
     -Dorg.osgi.service.http.port=-1\
     -Dorg.apache.ace.configurator.CONFIG_DIR="apache-ace-2.0.1-bin/client/conf"\
     -jar apache-ace-2.0.1-bin/client/client.jar


Now we can start the client.
But to pass a script to the shell we need two more arguments. Thanks again to the iQSpot people. They already pointed out those arguments:
  • -Dgosh.args="--args"
  • -Dace.gogo.script.delay=delay
  • -Dace.gogo.script=/path/to/script.gogo
What do those three do?
Everything you pass as "gosh.args" will be executed immediately. If you pass "--help", for example, and start the client you'll see the help output.
The delay is helpful when you want to give your client some time to synchronize with the server.
"ace.gogo.script" should be obvious ;-)
We end up with the following command:

java -Dagent.discovery.serverurls="http://server:port"\
     -Dorg.apache.ace.server="server:port"\
     -Dorg.apache.ace.obr="server:port"\
     -Dorg.osgi.service.http.port=-1\
     -Dorg.apache.ace.configurator.CONFIG_DIR="apache-ace-2.0.1-bin/client/conf"\
     -Dace.gogo.script.delay="3000"\
     -Dace.gogo.script="script.foo"\
     -jar apache-ace-2.0.1-bin/client/client.jar

Now we have to find out what we should put into "script.foo".
 

Shell commands


Every (basic) command is described here.
The steps are quite simple: cw, ca, cf, ca2f, cd, cf2d
If you don't like or can't remember the abbreviations (it took me a while) there is also a nice picture in the REST API documentation.
What the picture is missing is cw, or "create workspace". When using the Shell API you need a workspace, which you can commit later.

The script

 

So, what should script.foo do? Let's assume we have to upload some generated artifacts from our CI server.
The steps are:
  1. create workspace
  2. add the new Jars as artifacts from certain directory
  3. create a new feature
  4. add artifacts to feature
  5. create a new distribution
  6. add new feature and existing ones to distribution
  7. add feature to existing target 
I have to admit that it took me quite a while to figure everything out because I'm not very experienced with Apache Felix Gogo.
Creating the workspace is easy: w = (cw)
Now we can call the workspace with $w. Adding the jars was more difficult. Let's assume the directory is ./toAdd. Then the command looks like this:

each ([(ls toAdd)]) {$w ca (($it toURL) toString) false}

You "toAdd" can be changed to any path and you could use some wildcards, like ls toAdd/*.jar
I guess if you're used to GoGo the command won't be a surprise. If you're not used to it, I would like to explain the different parts to you:
each takes a list and a function. ls toAdd returns a File array. That's why we need the brackets: they convert the array into a list. After that comes the function, indicated by the braces.
$w ca is the method to create an artifact. $it is the iterator over the list that is provided by each.
Then we call the methods toURL and toString because reflection makes it possible ;-)

Third step: add all artifacts to feature

each ($w la "(Bundle-SymbolicName=org.camunda.*)") {symbolicName=($it getAttribute "Bundle-SymbolicName"); $w ca2f "(Bundle-SymbolicName="$symbolicName")" "(name=test-feature)"}

Again, we use a for-each-loop.
$w la lists all the bundles that match the passed pattern. (Here, I want to add all camunda bundles, no advertisement ;-))
Then I save the symbolic name in a variable, so it's easier for me later to reference it.
org.apache.ace.client.repository.RepositoryObject has a getAttribute method, which we use here.
Also, please note the semicolon.
We use the symbolic name as part of the first argument for ca2f (create artifact2feature).
The string contains three parts:
  • "(Bundle-SymbolicName="
  • $symbolicName
  • ")"
I don't know why, but we don't need a "+" for string concatenation. The second argument is the name of the feature. I just assume it stays the same: "test-feature".
Creating a distribution and a feature2distribution are nothing special.

All in all we end up with the following:

w = (ace:cw)
$w cf test-feature
$w cd test-distro

each ([(ls toAdd)]) {$w ca (($it toURL) toString) false}

each ($w la "(Bundle-SymbolicName=org.camunda.*)") {symbolicName=($it getAttribute "Bundle-SymbolicName"); $w ca2f "(Bundle-SymbolicName="$symbolicName")" "(name=test-feature)"}

$w cf2d "(name=test-feature)" "(name=test-distro)"

$w commit


That should do the trick so far.
Stay tuned for my next steps with ACE ;-)

Tuesday, May 6, 2014

Glassfish 4, Commons Mail and "UnsupportedDataTypeException: no object DCH for MIME type multipart/mixed"

I know there are a bazillion posts/threads/etc. about the exception mentioned in the title and now there are a bazillion + one ;-)
Unfortunately, I couldn't find the solution I'm about to present anywhere else.

First some context:
My class extends an Activiti class and uses Apache Commons Mail to send an email.
The email contains some text and has a file (txt/pdf/docs) attached.
Everything runs inside a Glassfish 4 and the Jars are deployed as OSGi bundles.

When calling email.send() the server threw the feared UnsupportedDataTypeException:

Caused by: javax.activation.UnsupportedDataTypeException: no object DCH for MIME type multipart/mixed;
    boundary="----=_Part_0_397989068.1398665205325"
    at javax.activation.ObjectDataContentHandler.writeTo(DataHandler.java:891)
    at javax.activation.DataHandler.writeTo(DataHandler.java:317)
    at javax.mail.internet.MimeBodyPart.writeTo(MimeBodyPart.java:1574)
    at javax.mail.internet.MimeMessage.writeTo(MimeMessage.java:1840)
    at com.sun.mail.smtp.SMTPTransport.sendMessage(SMTPTransport.java:1119)
    ... 120 more


Like I mentioned, the Internet is full of solutions but none of them worked for me.
Because Glassfish showed me that all of my bundles were correctly linked and resolved, the problem had to be somewhere else.

My colleague then told me I should try to change the TCCL (thread context classloader). After some trial and error it worked (I tried the one from commons.mail, the one from javax.activation and one I forgot ;-)).

The solution was to import javax.mail in my bundle and change the TCCL to the javax.mail classloader:

Thread.currentThread().setContextClassLoader(javax.mail.Message.class.getClassLoader());
email.send();
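
If code running after the send relies on the original TCCL, it is safer to restore it afterwards. The same fix with save and restore (email being the Commons Mail Email object from above):

ClassLoader original = Thread.currentThread().getContextClassLoader();
try {
    // javax.mail's classloader can see the DataContentHandler implementations
    Thread.currentThread().setContextClassLoader(javax.mail.Message.class.getClassLoader());
    email.send();
} finally {
    // restore it so later code in this thread sees the classloader it expects
    Thread.currentThread().setContextClassLoader(original);
}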

I am not sure why only the javax.mail classloader works. For me it is some arcane dependency/classloading/visibility problem.
Nevertheless, I hope this post helps some Glassfish/OSGi users.

Finally, I would like to thank my colleague @spost1970 for helping me find a solution.

Saturday, April 26, 2014

Integrating Apache Aries blueprint into Glassfish 4

After experimenting with Glassfish 4 lately I would like to let you know what I am up to. It's nothing big (yet) ;-)

A little bit of background

Glassfish comes with integrated OSGi support (Apache Felix) but without Blueprint (as far as I know). So putting a Blueprint container into Glassfish became my task.

The nice thing about Glassfish is that it combines Java EE (especially EJBs) and OSGi and that, in contrast to JBoss, it has a nice OSGi web console.
If you want to get to know more about the OSGi-JEE combination the keyword for your favourite search engine is "fighterfish".

Most of the credit goes to Yong Tang and his blog entry. He describes the integration of Aries Application but also describes the basic parts necessary for my task.

The problem


So, before we start, what is the problem? If you're familiar with Glassfish you certainly know there is an autodeploy/bundles directory. Why don't I just drop the necessary bundles into it?
When using the autodeploy directory Glassfish doesn't start the bundles, so you'll have to do it by hand every time you empty the osgi-cache directory and, of course, initially.
But there's a more convenient way.

Let's get it on


What's the better way?
Just drop the jars
  • Aries Blueprint API (v1.0.0)
  • Aries Blueprint (v1.1.0)
  • Aries Proxy (v1.0.0)
  • Aries Util (v1.0.0)
into glassfish/modules/autostart.
To make sure all dependencies are there, add slf4j-api (v1.7.2), logback-core (v1.0.13) and logback-classic (v1.0.13) (or whichever logging framework you prefer). You don't need any additional configuration because we didn't create a subdirectory.

See, piece of cake. The trick is to find the right directory. Now the Blueprint extender will do its job right after start up.

Wednesday, January 22, 2014

Activiti/camunda BPM: custom behavior and BPMN extension elements using Blueprint

Introduction 

 

In the last few weeks I have been working on a problem regarding OSGi-Blueprint and Activiti. Because it wasn't as easy as I would have hoped I want to share my solution with you. I will start by explaining my environment, show you the problem and then I will explain my first attempt. After that I will present my solution. Finally I will show some ideas how to make it better and things that I did not test.

Just a short hint about the writing: when I reference a class or some XML it's written in italics, e.g. Object. When you see "process engine", I am talking about the whole thing, but when you see ProcessEngine it's the actual class.

My environment

 

I use the Activiti-framework in version 5.12.1, Apache Aries in version 1.0.0 with a little modification and my own ProSt bundles. ProSt can be found here. The README.md explains why and what I changed in Aries.

I haven't tried it yet, but I am pretty sure that camunda BPM suffers from the same problem because
both share the same MailActivityBehavior and BlueprintELResolver classes and use <extensionElements> for injection. So if you prefer camunda and see "activiti" somewhere, just replace it with "camunda" in your head ;-)

The problem 

 

In general, I just wanted to send an e-mail during my process-execution. Sounds pretty easy, right?

My SendMailWithAttachmentBehaviour class extends the previously mentioned MailActivityBehavior class. The process definition contains all the necessary information to send the e-mail, e.g. from, to and subject. Only the attachment is missing, which I get from the execution environment.

Because I use Blueprint I cannot use the activiti:class or type="mail" attributes in the process definition. I have to declare the class this way:
activiti:delegateExpression="${sendMailWithAttachmentBehaviour}"

A little hint: the name in the braces has to match the one used as bean id in the blueprint.xml.

The other ways do not work with OSGi because of class visibility etc.

The easy part was to extend the BlueprintELResolver class (ProStBlueprintELResolver) and add a way to register custom behavior classes.

So, what happens when the process engine tries to resolve the expression?
When the bundle is loaded Blueprint creates a dynamic proxy and registers it at the ProStBlueprintELManager.
After the process reaches the ServiceTask which delegates to the ${sendMailWithAttachmentBehaviour} the process-engine asks its ExpressionLanguageResolvers if they know something with the name "sendMailWithAttachmentBehaviour". Logically the proxy is found.
After that the process engine tries to set the extension-elements at the class.
First it tries to find setter methods and if it cannot find setters it tries field injection. (see ClassDelegate.applyFieldDeclaration())
Both ways do not work.
But why?
Of course a proxy does not have any fields. But why is it not possible to just add the setters to the SendMailWithAttachmendBehaviour class?
The call is proxy.getClass().getMethods() and according to the documentation this will return all the methods of the interfaces that the proxy was created with. ActivityBehavior does not declare the setSubject() etc. methods because they are only needed for e-mails.
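
You can see this behavior with a plain JDK proxy; a tiny self-contained demo (Runnable stands in for ActivityBehavior):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyMethodsDemo {
    public static void main(String[] args) {
        Object proxy = Proxy.newProxyInstance(
                ProxyMethodsDemo.class.getClassLoader(),
                new Class<?>[] { Runnable.class }, // stand-in for ActivityBehavior
                new InvocationHandler() {
                    public Object invoke(Object p, Method m, Object[] a) {
                        return null;
                    }
                });
        for (Method method : proxy.getClass().getMethods()) {
            // prints run() plus the public methods of Object, nothing else
            System.out.println(method.getName());
        }
    }
}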

First attempt

 

At first I thought the solution was quite obvious. I would just export a second interface containing the setters, like this:

<bean id="sendMailWithAttachment" class="de.blogspot.wrongtracks.prost.example.behaviour.SendMailWithAttachmentBehaviour" />

<service ref="sendMailWithAttachment">
  <interfaces>
    <value>org.activiti.engine.impl.pvm.delegate.ActivityBehavior</value>
    <value>de.blogspot.wrongtracks.prost.example.behaviour.ExtensionElementsMailSetter</value>
  </interfaces>
</service>
But wait: if you take a look at the (old) context.xml you can see that my reference listener just listens for ActivityBehavior and not for the other interface. That's why the created proxy won't contain the methods from the other interface. Too bad...

The solution

 

I found the solution accidentally while reading the Apache Aries Blueprint documentation. This chapter points out that you can also listen for service references.
I changed the methods to accept a ServiceReference instead of an ActivityBehavior, and when the expression should be resolved I use the BundleContext to get the service. At that point it is not a proxy, it is the implementation.
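
To illustrate the shape of that change, here is a hypothetical sketch (names and fields are illustrative, not the actual ProStBlueprintELResolver code): the Blueprint reference listener stores the ServiceReference under the component name, and resolution fetches the real object from the BundleContext:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

public class BehaviourRegistry {

    private final Map<String, ServiceReference> references =
            new ConcurrentHashMap<String, ServiceReference>();
    private final BundleContext bundleContext;

    public BehaviourRegistry(BundleContext bundleContext) {
        this.bundleContext = bundleContext;
    }

    // Blueprint bind method: we receive the ServiceReference, not a proxy
    public void bindService(ServiceReference reference) {
        String name = (String) reference.getProperty("osgi.service.blueprint.compname");
        if (name != null) {
            references.put(name, reference);
        }
    }

    public void unbindService(ServiceReference reference) {
        String name = (String) reference.getProperty("osgi.service.blueprint.compname");
        if (name != null) {
            references.remove(name);
        }
    }

    // called during EL resolution: getService returns the real implementation,
    // so setter and field injection of the extension elements work again
    public Object lookup(String expressionName) {
        ServiceReference reference = references.get(expressionName);
        return reference == null ? null : bundleContext.getService(reference);
    }
}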

Then everything works just fine. I don't even need the setter interface anymore.
You can see the solution when you look at the new context.xml and the previously mentioned ProStBlueprintELResolver. (The previously shown link pointed to an old version, so nothing is spoiled ;-) )

That's it, that is my solution to add custom behavior to the process engine and use extension elements in the BPMN XML.

How could we improve the whole thing?

 

Strangely, I have no idea how the whole thing could be improved. I would like to hear your ideas. Also, I would like to know if you think that the way presented here is good or bad or something in between.

What didn't I try?

 

You should note that I have not tried to find out how JavaDelegates behave in the same situation. I just did not have time and I wanted to show you my solution as soon as I finished it.

 
