A typical pain point in continuous delivery is the database schema and contents. The schema changes over time and cannot simply be deployed like bundles. Sometimes even the database contents have to be adapted when the schema changes. In a modern deployment pipeline we of course also want to automate this part of the deployment. This tutorial builds upon the db tutorial and shows how to leverage Liquibase to automatically keep the db schema up to date with the Java code. We assume familiarity with how to create DataSources using pax-jdbc-config and declarative services.

Tutorial code

The full code of this example can be found on GitHub in the Karaf-Tutorial repo, directory liquibase.

Declare schema using liquibase changesets

Liquibase manages a database schema over time using changesets. A changeset is created at least for every release that needs to change the schema.
In our example the first changeset creates a simple table and populates it with a record.

<changeSet id="1" author="cs">
    <createTable tableName="person">
        <column name="id" type="bigint" autoIncrement="true">
            <constraints primaryKey="true" nullable="false" />
        </column>
        <column name="name" type="varchar(255)" />
    </createTable>
    <insert tableName="person">
        <column name="id">1</column>
        <column name="name">Chris</column>
    </insert>
</changeSet>

The changeset can be stored in different places. For me the schema is closely related to the application code. So it makes sense to store it inside a bundle. In the example the changesets can be found in migrator/src/main/resources/db/changesets.xml.

Applying the changeset

Liquibase provides many ways to apply the schema. It can be done programmatically, as a servlet filter, from Spring or from Maven. In many cases it makes sense to apply the schema changes before the application starts, so when the user code starts it knows that the schema is in the correct state. In case the application has no db admin rights Liquibase can also create a SQL script tailored to the database that an administrator can apply. While this is necessary in some settings it breaks the idea of fully unattended deployments. In our example we want to apply the schema to the DataSource that is given to our application and we want to make sure that no user code can work on the DataSource before the schema is updated. We create the DataSource from an OSGi config using pax-jdbc-config. Luckily pax-jdbc-config 1.1.0 now supports a feature called PreHook. This allows us to define code that runs on the DataSource before it is published as a service.

Using PreHook to apply the database changes

To register a PreHook we implement the PreHook interface and publish our implementation as an OSGi service. We also give it a name using the service property "name".

Our PreHook to do the Liquibase schema update looks like this:

public class Migrator implements PreHook {

	@Override
	public void prepare(DataSource ds) throws SQLException {
		try (Connection connection = ds.getConnection()) {
			prepare(connection);
		} catch (LiquibaseException e) {
			throw new RuntimeException(e);
		}
	}

	private void prepare(Connection connection) throws LiquibaseException {
		DatabaseConnection databaseConnection = new JdbcConnection(connection);
		Database database = DatabaseFactory.getInstance().findCorrectDatabaseImplementation(databaseConnection);
		ClassLoader classLoader = this.getClass().getClassLoader();
		ResourceAccessor resourceAccessor = new ClassLoaderResourceAccessor(classLoader);
		Liquibase liquibase = new Liquibase("db/changesets.xml", resourceAccessor, database);
		liquibase.update(new Contexts());
	}
}

By itself this service would not be called. We also need to reference it in our DataSource config using the property "ops4j.preHook":
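A sketch of such a config could look like this (all values besides the ops4j.preHook key are illustrative; the preHook value must match the "name" service property of the PreHook service):

```properties
# etc/org.ops4j.datasource-person.cfg (illustrative values)
osgi.jdbc.driver.name=H2
databaseName=person
dataSourceName=person
# must match the "name" service property of our PreHook
ops4j.preHook=persondb
```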

Accessing the DB

The PersonRepo class uses plain JDBC code to query the person table and return a List of Person objects. Person is a simple bean with id and name properties.

@Component(service = PersonRepo.class)
public class PersonRepo {

	@Reference
	DataSource ds;

	public List<Person> list() {
		try (Connection con = ds.getConnection()) {
			ArrayList<Person> persons = new ArrayList<>();
			ResultSet rs = con.createStatement().executeQuery("select id, name from person");
			while (rs.next()) {
				persons.add(read(rs));
			}
			return persons;
		} catch (SQLException e) {
			throw new RuntimeException(e.getMessage(), e);
		}
	}

	private Person read(ResultSet rs) throws SQLException {
		Person person = new Person();
		person.id = rs.getLong("id");
		person.name = rs.getString("name");
		return person;
	}
}

Testing the code

mvn clean install

Download and start Apache Karaf 4.1.1. Then install the example-lb feature

cat | tac -f etc/org.ops4j.datasource-person.cfg
feature:install example-lb jdbc
jdbc:tables person

This shows the list of tables for the DataSource person. In our case it should contain a table person with the columns id and name.


jdbc:query person "select * from person"

This should display one person named Chris with id 1. The schema as well as the data were created by Liquibase.

Introducing a new column

Now a typical case is that we want to add a new column to a table in the next release of the software. We will do this in code and schema step by step.

After our first test run with the old code the database will exist in the old state. So we of course want all data to be preserved when we update to the new version.

Add a new changeset to liquibase

We add the new changeset to the file

<changeSet id="2" author="cs">
    <addColumn tableName="person">
        <column name="age" type="int" defaultValue="42"/>
    </addColumn>
</changeSet>

When liquibase updates the database it will see that the current state does not include the new changeset and apply it.
So all old data should still be present and the person table should have a new column age with all ages of persons set to the default value 42.

Use the new column in the code

The Person model object is already prepared for the new property to keep things simple.

So we only need to adapt the PersonRepo. We add age to the select:

select id, name, age from person

and also make sure we read the age from the resultset and store it in the person record:

person.age = rs.getInt("age");

Note that this code will break if there is no age column. So it also verifies that the new column was applied correctly.

Test the new code

mvn clean install

Then we update the example bundles, e.g. using bundle:update, so they pick up the db changes and the new code.


First we do a quick check to see the column is actually added

jdbc:tables person

The person table should now have three columns id, name and age


jdbc:query person "select * from person"

The person Chris should now have the default age of 42.

The declarative services (DS) spec has some hidden gems that really help to make the most out of your application.

Use the DS spec annotations to define your component

Some older articles about DS define the components using xml. While this is still possible it is much simpler to use annotations for this purpose.
There are three sets of annotations available: bnd style, Felix style and OSGi DS spec style. While the first two sets can still be seen in the wild you
should only use the OSGi spec annotations for new code as the other sets are deprecated.

At runtime DS only works with the xml, so make sure your build creates xml descriptors from your annotated components. Recent versions of bnd, maven-bundle-plugin
and bnd-maven-plugin all handle the spec DS annotations by default. So no additional settings are required.

Activate component by configuration

@Component(
    name = "myComponent",
    immediate = true,
    configurationPolicy = ConfigurationPolicy.REQUIRE
)

In some cases it makes sense to always install a bundle but to be able to activate and deactivate a service it provides.
By using configurationPolicy = REQUIRE the component is only activated if the configuration pid "myComponent" exists.
Do not forget immediate=true as by default the component would be lazy and thus not activate unless someone requires it.
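In Karaf, for example, simply creating the matching config file is enough to switch the component on (file content is illustrative; even an empty file works):

```properties
# etc/myComponent.cfg
# The mere existence of this pid activates the component.
# Deleting the file deactivates it again.
someKey = someValue
```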

Override service properties using config

By default a DS component is published as a service with all properties that are set in the @Component annotation.
Every component is also configurable using a config pid that matches the component name. It is less well known that the
configuration properties also show up as service properties and override the settings in the annotation.

One use case for this is to publish a component using Remote Service Admin that was not marked for export by the developer.
Another use case is to override the topic an EventAdmin EventHandler listens on.
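For the Remote Service Admin case, a sketch of such a config (the pid is illustrative; service.exported.interfaces is the standard RSA property):

```properties
# etc/org.example.MyComponent.cfg (pid illustrative)
# shows up as a service property and marks the service for export
service.exported.interfaces=*
```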

Override injected services of a component using config

If a component is injected with a service using @Reference then the service can be statically filtered using the target property of the annotation in the
form of an LDAP filter.
This filter can be overridden using a config property refname.target where refname is the name of the reference the service is injected into.
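As a sketch, for a reference named "storage" (the pid, reference name and filter are illustrative; the DS spec defines the property as the reference name with a ".target" suffix):

```properties
# etc/<component pid>.cfg
# overrides the target filter of the reference named "storage"
storage.target=(speed=fast)
```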

Create multiple instances of a component using config

Another not so well known fact is that a DS component not only reacts on a single configuration pid but also on factory configs. If the pid of your component config is "myconfig" then in Apache Karaf you can create configs named myconfig-1.cfg and myconfig-2.cfg and DS will create two instances of your component.
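A sketch in Karaf, assuming the component pid "myconfig" and an illustrative port property: each file yields its own component instance activated with its own properties.

```properties
# etc/myconfig-1.cfg
port = 8080

# etc/myconfig-2.cfg
port = 8081
```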

Typesafe configuration and Metatype information

Starting with DS 1.3 you can define type safe configs and also have them available as meta type information for config UIs.

@ObjectClassDefinition(name = "Server Configuration")
@interface ServerConfig {
  String host() default "";
  int port() default 8080;
  boolean enableSSL() default false;
}

@Component
@Designate(ocd = ServerConfig.class)
public class ServerComponent {

  @Activate
  public void activate(ServerConfig cfg) throws IOException {
    ServerSocket sock = new ServerSocket();
    sock.bind(new InetSocketAddress(cfg.host(), cfg.port()));
    // ...
  }
}

See Neil Bartlett's post for the details.

Internal wiring

In DS every component publishes a service. So compared to blueprint DS seems to miss a feature for creating internal components / beans that are only visible inside the bundle.
This can be achieved by putting a component into a private package and setting the service property to the class of the component. The component is still exported as a service
but the service will not be visible to the outside as the package is private. Still the service can be injected into other classes of the bundle using the component class.

Field injection and constructor injection

Since DS 1.3 (part of the OSGi 6 specs) you can also inject services directly into a field like:

@Reference
EventAdmin eventAdmin;

You can even inject into a private field but remember this will make it very difficult to write a unit test for your component. I personally always use package visibility for
fields I inject stuff into. I then put the unit test into the same package and can set the field inside the test without doing any special magic.

Constructor injection is not possible at the time of writing this article but it is part of DS 1.4 (part of the OSGi R7 specs). The implementation of this spec is currently under way in Felix SCR.

Injecting multiple matching services into a List<MyService>

Since DS 1.3 it is possible to inject all services matching the interface and an optional filter into a List:

@Reference
List<MyService> myservices;

By default DS assumes the static policy. This means that whenever the set of matching services changes the component is deactivated and activated again. While this is the safest way it might be too slow for your use case.
So injecting services dynamically can make sense.

Injecting services dynamically

By default DS will restart your component on reference changes. If this is too slow in your case you can allow DS to dynamically change the injected service(s).


@Reference(policy = ReferencePolicy.DYNAMIC)
volatile MyService myService;

This tutorial shows how to use Declarative Services together with the new Aries JPA 2.0. You can find the full source code on github in Karaf-Tutorial/tasklist-ds.

Declarative Services

Declarative Services (DS) is the biggest contender to blueprint. It is a slim service injection framework that is completely focused on OSGi. DS allows you to offer and consume OSGi services and to work with configurations.

At the core DS works with xml files to define scr components and their dependencies. They typically live in the OSGI-INF directory and are announced in the Manifest using the header "Service-Component" with the path to the component descriptor file.  Luckily it is not necessary to directly work with this xml as there is also support for DS annotations. These are processed by the maven-bundle-plugin. The only prerequisite is that they have to be enabled by a setting in the configuration instructions of the plugin.
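With older versions of the maven-bundle-plugin the annotations had to be switched on explicitly. A sketch of such a configuration (newer plugin versions process the spec annotations out of the box, so this instruction is then unnecessary):

```xml
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <configuration>
    <instructions>
      <!-- process OSGi DS spec annotations into OSGI-INF/*.xml descriptors -->
      <_dsannotations>*</_dsannotations>
    </instructions>
  </configuration>
</plugin>
```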


For more details see the OSGi Declarative Services specification.

DS vs Blueprint

Let us look into DS by comparing it to the already better known blueprint. There are some important differences:

  1. Blueprint always works on a complete blueprint context. So the context will be started when all mandatory service deps are present. It then publishes all offered services. As a consequence a blueprint context can not depend on services it offers itself. DS works on Components. A component is a class that offers a service and can depend on other services and configuration. In DS you can manage each component separately like start and stop it. It is also possible that a bundle offers two components but only one is started as the dependencies of the other are not yet there.
  2. DS supports the OSGi service dynamics better than blueprint. Let's look into a simple example:
    You have a DS component and a blueprint bean that each offer a service A and depend on a mandatory service B. Blueprint will wait on the first start for the mandatory service to be available. If it does not come up it will fail after a timeout and will not be able to recover from this. Once the blueprint context is up it stays up even if the mandatory service goes away. This is called service damping and has the goal to avoid restarting blueprint contexts too often. Services are injected into blueprint beans as dynamic proxies. Internally the proxy handles the replacement and unavailability of services. One problem this causes is that calls to an unavailable service will block the thread until a timeout and then throw a RuntimeException.
    In DS on the other hand a component lifecycle is directly bound to dependent services. So a component will only be activated when all mandatory services are present and deactivated as soon as one goes away. The advantage is that the service injected into the component does not have to be proxied and calls to it should always work.
  3. Every DS component must be a service. While blueprint can have internal beans that are just there to wire internal classes to each other this is not possible in DS. So DS is not a complete dependency injection framework and lacks many of the features blueprint offers in this regard.
  4. DS does not support extension namespaces. Aries blueprint has support for quite a few other Apache projects using extension namespaces. Examples are: Aries jpa, Aries transactions, Aries authz, CXF, Camel. So using these technologies in DS can be a bit more difficult.
  5. DS does not support interceptors. In blueprint an extension namespace can introduce an interceptor that is always called before or after a bean. This is for example used for security as well as transaction handling. For this reason DS traditionally did not support JPA very well as normal usage mandates interceptors. See below how JPA can work with DS.

So if DS is a good match for your project depends on how much you need the service dynamics and how well you can integrate DS with other projects.


The JPA spec is based on JEE which has a very special thread and interceptor model. In JEE you use session beans with a container managed EntityManager
to manipulate JPA entities. It looks like this:

@Stateless
class TaskServiceImpl implements TaskService {

  @PersistenceContext(unitName = "tasklist")
  private EntityManager em;

  public Task getTask(Integer id) {
    return em.find(Task.class, id);
  }
}

In JEE calling getTask will by default participate in or start a transaction. If the method call succeeds the transaction will be committed, if there is an exception it will be rolled back.
The calls go to a pool of TaskServiceImpl instances. Each of these instances will only be used by one thread at a time. As a result of this the EntityManager interface is not thread safe!

So the advantage of this model is that it looks simple and allows pretty small code. On the other hand it is a bit difficult to test such code outside a container as you have to mimic the way the container works with this class. It is also difficult to access e.g. em in a test as it is private and there is no setter.

Blueprint supports a coding style similar to the JEE example and implements this using a special jpa and tx namespace and
interceptors that handle the transaction / em management.

DS and JPA

In DS each component is a singleton. So there is only one instance of it that needs to cope with multi threaded access. So working with the plain JEE concepts for JPA is not possible in DS.

Of course it would be possible to inject an EntityManagerFactory and handle the EntityManager lifecycle and transactions by hand but this results in quite verbose and error prone code.

Aries JPA 2.0.0 is the first version that offers special support for frameworks like DS that do not offer interceptors. The solution here is the concept of a JPATemplate together with support for closures in Java 8. To see how the code looks like peek below at chapter persistence.

Instead of the EntityManager we inject a thread safe JpaTemplate into our code. We need to put the jpa code inside a closure and run it with jpa.txExpr() or jpa.tx(). The JpaTemplate will then guarantee the same environment as JEE inside the closure. As each closure invocation runs with its own
EntityManager there is one em per thread. The code will also participate in or create a transaction and the transaction commit/rollback also works like in JEE.

So this requires a little more code but the advantage is that there is no need for a special framework integration.
The code can also be tested much easier. See TaskServiceImplTest in the example.
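The closure pattern itself can be illustrated with plain Java. The sketch below only mimics the idea of txExpr with made-up classes (FakeEm and SimpleTemplate are not the Aries API): the template owns the resource lifecycle, the caller merely supplies a lambda.

```java
import java.util.function.Function;

class FakeEm {
    boolean closed;
    String find(int id) { return "Task" + id; }
}

class SimpleTemplate {
    // mimics JpaTemplate.txExpr: the resource lifecycle stays inside the template
    <R> R txExpr(Function<FakeEm, R> code) {
        FakeEm em = new FakeEm(); // a fresh "EntityManager" per invocation
        try {
            return code.apply(em); // user code runs "inside the transaction"
        } finally {
            em.closed = true;      // always closed at the transaction boundary
        }
    }
}

public class TemplateDemo {
    public static void main(String[] args) {
        SimpleTemplate jpa = new SimpleTemplate();
        String title = jpa.txExpr(em -> em.find(1));
        System.out.println(title);
    }
}
```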


  • features
  • model
  • persistence
  • ui


Features

Defines the karaf features to install the example as well as all necessary dependencies.


Model

This module defines the Task JPA entity, a TaskService interface and the persistence.xml. For a detailed description of the model see the tasklist-blueprint example. The model is exactly the same here.


Persistence

@Component
public class TaskServiceImpl implements TaskService {

    private JpaTemplate jpa;

    public Task getTask(Integer id) {
        return jpa.txExpr(em -> em.find(Task.class, id));
    }

    @Reference(target = "(osgi.unit.name=tasklist)")
    public void setJpa(JpaTemplate jpa) {
        this.jpa = jpa;
    }

    // Other methods omitted
}

We define that we need an OSGi service with interface JpaTemplate and a property "osgi.unit.name" with the value "tasklist".

@Component(immediate = true)
public class InitHelper {
    Logger LOG = LoggerFactory.getLogger(InitHelper.class);

    @Reference
    TaskService taskService;

    @Activate
    public void addDemoTasks() {
        try {
            Task task = new Task(1, "Just a sample task", "Some more info");
            taskService.addTask(task);
        } catch (Exception e) {
            LOG.warn(e.getMessage(), e);
        }
    }
}

The class InitHelper creates and persists a first task so the UI has something to show. It is also an example how business code that works with the task service can look like.
@Reference TaskService taskService injects the TaskService into the field taskService.
@Activate makes sure that addDemoTasks() is called after injection of this component.

Another interesting point in the module is the test TaskServiceImplTest. It runs outside OSGi and uses a special
persistence.xml for testing to create the EntityManagerFactory. It also shows how to instantiate a ResourceLocalJpaTemplate
to avoid having to install a JTA transaction manager for the test. The test code shows that indeed the TaskServiceImpl can
be used as plain java code without any special tricks.


UI

The tasklist-ui module uses the TaskService as an OSGi service and publishes a Servlet as an OSGi service. The Pax-web whiteboard bundle will then pick up the exported servlet and publish it using the HttpService so it is available over HTTP.

@Component(immediate = true,
    service = { Servlet.class },
    property = { "alias:String=/tasklist" }
)
public class TaskListServlet extends HttpServlet {

    private TaskService taskService;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Actual code omitted
    }

    @Reference
    public void setTaskService(TaskService taskService) {
        this.taskService = taskService;
    }
}

The above snippet shows how to specify which interface to use when exporting a service as well as how to define service properties.

The TaskListServlet is exported with the interface javax.servlet.Servlet with the service property alias="/tasklist".
So it is available on the url http://localhost:8181/tasklist.


Make sure you use JDK 8 and run:

mvn clean install


Make sure you use JDK 8.
Download and extract Karaf 4.0.0.
Start karaf and execute the commands below

Create DataSource config and Install Example
cat | tac -f etc/org.ops4j.datasource-tasklist.cfg
feature:install example-tasklist-ds-persistence example-tasklist-ds-ui

Validate Installation

First we check that the JpaTemplate service is present for our persistence unit.

service:list JpaTemplate 

[org.apache.aries.jpa.template.JpaTemplate]
-------------------------------------------
 osgi.unit.name = tasklist
 transaction.type = JTA
 service.id = 164
 service.bundleid = 57
 service.scope = singleton
Provided by : 
 tasklist-model (57)
Used by: 
 tasklist-persistence (58)

Aries JPA should have created this service for us from our model bundle. If this did not work then check the log for messages from Aries JPA. It should print what it tried and what it is waiting for. You can also check for the presence of an EntityManagerFactory and EmSupplier service which are used by JpaTemplate.

A likely problem would be that the DataSource is missing so lets also check it:

service:list DataSource

[javax.sql.DataSource]
----------------------
 dataSourceName = tasklist
 felix.fileinstall.filename = file:/home/cschneider/java/apache-karaf-4.0.0/etc/org.ops4j.datasource-tasklist.cfg
 osgi.jdbc.driver.name = H2-pool-xa
 osgi.jndi.service.name = tasklist
 service.factoryPid = org.ops4j.datasource
 service.pid = org.ops4j.datasource.cdc87e75-f024-4b8c-a318-687ff83257cf
 url = jdbc:h2:mem:test
 service.id = 156
 service.bundleid = 113
 service.scope = singleton
Provided by : 
 OPS4J Pax JDBC Config (113)
Used by: 
 Apache Aries JPA container (62)

This is how it should look. Pax-jdbc-config created the DataSource from the configuration in "etc/org.ops4j.datasource-tasklist.cfg", using a DataSourceFactory with the property "osgi.jdbc.driver.name=H2-pool-xa". So the resulting DataSource should be pooled and fully ready for XA transactions.

Next we check that the DS components started:

scr:list


ID | State  | Component Name
1  | ACTIVE |
2  | ACTIVE |
3  | ACTIVE |

If any of the components is not active you can inspect it in detail with scr:details <component name>:


Component Details
  Name                :
  State               : ACTIVE
  Properties          :
  Reference           : Jpa
    State             : satisfied
    Multiple          : single
    Optional          : mandatory
    Policy            : static
    Service Reference : Bound Service ID 164


Open the url http://localhost:8181/tasklist in your browser.

You should see a list of one task. A new task can be added by opening the url below:

 http://localhost:8181/tasklist?add&taskId=2&title=Another Task


You may already know the old CXF LoggingFeature (org.apache.cxf.feature.LoggingFeature). You added it to a JAXWS endpoint to enable logging for a CXF endpoint at compile time.

While this already helped a lot it was not really enterprise ready. The logging could not be controlled much at runtime and contained too few details. This all changes with the new CXF logging support and the upcoming Karaf Decanter.

Logging feature in CXF 3.1.0

In CXF 3.1 this code was moved into a separate module and gathered some new features.

  • Auto logging for existing CXF endpoints
  • Uses slf4j MDC to log meta data separately
  • Adds meta data for Rest calls
  • Adds MD5 message id and exchange id for correlation
  • Simple interface for writing your own appenders
  • Karaf decanter support to log into elastic search

Manual Usage

CXF LoggingFeature
    <jaxws:endpoint ...>
       <jaxws:features>
           <bean class="org.apache.cxf.ext.logging.LoggingFeature"/>
       </jaxws:features>
    </jaxws:endpoint>

Auto logging for existing CXF endpoints in Apache Karaf

Simply install and enable the new logging feature:

Logging feature in karaf
feature:repo-add cxf 3.1.0
feature:install cxf-features-logging
config:property-set -p org.apache.cxf.features.logging enabled true

Then install CXF endpoints like always. For example install the PersonService from the Karaf Tutorial Part 4 - CXF Services in OSGi. The client and endpoint in the example are not equipped with the LoggingFeature. Still the new logging feature will enhance the clients and endpoints and log all SOAP and Rest calls using slf4j. So the logging data will be processed by pax logging and by default end up in your karaf log.

A log entry looks like this:

Sample Log entry
2015-06-08 16:35:54,068 | INFO  | qtp1189348109-73 | REQ_IN                           | 90 - org.apache.cxf.cxf-rt-features-logging - 3.1.0 | <soap:Envelope xmlns:soap=""><soap:Body><ns2:addPerson xmlns:ns2="" xmlns:ns3=""><arg0><id>3</id><name>Test2</name><url></url></arg0></ns2:addPerson></soap:Body></soap:Envelope>

This does not look very informative. You only see that it is an incoming request (REQ_IN) and the SOAP message in the log message. The logging feature provides a lot more information though. You just need to configure the pax logging config to show it.

Slf4j MDC values for meta data

This is the raw logging information you get for a SOAP call:

MDC.content-type = text/xml; charset=UTF-8
MDC.headers = {content-type=text/xml; charset=UTF-8, connection=keep-alive, Host=localhost:8181, Content-Length=251, SOAPAction="", User-Agent=Apache CXF 3.1.0, Accept=*/*, Pragma=no-cache, Cache-Control=no-cache}
message = <soap:Envelope xmlns:soap=""><soap:Body><ns2:getAll xmlns:ns2="" xmlns:ns3=""/></soap:Body></soap:Envelope>

Some things to note:

  • The logger name is <service namespace>.<ServiceName>.<type>; Karaf by default cuts it down to just the type.
  • A lot of the details are in the MDC values

You need to change your pax logging config to make these visible.

You can use the logger name to fine tune which services you want to log this way. For example set the log level to WARN for noisy services to avoid that they are logged or log some services to another file.

Message id and exchange id

The messageId allows you to uniquely identify messages even if you collect them from several servers. It is also transported over the wire so you can correlate a request sent on one machine with the request received on another machine.

The exchangeId will be the same for an incoming request and the response sent out, or on the other side for an outgoing request and the response for it. This allows you to correlate requests and responses and thus follow the conversations.
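As an illustration of an MD5 based id, plain JDK code is enough (a sketch only; the exact id format CXF uses may differ):

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class MessageIdDemo {
    // derive a stable 32 char hex id from the message content
    static String md5Id(String message) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] digest = md.digest(message.getBytes(StandardCharsets.UTF_8));
        return String.format("%032x", new BigInteger(1, digest)); // zero padded hex
    }

    public static void main(String[] args) throws Exception {
        String id = md5Id("<soap:Envelope>...</soap:Envelope>");
        System.out.println(id.length());
        // the same content always yields the same id, enabling correlation
        System.out.println(id.equals(md5Id("<soap:Envelope>...</soap:Envelope>")));
    }
}
```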

Simple interface to write your own appenders

Write your own LogSender and set it on the LoggingFeature to do custom logging. You have access to all meta data from the class LogEvent.

So for example you could write your logs to one file per message or to JMS.

Karaf decanter support to write into elastic search

Many people use elastic search for their logging. Fortunately you do not have to write a special LogSender for this purpose. The standard CXF logging feature will already work.

It works like this:

  • CXF sends the messages as slf4j events which are processed by pax logging
  • Karaf Decanter LogCollector attaches to pax logging and sends all log events into the karaf message bus (EventAdmin topics)
  • Karaf Decanter ElasticSearchAppender sends the log events to a configurable elastic search instance

As Decanter also provides features for a local elastic search and kibana instance you are ready to go in just minutes.

Installing Decanter for CXF Logging
feature:repo-add mvn:org.apache.karaf.decanter/apache-karaf-decanter/3.0.0-SNAPSHOT/xml/features
feature:install decanter-collector-log decanter-appender-elasticsearch elasticsearch kibana

After that open a browser at http://localhost:8181/kibana. When decanter is released kibana will be fully set up. At the moment you have to add the logstash dashboard and change the index name to [karaf-]YYYY.MM.DD.

Then you should see your cxf messages in the Kibana dashboard.

Kibana easily allows you to filter for specific services and correlate requests and responses.

This is just a preview of decanter. I will do a more detailed post when the first release is out.


Shows how to create a small application with a model, persistence layer and UI just with CDI annotations running as blueprint.


Writing blueprint xml is quite verbose and large blueprint xmls are difficult to keep in sync with code changes and especially refactorings. So many people prefer to do most declarations using annotations. Ideally these annotations should be standardized so it is clearly defined what they do. The aries blueprint-maven-plugin allows to configure blueprint using annotations. It scans one or more paths for annotated classes and creates a blueprint.xml in target/generated-resources. See the aries documentation of the blueprint-maven-plugin.

Example tasklist-blueprint-cdi

This example shows how to create a small application with a model, persistence layer and UI completely without handwritten blueprint xml.

You can find the full source code on github Karaf-Tutorial/tasklist-cdi-blueprint


  • features
  • model
  • persistence
  • ui

Creating the bundles

The bundles are created using the maven bundle plugin. The plugin is only used in the parent project and uses <_include>osgi.bnd</_include> to extract the OSGi configs into a separate file. So each bundle project just needs a osgi.bnd file which is empty by default and can contain additional configs.

As bnd figures out most settings automatically the osgi.bnd files are typically very small.


Features

Defines the karaf features to install the example as well as all necessary dependencies.


Model

The model project defines Task as a jpa entity and the Service TaskService as an interface. As model does not do any dependency injection the blueprint-maven-plugin is not involved here.

Task JPA Entity
@Entity
public class Task {
    @Id
    Integer id;
    String title;
    String description;
    Date dueDate;
    boolean finished;
    // Getters and setters omitted
}
TaskService (CRUD operations for Tasks)
public interface TaskService {
    Task getTask(Integer id);
    void addTask(Task task);
    void updateTask(Task task);
    void deleteTask(Integer id);
    Collection<Task> getTasks();
}
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence">

    <persistence-unit name="tasklist" transaction-type="JTA">
        <jta-data-source>osgi:service/tasklist</jta-data-source>
        <properties>
            <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
            <property name="hibernate.hbm2ddl.auto" value="create-drop"/>
        </properties>
    </persistence-unit>

</persistence>

Persistence.xml defines the persistence unit name as "tasklist" and to use JTA transactions. The jta-data-source points to the jndi name of the DataSource service named "tasklist". So apart from the JTA DataSource name it is a normal hibernate 4.3 style persistence definition with automatic schema creation.

One other important thing is the configuration for the maven-bundle-plugin.

Configurations for maven bundle plugin
<Meta-Persistence>META-INF/persistence.xml</Meta-Persistence>
<Import-Package>*, org.hibernate.proxy, javassist.util.proxy</Import-Package>

The Meta-Persistence points to the persistence.xml and is the trigger for aries jpa to create an EntityManagerFactory for this bundle.
The Import-Package configurations import two packages that are needed by the runtime enhancement done by hibernate. As this enhancement is not known at compile time we need to give the maven bundle plugin these hints.


Persistence

The tasklist-cdi-persistence bundle is the first module in the example to use the blueprint-maven-plugin. In the pom we set the scanpath to the package of our implementation classes. So all classes in this package and sub packages are scanned.

In the pom we need a special configuration for the maven bundle plugin:
<Import-Package>!javax.transaction, *, javax.transaction;version="[1.1,2)"</Import-Package>
In the dependencies we use transaction API 1.2 as it is the first spec version to include the @Transactional annotation. At runtime though we do not need this annotation and karaf only provides transaction API version 1.1. So we tweak the import to accept the version karaf offers. As soon as transaction API 1.2 is available for karaf this line will no longer be necessary.

@Singleton
@Transactional
@OsgiServiceProvider(classes = {TaskService.class})
public class TaskServiceImpl implements TaskService {
    @PersistenceContext(unitName = "tasklist")
    EntityManager em;

    public Task getTask(Integer id) {
        return em.find(Task.class, id);
    }

    public void addTask(Task task) {
        em.persist(task);
    }

    // Other methods omitted
}

TaskServiceImpl uses quite a lot of annotations. The class is marked as a blueprint bean using @Singleton. It is also marked to be exported as an OSGi service with the interface TaskService.

The class is marked as @Transactional. So all methods are executed in a jta transaction of type Required. This means that if there is no transaction it will be created. If there is a transaction the method will take part in it. At the end of the transaction boundary the transaction is either committed or in case of an exception it is rolled back.

A managed EntityManager for the persistence unit "tasklist" is injected into the field em. It transparently provides one EntityManager per thread which is created on demand and closed at the end of the transaction boundary.

@Singleton
public class InitHelper {
    Logger LOG = LoggerFactory.getLogger(InitHelper.class);

    @Inject
    TaskService taskService;

    @PostConstruct
    public void addDemoTasks() {
        try {
            Task task = new Task(1, "Just a sample task", "Some more info");
            taskService.addTask(task);
        } catch (Exception e) {
            LOG.warn(e.getMessage(), e);
        }
    }
}

The class InitHelper is not strictly necessary. It simply creates and persists a first task so the UI has something to show. Again the @Singleton is necessary to mark the class for creation as a blueprint bean.
@Inject TaskService taskService injects the first bean of type TaskService it finds in the blueprint context into the field taskService. In our case this is the implementation above.
@PostConstruct makes sure that addDemoTasks() is called after injection of all fields of this bean.

Another interesting thing in the module is the test TaskServiceImplTest. It runs outside OSGi and uses a special persistence.xml for testing to create the EntityManagerFactory without a jndi DataSource, which would be difficult to supply. It also uses RESOURCE_LOCAL transactions so we do not need to set up a transaction manager. The test injects a plain EntityManager into the TaskServiceImpl class, so we have to begin and commit the transaction manually. This shows that you can test the JPA code with plain java, which results in very simple and fast tests.
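A test persistence unit along these lines might look like the following sketch (the JDBC connection properties and the in-memory H2 URL are assumptions, not taken from the example):

```xml
<persistence-unit name="tasklist" transaction-type="RESOURCE_LOCAL">
    <properties>
        <!-- Plain JDBC connection instead of a jndi DataSource (values assumed) -->
        <property name="javax.persistence.jdbc.driver" value="org.h2.Driver"/>
        <property name="javax.persistence.jdbc.url" value="jdbc:h2:mem:test"/>
        <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
        <property name="hibernate.hbm2ddl.auto" value="create-drop"/>
    </properties>
</persistence-unit>
```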

Servlet UI

The tasklist-ui module uses the TaskService as an OSGi service and publishes a Servlet as an OSGi service. The Pax-web whiteboard bundle will then pick up the exported servlet and publish it using the HttpService so it is available on http.
In the pom this module needs the blueprint-maven-plugin with a suitable scanPath.

@OsgiServiceProvider(classes = Servlet.class)
@Properties({@Property(name = "alias", value = "/tasklist")})
public class TaskListServlet extends HttpServlet {
    @Inject @OsgiService
    TaskService taskService;

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Actual code omitted
    }
}

The TaskListServlet is exported with the interface javax.servlet.Servlet with the service property alias="/tasklist". So it is available on the url http://localhost:8181/tasklist.

@Inject @OsgiService TaskService taskService creates a blueprint reference element to import an OSGi service with the interface TaskService. It then injects this service into the taskService field of the above class.
If there are several services of this interface the filter property can be used to select one of them.
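The blueprint that the plugin generates for such an injection roughly corresponds to a reference element like the following sketch (the interface name is taken from this tutorial series; the filter value is a made-up example):

```xml
<reference id="taskService" interface="net.lr.tasklist.model.TaskService"
    filter="(persistence=jpa)" />
```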

Angular JS / Bootstrap UI

The Angular UI is just a bundle that statically exposes html and js resources. It uses Angular and Bootstrap to create a nice looking and functional interface. The tasks are read and manipulated using the Tasklist REST service. As the code runs completely on the client there is not much to say about it from the blueprint point of view.

The example uses $http to do the rest requests. This is because I am not yet familiar enough with the $resource variant which would better suit the rest concepts.

From the OSGi point of view the Angular UI bundle simply sets the Header Web-ContextPath: /tasklist and provides the html and js in the src/main/resources folder.
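Setting this header with the maven-bundle-plugin could look like the following sketch:

```xml
<instructions>
    <!-- Publish the static resources of this bundle under /tasklist -->
    <Web-ContextPath>/tasklist</Web-ContextPath>
</instructions>
```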


mvn clean install

Installation and test

See Readme.txt on github.


Some time ago I did some CXF performance measurements. See How fast is CXF ? - Measuring CXF performance on http, https and jms.

For cxf 3.0.0 I made some massive changes to the JMS transport. So I thought it was a good time to compare cxf 2.x and 3 in JMS performance. My goal was to at least reach the original performance. As my test system is different now I am also measuring the cxf 2.x performance again to have a good comparison.

Test System

Dell Precision with Intel Core i7, 16 GB RAM, 256 GB SSD running Ubuntu Linux 13.10.

Test Setup

I am using a new version of my performance-tests project on github.

The test runs on one machine using one activemq Server, one test server and one test client.

The test calls the example cxf CustomerService.

The following call types are supported:

  • oneway: asynchronous one way call. Sends one soap message to the server. (The code sample for this call type was not recoverable.)

  • request reply (synchronous): sends one soap message to the server and waits for the reply.

    List<Customer> customers = customerService.getCustomersByName("test2");

  • request reply (asynchronous): sends one soap message to the server and returns without waiting. In this test we wait directly after the call for simplicity.

    Future<GetCustomersByNameResponse> resp = customerService.getCustomersByNameAsync("test2");
    GetCustomersByNameResponse res1 = resp.get();

The requests above are sent using an executor with a fixed number of threads.

For the test you can specify the total number of messages, the number of threads and the call type.
First the configured number of requests is sent for warmup and then again for the measured test.
To run the test with cxf 3.0.0-SNAPSHOT you have to build cxf from source.
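The sending loop described above can be sketched in plain Java (class name, message count and thread count are illustrative, not the actual test code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class LoadSketch {
    public static void main(String[] args) throws InterruptedException {
        int messages = 100; // corresponds to -Dmessages in the real test
        int threads = 20;   // corresponds to -Dthreads in the real test
        AtomicInteger sent = new AtomicInteger();
        // Fixed thread pool: all requests are distributed over these threads
        ExecutorService executor = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < messages; i++) {
            // Stand-in for one service call (oneway or request reply)
            executor.execute(sent::incrementAndGet);
        }
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("sent=" + sent.get());
    }
}
```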

Test execution

1. Run a standalone activemq 5.9.0 server with the activemq.xml from the github sources above.

bin/activemq console

2. Start the jms server in a new console from the project source using:

mvn -Pserver test

3. Start the jms client using:

mvn -Pclient test -Dmessages=40000 -Dthreads=20 -DcallType=oneway

Test results

The test is executed with several combinations of the parameters. Using the pom property cxf.version we also switch between cxf 2.7.10 and cxf 3.0.0-SNAPSHOT.

CXF 2.7.10

(Result table not recoverable: it listed the throughput per call type and number of threads.)

CXF 3.0.0-SNAPSHOT

(Result table not recoverable: it listed the throughput per call type and number of threads.)
The first interesting fact here is that one way messaging does not profit from the number of threads. One thread already seems to achieve the same performance as 40 threads. This is quite intuitive as activemq needs to synchronize the calls on the one thread holding the jms connection. On the other hand using more processes also does not seem to improve the performance, so we seem to be quite at the limit of activemq here, which is good.

For request reply the performance seems to scale with the number of threads. This can be explained as we have to wait for the response and can use this time to send some more requests.

One really astonishing thing here is that CXF 2.7.10 seems to be really bad when using synchronous request reply. This is because it uses consumer.receive in this case while it uses a jms message listener for async calls. So the jms message listener seems to perform much better than the consumer.receive case. For CXF 2.7.10 this means we can speed up our calls if we use the asynchronous interface even if it is more inconvenient.

The most important observation here is that CXF 3 performs a lot better for the synchronous request reply case. It is as fast as for the asynchronous case. The reason is that we now also use a message listener for synchronous calls as long as our correlation id is based on the conduit id prefix. This is the default so this case is vastly improved. CXF 3 is up to 5 times faster than CXF 2.7.10.

There is still one downside. If you use the message id as correlation id or set a user correlation id on the cxf message then cxf 3 will switch back to consumer.receive and will be as slow as CXF 2 again.

Apache karaf is an open source OSGi server developed by the Apache foundation. It provides very convenient management functionality on top of existing OSGi frameworks. Karaf is used in several open source and commercial solutions.

As so often, convenience and security do not go well together. In the case of karaf there is one known security hole in default installations that was introduced to make the initial experience with karaf very convenient. Karaf by default starts an ssh server. It also delivers a bin/client command that is mainly meant to connect to the local karaf server without a password.

Is your karaf server vulnerable?

Some simple steps to check if your karaf installation is open:

  • Check the "etc/" for the attribute sshPort and note this port number. By default it is 8101.
  • Do "ssh -p 8101 karaf@localhost". As expected it will ask for a password. Not changing the default password is also dangerous, but that risk is quite obvious.
  • Now just do "bin/client -a 8101". You will get a shell without supplying a password. If this works your server is vulnerable.

How does it work

The client command has a built in ssh private key which is used when connecting to karaf. There is a config "etc/" in karaf which defines the public keys that are allowed to connect to karaf.

Why is this dangerous?

The private key inside the client command is fixed and publicly available. See karaf.key. As the mechanism also works with remote connections "bin/client -a 8101 -h hostname" this means that anyone with access to your server ip can remotely control your karaf server. As the karaf shell also allows to execute external programs (exec command) this even allows further access to your machine.

How to secure your server ?

Simply remove the public key of the karaf user in the "etc/". Unfortunately this will stop the bin/client command from working.

Also make sure you change the password of the karaf user in "etc/".

Nicely timed as a Christmas present, Apache Karaf 3.0.0 was released on the 24th of December. As a user of karaf 2.x you might ask yourself why you should switch to the new major version. Here are 10 reasons why the switch is worth the effort.

External dependencies are cached locally now

One of the coolest features of karaf is that it can load features and bundles from a maven repository. In karaf 2.x the drawback was that external dependencies that are not already in the system dir or the local maven repo were always loaded from the external repo. Karaf 3 now uses the real maven artifact resolution, so it automatically caches downloaded artifacts in the local maven repo and the artifacts only have to be loaded the first time.

Delay shell start till all bundles are up and running

A typical problem in karaf 2.x, and also in karaf 3 with default settings, is that the shell comes up before all bundles are started. So if you enter a command you might get an error that the command is unknown - simply because the respective bundle is not yet loaded. In karaf 3 you can set the property "karaf.delay.console=true". Karaf will then show a progress bar on startup and only start the console when all bundles are up and running. If you are in a hurry you can still press enter to start the shell earlier.

Create kar archives from existing features

If you need some features for offline deployment then kar files are a nice alternative to setting up a maven repo or copying everything to the system dir. Most features are not available as kar files though. In karaf 3 the kar:create command allows to create a kar file from any installed feature repository. Kar files can now also be defined as pure repositories, so they can be installed without installing all contained features.


feature:repo-add camel 2.12.2
kar:create camel-2.12.2

A kar file with all camel features will be created below data/kar. You can also select which features to include.

More consistent commands

In karaf 2.x the command naming was not very consistent. For karaf 3 we have the common scheme of <subject>:<command> or <subject>:<secondary-subject>-<command>. For example adding feature repos now is:

feature:repo-add <url or short name> ?<version>

Instead of features:chooseurl and features:addurl.

The various dev commands are now moved to the subjects they affect. Like bundle:watch instead of dev:watch or system:property instead of dev:system-property.

JDBC commands

Karaf 3 allows to directly interact with jdbc databases from the shell. Examples are creating a datasource, executing a sql command and showing the results of a sql query. For more details see the blog article from JB: New enterprise JDBC feature.

JMS commands

Similar to jdbc karaf 3 now contains commands for jms interactions from the shell. You can create connection factories, send and consume messages. See blog article from JB : new enterprise jms feature.

Role based access control for commands and services

In karaf 2.x every user with shell access can use every command, and OSGi services are not protected at all. Karaf 3 contains role based access control for commands and services. So for example you can define a group of users that can only list bundles and do other non admin tasks by simply changing some configuration files. Similarly you can protect any OSGi service so it can only be called from a process with a successful jaas login and the correct roles. More details about this feature can be found in the karaf documentation.

Diagnostics for blueprint and spring dm

In karaf 2.x it was difficult to diagnose problems with bundles using blueprint and spring dm. Karaf 3 now has the simple bundle:diag command that lists diagnostics for all bundles that did not start. For example you can see that a blueprint bundle waits for a namespace or that a blueprint file has a syntax error. Simply try this the next time your bundles do not work as expected.

Features for persistence frameworks

Karaf 3 now has features for openjpa and hibernate. So along with the already present jpa and jta features this makes it easy to install everything you need to do jpa based persistence.

Features for CDI and EJB

The cdi feature installs pax cdi. This allows to use the full set of CDI annotations including portable extensions in Apache Karaf. Openejb can also be installed to get full ejb support on Apache Karaf.

This only lists some of the most notable features of karaf 3. There is a lot more to discover. Take your time and dig around the features and commands.

In this talk from WJAX 2013 I show best practices for OSGi development in a practical example based around an online voting application.

The UI allows to vote on a topic and shows the existing votes in a diagram. It is done in Javascript and HTML using jQuery and google graph. Additionally votes can be sent using twitter, irc and karaf commands. The image below shows how to vote for the topic camel using your twitter status. 

The architecture of the example follows the typical separation of model, service layer and front end.

Architecture voting

In the talk I explain the difficulties people typically face with OSGi and how to solve them using karaf, maven bundle plugin and blueprint.

By default OSGi services are only visible and accessible in the OSGi container where they are published. Distributed OSGi allows to define services in one container and use them in some other, even over machine boundaries. For this tutorial we use the DOSGi subproject of CXF, which is the reference implementation of the OSGi Remote Service Admin specification (chapter 122 of the OSGi 4.2 Enterprise Specification).

Example on github

Introducing the example

Following the hands on nature of these tutorials we start with an example that can be tried in a few minutes and explain the details later.

Our example is again the tasklist example from Part 1 of this tutorial. The only difference is that we now deploy the model and the persistence service on container A, the model and the UI on container B, and install the dosgi runtime on both containers.


As DOSGi should not be active for all services on a system, the spec defines that the service property "osgi.remote.interfaces" triggers whether DOSGi should process the service. It expects the interface names that this service should export remotely. Setting the property to "*" means that all interfaces the service implements should be exported. The tasklist persistence service already sets the property, so the service is exported with defaults.
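In blueprint, publishing a service with such a property could look like the following sketch (the bean id is an assumption; the interface name is the one used in this tutorial):

```xml
<service ref="taskServiceImpl" interface="net.lr.tasklist.model.TaskService">
    <service-properties>
        <!-- "*" exports the service with all interfaces it implements -->
        <entry key="osgi.remote.interfaces" value="*"/>
    </service-properties>
</service>
```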

Installing the service

To keep things simple we will install container A and B on the same system.

Install Service
config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper zookeeper.port 2181
config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper.server clientPort 2181
feature:repo-add cxf-dosgi 1.7.0
feature:install cxf-dosgi-discovery-distributed cxf-dosgi-zookeeper-server
feature:install example-tasklist-persistence

After these commands the tasklist persistence service should be running and be published on zookeeper.

You can check the wsdl of the exported service at http://localhost:8181/cxf/net/lr/tasklist/model/TaskService?wsdl. By starting the zookeeper client from a zookeeper distro you can optionally check that there is a node for the service below the osgi path.

Installing the UI

  • Unpack into folder container_b
  • Start bin/karaf


Install Client
config:property-set -p org.ops4j.pax.web org.osgi.service.http.port 8182
config:property-set -p org.apache.cxf.dosgi.discovery.zookeeper zookeeper.port 2181
feature:repo-add cxf-dosgi 1.7.0
feature:install cxf-dosgi-discovery-distributed
feature:install example-tasklist-ui


The tasklist client ui should be in status Active/Created and the servlet should be available on http://localhost:8182/tasklist. If the ui bundle stays in status graceperiod then DOSGi did not provide a local proxy for the persistence service.

How does it work


The Remote Service Admin spec defines an extension of the OSGi service model. Using special properties when publishing OSGi services you can tell the DOSGi runtime to export a service for remote consumption. The CXF DOSGi runtime listens for all services deployed on the local container. It only processes services that have the "osgi.remote.interfaces" property. If the property is found then the service is exported, either with the named interfaces or with all interfaces it implements. The way the export works can be fine tuned using the CXF DOSGi configuration options.

By default the service will be exported using the CXF servlet transport. The URL of the service is derived from the interface name. The servlet prefix, hostname and port number default to the Karaf defaults of "cxf", the ip address of the host and the port 8181. All these options can be defined using a config admin configuration (See the configuration options). By default the service uses the CXF Simple Frontend and the Aegis Databinding. If the service interface is annotated with the JAX-WS @WebService annotation then the default is JAX-WS frontend and JAXB databinding.

The service information is then also propagated using the DOSGi discovery. In the example we use the Zookeeper discovery implementation. So the service metadata is written to a zookeeper server.

The container_b will monitor the local container for needed services. It will then check if a needed service is available on the discovery impl (in our case the zookeeper server). For each service it finds it will create a local proxy that acts as an OSGi service implementing the requested interface. Incoming requests are then serialized and sent to the remote service endpoint.

So together this allows for almost transparent service calls. The developer only needs to use the OSGi service model and can still communicate over container boundaries.

On thursday I had a talk about Apache Camel at W-JAX in Munich. Like on the last conferences there was a lot of interest in Camel and the room was really full. You can find the slides "Integration ganz einfach mit Apache Camel" here and the sources for the examples on github.

On Friday I joined the Eclipse 4 RCP workshop from Kai Tödter. Learned a lot about the new Eclipse. At last Eclipse RCP programming is becoming easier.

I just did my ApacheCon talk about OSGi best practices. It was the last slot but the room was still almost full. In general the OSGi track had a lot of listeners and there were a lot of talks that involved Apache Karaf. So I think that is a nice sign for greater adoption of OSGi and Karaf.

You can find the Slides at google docs.

The demo application can be found inside my Karaf tutorial code at github.

Practical Camel example that polls from a database table and sends the contents as XML to a jms queue. The route uses a JTA transaction to synchronize the DB and JMS transactions. An error case shows how you can handle problems.

Route and Overview

.bean(new ExceptionDecider())

The route starts with a jpa endpoint. It is configured with the fully qualified name of a JPA @Entity. From this entity camel knows the table to poll and how to read and remove the row. The jpa endpoint polls the table and creates a Person object for each row it finds. Then it calls the next step in the route with the Person object as body. The jpa component also needs to be set up separately as it needs an EntityManagerFactory.

The onException clause makes the route do up to 3 retries with backoff time increasing by factor 2 each time. If it still fails the message is passed to a file in the error directory.
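To make the backoff behaviour concrete, here is a small plain-Java sketch of how the redelivery delay grows. Only the 3 retries and the factor 2 come from the route; the initial delay of 1000 ms is an assumption:

```java
public class BackoffSketch {
    // Delay before the given retry attempt, starting at initialMs and
    // multiplying by the backoff factor for each further attempt
    static long delay(long initialMs, double multiplier, int attempt) {
        return (long) (initialMs * Math.pow(multiplier, attempt - 1));
    }

    public static void main(String[] args) {
        for (int attempt = 1; attempt <= 3; attempt++) {
            System.out.println("retry " + attempt + ": " + delay(1000, 2.0, attempt) + " ms");
        }
    }
}
```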

The next step transacted() marks the route as transactional. It requires that a TransactedPolicy is set up in the camel context. It then makes sure all steps in the route have the chance to participate in a transaction. So if an error occurs all actions can be rolled back; in case of success all are committed together.

The marshal(df) step converts the Person object to xml using JAXB. It references a dataformat df that sets up the JAXBContext. For brevity this setup is not shown here.

The ExceptionDecider bean allows to trigger an exception if the name of the person is "error". This allows us to test the error handling later.

The last step to("jms:person") sends the xml representation of person to a jms queue. It requires that a JmsComponent named jms is setup in the camel context.
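Put together, the route described above might look like this sketch in Camel's Java DSL (the entity package and the error directory are assumptions; only the step order, retry count and backoff factor come from the description):

```java
onException(Exception.class)
    .maximumRedeliveries(3)
    .useExponentialBackOff().backOffMultiplier(2)
    .handled(true)
    .to("file:error"); // failed messages end up in the error directory

from("jpa://net.lr.jpa2jms.Person") // fully qualified entity name (package assumed)
    .transacted("PROPAGATION_REQUIRED")
    .marshal(df)
    .bean(new ExceptionDecider())
    .to("jms:person");
```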


This second route simply listens on the person queue, reads and displays the content. In a production system this part would typically be in another module.

Person as JPA Entity JAXB class

The Person class acts as a JPA entity and as a JAXB annotated class. This allows us to use it in the camel-jpa component as well as during marshalling. Keep in mind though that this would rather be a bad practice in production as it ties the DB model and the format of the JMS message together. So for real integrations it would be better to have separate beans for JPA and JAXB and do a manual conversion between them.

@Entity
@XmlRootElement
public class Person {
    private String name;
    private String twitterName;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getTwitterName() { return twitterName; }
    public void setTwitterName(String twitterName) { this.twitterName = twitterName; }
}

DataSource and ConnectionFactory setup

We use an XADataSource for Derby. As the default ConnectionFactory provided by ActiveMQ in Karaf is not XA ready we define the broker and ConnectionFactory by hand. Together with the Karaf transaction feature these provide the basis for JTA transactions.

JPAComponent, JMSComponent and transaction setup

An important part of this example is to use the jpa and jms components in a JTA transaction. This allows to roll back both in case of an error.
Below is the blueprint context we use. We setup the JMS component with a ConnectionFactory referenced as an OSGi service.
The JPAComponent is setup with an EntityManagerFactory using the jpa:unit config from Aries JPA. See Apache Karaf Tutorial Part 6 - Database Access for how this works.
The TransactionManager provided by Aries transaction is referenced as an OSGi service, wrapped as a spring PlatformTransactionManager and injected into the JmsComponent and JpaComponent.

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
    xmlns:jpa="http://aries.apache.org/xmlns/jpa/v1.0.0">

    <reference id="connectionFactory" interface="javax.jms.ConnectionFactory" />

    <bean id="jmsConfig" class="org.apache.camel.component.jms.JmsConfiguration">
        <property name="connectionFactory" ref="connectionFactory"/>
        <property name="transactionManager" ref="transactionManager"/>
    </bean>

    <bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
        <argument ref="jmsConfig"/>
    </bean>

    <reference id="jtaTransactionManager" interface="javax.transaction.TransactionManager"/>

    <bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager">
        <argument ref="jtaTransactionManager"/>
    </bean>

    <bean id="jpa" class="org.apache.camel.component.jpa.JpaComponent">
        <jpa:unit unitname="person2" property="entityManagerFactory"/>
        <property name="transactionManager" ref="transactionManager"/>
    </bean>

    <bean id="jpa2jmsRoute" class=""/>

    <bean id="PROPAGATION_REQUIRED" class="org.apache.camel.spring.spi.SpringTransactionPolicy">
        <property name="transactionManager" ref="transactionManager"/>
    </bean>

    <camelContext id="jpa2jms" xmlns="http://camel.apache.org/schema/blueprint">
        <routeBuilder ref="jpa2jmsRoute" />
    </camelContext>
</blueprint>


Running the Example

You can find the full example on github : JPA2JMS Example
Follow the Readme.txt to install the necessary Karaf features, bundles and configs.

Apart from this example we also install the dbexamplejpa. This allows us to use the person:add command defined there to populate the database table.
Open the Karaf console and type:

person:add "Christian Schneider" @schneider_chris

You should then see the following line in the log:

2012-07-19 10:27:31,133 | INFO  | Consumer[person] | person received ...
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <name>Christian Schneider</name>

So what happened

We used the person:add command to add a row to the person table. Our route picks up this record, reads it and converts it to a Person object. Then it marshals it into xml and sends it to the jms queue person.
Our second route then picks up the jms message and shows the xml in the log.

Error handling

The route in the example contains a small bean that reacts on the name of the person object and throws an exception if the name is "error".
It also contains some error handling so in case of an exception the xml is forwarded to an error directory.

So you can type the following in the Karaf shell:

person:add error error

This time the log should not show the xml. Instead it should appear as a file in the error directory below your karaf installation.


In this tutorial the main things we learned are how to use the camel-jpa component to write to as well as poll from a database and how to set up and use jta transactions to achieve solid error handling.

Back to Karaf Tutorials

Yesterday evening I did a talk about Apache Karaf and OSGi best practice together with Achim Nierbeck. Achim did the first part about OSGi basics and Apache Karaf and I did the second part about OSGi best practices.

One slide from the presentation about Karaf shows the big number of features that can be installed easily. So while the Karaf download is just about 8 MB you can install additional features transparently using maven that make it a full blown integration or enterprise application server.

OSGi best practices

In my part I showed how blueprint, OSGi Services and the config admin service can be used together to build a small example application consisting of the typical modules model, persistence and UI like shown below.

Except for the UI the example was from my first Karaf tutorial. While in the tutorial I used a simple Servlet UI that is merely able to display the Task objects, I wanted to show a fancier UI for this talk. Since I met the makers of Vaadin at the last W-JAX conference I got interested in this simple but powerful framework. So I gave it a spin. I had only about two days to prepare for the talk so I was not sure if I would be able to create a good UI with it. Fortunately it was really easy to use and it took me only about a day to learn the basics and build a full CRUD UI for my Task example, complete with data binding to the persistence service.

One additional challenge was to use vaadin in OSGi. The good thing is that it is already a bundle, so a WAB (Web application bundle) deployment of my UI would have worked. I wanted it to be pure OSGi though, so I searched a bit and found the vaadinbridge from Neil Bartlett. It allows to simply create a vaadin Application and factory class in a normal bundle and publish it as a service. The bridge will then pick it up and publish it to the HttpService.

The end result looks like this:

So you have a table with the current tasks (or to do items). You can add and delete tasks with the menu bar. When you select a task you can edit it in the form below. Any changes are directly sent to the service and updated in the UI.
The nice thing about vaadin is that it handles the complete client server communication and databinding for you. So this whole UI takes only about 120 lines of code. See ExampleApplication on github.

So the general idea of my part of the talk was to show how easy it is to create nice looking and architecturally sound applications using OSGi and Karaf. Many people still think OSGi will make your life harder for normal applications. I hope I could show that when using the right practices and tools OSGi can even be simpler and more fun than Servlet Container or Java EE focused development.

I plan to add a little more extensive Tutorial about using Vaadin on OSGi to my Karaf Tutorial series soon so stay tuned.

Presentation: ApacheKaraf.pdf

Source Code:

Vaadin UI:

Tasklist Model and Persistence:

Achim adapted another Vaadin OSGi example from Kai Tödter to Maven and Karaf:

After my talk at the last W-JAX I now had the opportunity to speak about Apache Camel at JAX as well. This time I had a bigger room available, and with almost 200 listeners it was well filled. This shows the great interest in Apache Camel. The presentation is attached directly. This time I focused more on OSGi and Apache Karaf as the runtime environment. I also had only 20 slides and used a larger part of the time for live demos. The talk was also filmed and should be available on the JAX website soon. I will post an update with the link then.

After the planned end of the talk there was a free time slot. Many of the listeners stayed to ask questions and I also showed some more in-depth examples of bean integration and pojo messaging. As a résumé I can say that Apache Camel is very popular, and it is especially developers and architects who drive its adoption while management still often relies on big commercial frameworks. Apache Karaf is perceived as a very interesting deployment environment. In most cases, however, there are difficulties with operations when going to production, as Apache Karaf and OSGi are still not very widespread and thus represent an additional server landscape.

Presentation: Apache Camel JAX12.pdf