Category Archives: Uncategorized

Gamedev or Insurance? Which is best?

Interesting read from a colleague of mine over at

Preview: DDSL – Dynamic Distributed Service Locator

Have a look at DDSL – Dynamic Distributed Service Locator over at GitHub

There you’ll find working code, a README, and an example with a step-by-step guide.

An official post will follow once it is more complete…

Legacy app that Plays with the future..

Enabling our legacy web application for the Future using The Play Framework

Some background

At work we have a big, old web application written in a strange language, originally running on a mainframe web server.

It’s kind of JSP-ish, with just one big script per page – not even close to MVC.

(Related post: Taking control over legacy code)

The application is constantly being extended, so it has never been realistic to stop adding features while porting the whole application to a modern platform – and many of the developers like the JSP feeling (just drop in a new file and it works…)

Some years ago I wrote an emulator for that web server in Java (using extensive precompiling and an open source interpreter). At first the emulator had to be 100% backward compatible, but eventually we switched off the mainframe web server and used only the emulator – running in Tomcat on Windows (I know: Linux is better). We also added features to the language (like include files etc… 🙂 ) using precompiling.
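The include-file feature added by precompiling can be sketched roughly like this. The `%include name` directive syntax and the in-memory source lookup are my own illustration, not the emulator’s actual implementation:

```java
import java.util.Map;

// Minimal sketch of a precompile pass that expands include directives.
// The "%include name" syntax and the in-memory source map are illustrative;
// the real emulator's precompiler may differ. (No cycle detection here:
// mutually including files would recurse forever in this sketch.)
class IncludeExpander {

    // Expands every line of the form "%include <name>" into the named
    // source, recursively, so the interpreter only ever sees plain code.
    public static String expand(String source, Map<String, String> includes) {
        StringBuilder out = new StringBuilder();
        for (String line : source.split("\n", -1)) {
            String trimmed = line.trim();
            if (trimmed.startsWith("%include ")) {
                String name = trimmed.substring("%include ".length()).trim();
                String included = includes.get(name);
                if (included == null) {
                    throw new IllegalArgumentException("Unknown include: " + name);
                }
                out.append(expand(included, includes)).append('\n');
            } else {
                out.append(line).append('\n');
            }
        }
        // Drop the trailing newline added by the loop.
        out.setLength(out.length() - 1);
        return out.toString();
    }
}
```

Because the expansion happens before interpretation, the legacy interpreter never needs to know the feature exists.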

The only solution I saw for introducing modern technology into the application was to make it possible to gradually rewrite parts of it – while the old and new parts of the app communicated and worked side by side.

Over the years I’ve created many solutions to enable our “script-kiddies” to stop developing in the legacy language and start using Java… but I never got them to use any of them.

The Solution

In the fall of 2010 after attending JavaZone I had a breakthrough: Writing the new code in Java using the Play Framework.

Earlier today we upgraded our production environment to use the new legacy-play-integrated solution.

How we did it

As I said, all parts of the application have to be able to communicate – sharing state. The legacy application is stateful (read: session stored on the server) – the Play part of the app is not.

I created an “external session store” (let’s call it ESS): when the legacy app needs to store data in the session, it stores it in the ESS and keeps only an ESS id in the real session. This ESS id is written to a cookie.

When the browser then accesses the Play part of the application – full frame, via iframes, or via “DOM-injection” using JavaScript – Play can read the ESS id from the cookie (both the legacy part and the Play part use the same domain namespace, fixed using a reverse proxy). The Play app can then access the ESS server-side via REST to read and store data. This way, both parts of the app share data in real time, server-side.
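A minimal sketch of the Play side of this, assuming a cookie named `ESS_ID` and a hypothetical REST URL scheme (both names are made up for illustration):

```java
// Sketch of how the Play side could resolve the shared ESS id from the
// request's Cookie header. The cookie name "ESS_ID" and the REST URL
// pattern are assumptions for illustration, not the production names.
class EssClient {

    static final String COOKIE_NAME = "ESS_ID";

    // Extracts the ESS id from a raw "Cookie:" header value, or null
    // if the browser has not been given one yet.
    public static String essIdFromCookieHeader(String cookieHeader) {
        if (cookieHeader == null) return null;
        for (String pair : cookieHeader.split(";")) {
            String[] kv = pair.trim().split("=", 2);
            if (kv.length == 2 && kv[0].equals(COOKIE_NAME)) {
                return kv[1];
            }
        }
        return null;
    }

    // The server-side REST URL both halves of the app would use to
    // read/write shared state for this session (hypothetical endpoint).
    public static String essResourceUrl(String baseUrl, String essId, String key) {
        return baseUrl + "/ess/" + essId + "/" + key;
    }
}
```

Because both app halves only ever exchange the opaque ESS id through the browser, the actual session data never leaves the server side.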

The “DOM-injection” technique was the killer feature that made it possible to convince our team leader that we should implement all new functionality in Play.

It works like this: the legacy app renders the full page (menus and all), leaving the “main part” empty except for a <div>. It then uses jQuery to fetch a page from Play and injects the returned DOM (read: HTML with embedded JavaScript) into that <div>. When the injected DOM is rendered, its embedded JavaScript/jQuery is executed. The page rendered in the user’s browser therefore contains no iframes, and the “Play part” of the application/page is totally self-contained. The result was that the (partial) Play app stayed nice and simple: some Java code server-side and mostly HTML/jQuery client-side, since most of the backend systems already served data as JSON via REST.
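The markup the legacy page emits for this could look roughly like the output of this sketch. The div id, the URL, and the use of jQuery’s load() are illustrative, not the production code:

```java
// Sketch of the markup the legacy page could emit for "DOM-injection":
// an empty <div> plus a jQuery call that fetches a Play-rendered fragment
// and injects it. The id, URL, and jQuery usage are illustrative.
class DomInjection {

    public static String placeholder(String divId, String playUrl) {
        return "<div id=\"" + divId + "\"></div>\n"
             + "<script>\n"
             + "  // jQuery fetches the Play fragment and injects it into the div;\n"
             + "  // scripts embedded in the fragment run after injection.\n"
             + "  $(function() { $('#" + divId + "').load('" + playUrl + "'); });\n"
             + "</script>";
    }
}
```

The legacy script only has to emit this placeholder; everything inside the div is owned end-to-end by the Play app.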


My feeling is that the development team is so happy with what the Play Framework gave us that the only legacy code written in the future will be code that disables old functionality – or links to new functionality written in Play.

In the Future?

Maybe they can be convinced to ditch Java for Scala? I hope so..

Maven Dependency Management (Deptools)

A very good colleague of mine wrote an excellent post about my maven plugin called ‘deptools’.

You can read it on his blog

Moved from to WordPress

I have now moved my blog away from to my own self-hosted WordPress… Even though my posts are not that interesting, it was almost impossible to find them on – you had to search specifically in “blogs” to find them…

JVM: Solving OutOfMemoryError with less Memory

At work we have 6 web applications (WAR) deployed in Glassfish v2.

In production we experienced sporadic java.lang.OutOfMemoryError: Java heap space under high load. We were sure we did not have a classic Java memory leak, since the used HEAP space decreased after some time and returned to normal. We suspected that the problem was related to our use of EHCache (which stores cached objects in HEAP space).

(Note: This blog post is a summary of two days of research – we tried many things and many numbers, on several more or less identical servers – so the numbers and values in this post are approximate, but you get the point.)

The JVM, and therefore GlassFish, was running with a max of 768 MB of HEAP space and 385 MB of PermGen space.

To reduce the possibility of getting OutOfMemoryError until we had worked out the issue, we decided to increase the maximum HEAP space the JVM could allocate (-Xmx).

GlassFish runs on a 4 core 32 bit server with Windows Server 2003 with 8 GB of RAM.

We increased the max HEAP size to 1 GB (-Xmx1024m). We started GlassFish with no applications deployed, then deployed one application after the other – 6 WARs. All applications were deployed without problems and our apps ran fine. After some time the JVM suddenly died.

We found a JVM crash dump. We didn’t read it too carefully, but it mentioned OutOfMemoryError. After some more research we found that the JVM had died before the HEAP space reached 1 GB. We thought a solution was to set the initial HEAP size, telling the JVM to allocate all of the HEAP at startup (-Xms1024m).

So now we had 1024 MB HEAP and 385 MB PermGen – a total of 1409 MB.

When we then again started GlassFish (with no apps deployed), the JVM and GlassFish started up just fine. So we started to deploy the applications, one by one. In the middle of deploying the second application, the JVM died… So by allocating more memory up front, the JVM died with OutOfMemoryError even earlier.

After a lot of research, and reading this great post, we concluded the following:

We took a closer look at the JVM crash dump:

java.lang.OutOfMemoryError: requested 884680 bytes for Chunk::new. Out of swap space?

It also said that the JVM crashed in this thread:

0x5be76800 JavaThread “CompilerThread1” daemon [_thread_in_native, id=6764, stack(0x5c1a0000,0x5c1f0000)]

We had configured the JVM to use a lot of memory for HEAP and PermGen. A 32-bit Windows process can use at most 2 GB in total. The internals of the JVM (e.g. its JIT compiler) need memory of their own, as do the loaded DLLs. Since so much of those 2 GB was already reserved for HEAP/PermGen, Windows said NO when the JVM asked for more memory inside CompilerThread1. When this happened, the JVM crashed with java.lang.OutOfMemoryError: requested 884680 bytes for Chunk::new. Out of swap space?
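The arithmetic can be made explicit with the numbers from this post (the 2048 MB figure is the default user address space of a 32-bit Windows process):

```java
// Rough 32-bit Windows address-space budget from the numbers in the post.
// The ~2048 MB user-space limit is the default for a 32-bit process;
// JIT code caches, thread stacks, and DLLs must all fit in whatever is
// left after the HEAP and PermGen reservations.
class JvmAddressSpaceBudget {

    public static int nativeBudgetMb(int processLimitMb, int heapMb, int permGenMb) {
        return processLimitMb - heapMb - permGenMb;
    }
}
```

nativeBudgetMb(2048, 1024, 385) leaves only 639 MB for JIT code, thread stacks and DLLs – and DLL load addresses fragment that space further – while the original 768 MB heap left 895 MB. Which is why the fix was to give the JVM less memory, not more.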


Tell the JVM to use LESS memory..

maven deptools plugin 1.1 released

Version 1.1 of the maven deptools plugin now supports Maven 3 and the Maven Enforcer Plugin.

The maven deptools plugin “…gives a build error if Maven resolves transitive dependencies in such a way that a non-newest version is chosen.”

This plugin has turned out to be very useful in the company I work for.

The plugin can be found here:

maven deptools plugin – RC1 released

I’ve just released RC1 of the maven deptools plugin. (This is a beta, but I need real feedback.)

“…a Maven 2 plugin which gives a build error if Maven resolves transitive dependencies in such a way that a non-newest version is chosen”

At work we have all kinds of dependency problems related to transitive dependencies…

More info here:

Taking control over legacy code

The problem
Some years ago I faced a situation where a company’s main public web application ran on a legacy mainframe (OS/390) web server. It was written in REXX.
The developers had to do the actual coding in a terminal window (3270).
If a developer wanted to code in a regular text editor (TextPad), he first had to download the source file via FTP, edit it, then FTP the new version back up to the mainframe to test it. To compile the uploaded source file he had to use a terminal window, navigate to the file (dataset), then disable and enable it to force a recompile.
Another major problem with the FTP solution was that different developers overwrote each other’s changes when they uploaded their new files.
Since the source was not managed by any source control system, it was basically impossible to figure out who had changed the code, and why.
As you can see, this was not an ideal situation.
The ideal solution
The ideal solution would of course be to rewrite the application from scratch with modern technology, but this was not an option for the company. They felt they had invested too much in the existing code and that a rewrite would take too long – not to mention that they would be unable to create new things while porting the old.
Taking control over the legacy code

Since rewriting the application was not an option, we needed to make it as convenient as possible to work with.

This is what I ended up doing:
We downloaded all the code and added it to Subversion. Then we “defined” that the version stored in Subversion was the master (correct) version of the code – not the version stored on the mainframe.
Then I wrote a deployment tool in Java that automated the deployment-process.
Since we could not prevent other developers (in other teams) from editing the code directly on the mainframe, we needed a mechanism to keep us from silently overwriting their changes. This was a critical feature when selling the idea to my leader.
To detect this, the deployment tool automatically added some metadata to the source file when uploading it to the mainframe. This metadata contained a hash value (a CRC fingerprint) representing the exact state of the source code at upload time. This made it possible to validate the existing mainframe version of a file before overwriting it with a new version.
The metadata was generated inside a comment (/* metadata */), since the altered source file still needed to compile.
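A sketch of how such a stamp-and-validate scheme could work, using CRC32. The comment format here is my own invention; the real tool’s metadata also carried Subversion info:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Sketch of the deployment tool's metadata trick: a comment is appended to
// the uploaded source carrying a checksum of the code, so the tool can later
// verify that nobody edited the file directly on the mainframe. The comment
// format and the choice of CRC32 are illustrative.
class DeployMetadata {

    static final String PREFIX = "/* deploy-meta crc=";
    static final String SUFFIX = " */";

    public static long crcOf(String source) {
        CRC32 crc = new CRC32();
        crc.update(source.getBytes(StandardCharsets.UTF_8));
        return crc.getValue();
    }

    // Appends the metadata comment; the file still compiles because the
    // metadata lives inside a comment.
    public static String stamp(String source) {
        return source + "\n" + PREFIX + crcOf(source) + SUFFIX;
    }

    // True if the mainframe copy still matches the checksum it was stamped
    // with, i.e. it is safe to overwrite without losing someone's edits.
    public static boolean safeToOverwrite(String mainframeCopy) {
        int idx = mainframeCopy.lastIndexOf(PREFIX);
        if (idx < 0) return false; // never stamped: assume manual edits
        String body = mainframeCopy.substring(0, idx);
        if (body.endsWith("\n")) body = body.substring(0, body.length() - 1);
        String crcText = mainframeCopy.substring(idx + PREFIX.length());
        int end = crcText.indexOf(SUFFIX);
        if (end < 0) return false;
        long stamped = Long.parseLong(crcText.substring(0, end).trim());
        return crcOf(body) == stamped;
    }
}
```

Any manual edit on the mainframe changes the body without updating the stamp, so the tool refuses to overwrite and a human can merge the changes first.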
The deployment tool could also compile the source remotely on the mainframe. This was done using a Unix tool called s3270 which lets you script the terminal session. Since we needed to run the deployment tool on Windows, it ran s3270 via Cygwin.
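Driving s3270 works by feeding it one action per line on stdin (e.g. via ProcessBuilder through Cygwin). The actions below are purely illustrative – the real keystrokes for navigating to a dataset and forcing a recompile are site-specific:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of building a scripted 3270 session. s3270 reads one action per
// line on stdin (Connect, String, Enter, ...); the exact screens and
// commands needed on this mainframe are hypothetical.
class RecompileScript {

    public static List<String> actionsFor(String host, String dataset) {
        List<String> actions = new ArrayList<>();
        actions.add("Connect(" + host + ")");
        actions.add("Wait(InputField)");
        // Navigate to the dataset and trigger a recompile
        // (hypothetical command for this site's screens).
        actions.add("String(\"BROWSE " + dataset + "\")");
        actions.add("Enter");
        actions.add("Disconnect");
        return actions;
    }
}
```

The deployment tool would write these lines to the s3270 process’s stdin and read the screen responses back, turning a manual terminal ritual into one automated step.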

Since the upload and compile process was slow, we wanted to avoid uploading and compiling unchanged files.
To fix this we also included the Subversion URL and revision info in the metadata. This made it possible to work out which files had changed, and to upload and compile only those.

By taking control over the source (adding it to Subversion) and automating the deploy and compile process, we ended up with a much better development environment.
This turned out to be only the first step away from the mainframe. Today the old application still lives – only now it runs inside an emulator in Tomcat on a Windows server.
I hope this blog post inspires someone.

Replacing Weblogic with Glassfish

At work we’re working on a rather big Java integration project. It consists of several applications, some exposing services over REST, others consuming REST services (JSF, Spring WebFlow).

In our development environment we run Jetty and/or Tomcat. Someone decided that we had to use Weblogic in the test and production environments.
Here is a list of some of the problems Weblogic caused us:
  • It starts up/restarts extremely slowly
  • The admin console is really slow
  • It takes forever to deploy to it
  • We had problems getting multiple datasources to different DB2 environments (OS/390, AS400, Windows) to work at the same time
  • Our applications ran slower than expected
  • We experienced strange problems related to Ajax and RichFaces which were impossible to track down, since the problems differed between Weblogic instances
Today we managed to persuade our project leader to replace Weblogic with Glassfish v2.
Now everything runs much faster, without problems, on Glassfish in our test environment. I really like what I have seen of Glassfish so far.