
The long-awaited Babun 1.1.0 has just been released!

After months of hard work, a new version of Babun has finally been promoted to production!
This release features an important change in the auto-update behavior.

From now on, babun update will not only update babun itself, but will also check and update the underlying Cygwin instance, together with all of the Cygwin packages that have already been installed.

The main reason for this change was to make sure that the packages installed with pact are compatible with the underlying Cygwin instance. Pact always fetches the newest available package from the Cygwin repository, but it does not update Cygwin's internal DLLs, so a new version of a given package (git, ruby, emacs, etc.) could fail to work on top of the older Cygwin version embedded in babun. For this reason it is important to keep pact packages and the Cygwin core libraries in sync.

Babun will automatically check on startup whether a newer Cygwin version is available and prompt the user to update it. During a Cygwin update, babun will close itself, run the Cygwin installer in a separate cmd process, and restart once the installation has completed. If a newer version of Cygwin is available, pact will not allow the user to install new packages. Instead, they will be prompted with the following message:

{ ~ } » pact install arj
------------------------------------------------------------
CRITICAL: The underlying Cygwin version is outdated!
It's forbidden to install new packages as they may fail to work.
Execute 'babun update' and follow the instructions to update Cygwin.
If you know what you are doing add '--force' flag to proceed.
------------------------------------------------------------

As you can see, it is still possible to install packages using the --force option, but it is not guaranteed that the downloaded package will work correctly. Big thanks to @v-yadli for helping us design and develop this feature.
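For the record, a forced install would look something like this (arj is just the example package from the message above, and the exact flag position is an assumption):

    { ~ } » pact install arj --force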

The auto-update improvement is not the only change in 1.1.0. This release also ships a number of new features coded up during several Hackergartens, hacking sessions and long coding nights. The most important ones are (a few quick examples follow the list):

  • pact update, so that you may update a package to a newer version
  • fake sudo, so that you can run scripts containing sudo
  • soft links to hard drives, like /c and /d, so that you may forget about /cygdrive/c
  • a lot of fixed bugs; all of them are listed below this post
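Here is a quick, hedged sketch of how these features look in practice (the package and script names are illustrative only):

    { ~ } » pact update git          # update a single, already-installed package
    { ~ } » sudo ./some-script.sh    # fake sudo simply executes the command
    { ~ } » cd /c                    # soft link to the C:\ drive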

Many thanks to everybody who contributed to this release via pull requests, by helping other users, or even by fixing typos! Also a big thank you to the following GitHub users: @almorelle, @vanushv, @tonilampela, @v-yadli, @harijoe, @airborn and @kubamarchwicki for taking the time and effort to test this version.

If you have the previous version installed, it's important to invoke babun update to stay up-to-date!
We really hope that you'll like this release. If you do, tweet about it, star babun on GitHub or just… enjoy it :)

P.S. If somebody is interested in the full list of issues that have been squashed in 1.1.0, just have a look:

  • MD5 sum did not match, exiting #265
  • Package git is broken after update #259
  • Cmake just returns to prompt without doing anything #252
  • Upgrading Git #242
  • versioning in pact, and pact upgrade & pact dist-upgrade (or the other way) #239
  • Updating git #232
  • Numpy not working after pact install #231
  • Command exits with 127 on Babun, but works on Cygwin #225
  • Unable to start X server after pact install xorg-server xinit #222
  • Emacs doesn’t run #210
  • g++ compilation doesn’t produce any resulting file #203
  • Upgrading with pact not available #199
  • shellshock: bash 4.1.10(4) and zsh 5.0.2 (i686-cygwin) vulnerability #198
  • Pact Installer, Md5 checksum not matching… #257
  • Add “login” command to babun’s script #253
  • Rerun babun startup scripts after running babun update #250
  • Duplicate call of babun.zsh in /etc/zshrc #249
  • Git plugin changes my gitconfig #247
  • Cannot update oh-my-zsh on start #211
  • etc/zprofile is being called twice causing CHERE_INVOKING to fail. #205
  • .bashrc running three times #166

Babun 1.0.1 is out

Babun 1.0.1 is out!

There are a lot of goodies included in this release – the most important ones are:

  • There is no interference with the existing Cygwin installations.
  • You may have whitespace or accented characters in your Windows username.
  • You may install to a custom folder using the install.bat /t "d:\babun_folder" switch.
  • You may install babun with the %HOME% env variable set (the user's home folder will be reused).
  • You can easily pin babun to the taskbar.

If you already have babun 1.0.0 just invoke “babun update” and enjoy the newest version!
We would like to thank the community for being very active and supportive. Special thanks to Babun’s contributors:

Babun 1.0.0 released!

Would you like to use a Linux-like console on a Windows host without a lot of fuss? Try out babun!
Have a look at a 2-minute screencast by @tombujok: http://vimeo.com/95045348

Babun’s features in 10 seconds:

  • Pre-configured Cygwin with a lot of addons
  • Silent command-line installer, no admin rights required
  • pact – advanced package manager (like apt-get or yum)
  • xTerm-256 compatible console
  • HTTP(s) proxying support
  • Plugin-oriented architecture
  • Pre-configured git and shell
  • Integrated oh-my-zsh
  • Auto update feature

Interested? Visit babun’s website: http://babun.github.io/ or GitHub: https://github.com/babun/babun

Just another “Hack Weekend”

And they say that Software Engineering is boring… It couldn't be more wrong! At least that's how it is with me… but I know a couple of other freaks (the Hackergarten Basel team) who hack on a lot of interesting stuff whenever they can :) Here's the proof:
https://twitter.com/galderz/status/409642141036838912

After the last weekend of hacking, when I worked on my own open source projects (mainly on the p2-maven-plugin), this weekend I decided to contribute to the Grails Framework. I have done a couple of projects using Grails, but I haven't followed the latest releases, so it was a chance to get the gist of what has changed. And I wasn't disappointed! The framework really rocks!

So, I worked on 5 tickets assigned to the next release (2.3.5) and along the way submitted 3 pull requests. Hopefully, I have solved all of the problems. In some cases the users simply tried to use the mechanisms in the wrong way – maybe the documentation should be supplemented? Anyway, here's the list of issues, in case you are interested.
http://jira.grails.org/browse/GRAILS-10753
http://jira.grails.org/browse/GRAILS-10843
http://jira.grails.org/browse/GRAILS-10875
http://jira.grails.org/browse/GRAILS-10877
http://jira.grails.org/browse/GRAILS-9041

Now it's time to get some sleep! Finally! It's 3.32 am :)

Eclipse RCP dependency management done right with p2-maven-plugin

I am happy to announce the fourth official release of the p2-maven-plugin – version 1.1.0! I managed to find some time to work on the Maven 3.1.x compatibility issues, which have finally been fixed (…hmmm, actually, I managed to find a lot of time, since I've been hacking for the last 48 hours ;) ). I also refactored the code, so it should be cleaner and more readable now! The test coverage is awesome… as always – it averages around 95%. I am especially proud of the end-to-end integration tests, as that's where most of the coverage comes from. It was really easy to refactor the code having such great tests!

It's been a lot of work to release four versions in the last 10 months, even though the codebase is relatively small – but the devil is in the details, especially when it comes to OSGi and Tycho :). It was, however, a lot of fun, and it is satisfying to see that people appreciate your work and that the plugin is used more and more widely! I will try to post an article about the projects that use the p2-maven-plugin soon.

In the meantime, grab version 1.1.0 while it is still hot:
https://github.com/reficio/p2-maven-plugin/tree/v1.1.0
http://projects.reficio.org/p2-maven-plugin/1.1.0/manual.html
http://repo.reficio.org/maven/org/reficio/p2-maven-plugin/1.1.0/

What is the p2-maven-plugin?

Are you familiar with automated dependency management as offered by Maven, Gradle or any other fancy tool? You just define a project descriptor, add a bunch of dependencies and everything happens "automagically"… Piece of cake, huh?! Well, there are, however, the RCP "unfortunates" for whom it is not quite that easy… Why's that, you might ask?

The following blog entry outlines the problem perfectly: http://bit.ly/PypQEy The author presents five different approaches to configuring the build and dependency management in a Tycho / Eclipse RCP project and, in the end, she couldn't really propose a satisfactory solution!

In order to add a third-party dependency to an Eclipse RCP project the dependency has to reside in a P2 update site.
So in order to generate such a site you have to do three things by hand:

  1. download all required dependencies to a folder,
  2. recognize which dependencies are not OSGi bundles and bundle them using the 'bnd' tool,
  3. take all your bundles and invoke a P2 tool to generate a P2 update site.

Ufff, that is a mundane, cumbersome, repetitive and stupid activity that may take you a few hours – and imagine that you have to do it multiple times… That's where the p2-maven-plugin, authored by me, comes into play. It solves problems #1, #2 and #3 and does all the hard work for you. Isn't that just brilliant? I think it is… :)

How to use it in 2 minutes?

Using the p2-maven-plugin is really simple. I have prepared a quickstart pom.xml file so that you can give it a try right away. We will generate a P2 site with a couple of OSGi bundles (bundling some of them on the fly) and then expose it using the jetty-maven-plugin.

The full code of the example is located HERE. There are more examples HERE.

In order to use the plugin, the following section has to be added to your pom.xml file:
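The original snippet did not survive the migration. Since the plugin is not in Maven Central (see below), this is presumably a pluginRepository entry pointing at the Reficio repository linked above – a minimal sketch (the repository id is an assumption):

    <pluginRepositories>
        <pluginRepository>
            <id>reficio</id>
            <url>http://repo.reficio.org/maven/</url>
        </pluginRepository>
    </pluginRepositories>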

The plugin hasn’t been deployed to Maven Central so far – I would appreciate help on this issue!

Here’s the pom.xml:
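The original listing was lost; below is a hedged reconstruction based on the plugin's quickstart documentation – the wrapped artifacts are illustrative, and the jetty-plugin section matches the /site context used later in this post:

    <!-- inside the <build><plugins> section of the quickstart pom.xml -->
    <plugin>
        <groupId>org.reficio</groupId>
        <artifactId>p2-maven-plugin</artifactId>
        <version>1.1.0</version>
        <executions>
            <execution>
                <id>default-cli</id>
                <configuration>
                    <artifacts>
                        <!-- dependencies to fetch and (if needed) bundle on the fly -->
                        <artifact><id>commons-io:commons-io:2.1</id></artifact>
                        <artifact><id>commons-lang:commons-lang:2.4</id></artifact>
                    </artifacts>
                </configuration>
            </execution>
        </executions>
    </plugin>

    <!-- serves target/repository at http://localhost:8080/site -->
    <plugin>
        <groupId>org.mortbay.jetty</groupId>
        <artifactId>jetty-maven-plugin</artifactId>
        <configuration>
            <webAppSourceDirectory>${basedir}/target/repository/</webAppSourceDirectory>
            <webApp>
                <contextPath>/site</contextPath>
            </webApp>
        </configuration>
    </plugin>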

There are many more config options, but basically that's all you need for now. In order to generate the site, invoke 'mvn p2:site' in the folder where the pom.xml file resides. When the process finishes, your P2 site is ready!

You will see the following output:

Your site is located in the target/repository folder and looks like this:
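The original listing is gone; a typical P2 site generated by the plugin looks roughly like this (the bundle names follow the artifacts sketched above):

    target/repository
        artifacts.jar
        content.jar
        plugins/
            commons-io_2.1.0.jar
            commons-lang_2.4.0.jar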

Unfortunately, that's not the end of the story, since Tycho does not support local repositories (more precisely: repositories located in a local folder). The only way to work around it is to expose our newly created update site using an HTTP server. We're gonna use the jetty-plugin – don't worry, the example above contains a sample jetty-plugin set-up. Just type 'mvn jetty:run' and open the following link: http://localhost:8080/site. Your P2 update site will be there!

Now simply reference your site in your target definition and play with your Eclipse RCP project as if you were in a Plain Old Java Environment. Remember to enable the "Group items by category" option, otherwise you will not see any bundles.

Would you like to read more?: https://github.com/reficio/p2-maven-plugin/
Thanks for reading!

Hackergarten coding session at centeractive!

I am pleased to invite you to the first Hackergarten session, which will be held at centeractive (Worbstrasse 170, Guemligen) on Friday, the 23rd of November!

Hackergarten is a craftsmen's workshop, a classroom, a laboratory, a social circle, a writing group, a playground, and an artist's studio. Our goal is to create something that others can use; whether it be working software, improved documentation, or better educational materials. Our intent is to end each meeting with a patch or similar contribution submitted to an open and public project. Membership is open to anyone willing to contribute their time.

Our plan is to begin at 17.30 and have a lot of fun coding together – in pairs or bigger groups – eating pizza and drinking beer right from the start :)

What will we work on?

  • Hamlet D'Arcy, a Groovy committer, mentor and expert working at Canoo AG in Basel, will join us, so we're gonna have a lot of Groovy stuff to work on during the session.
  • Tom Bujok will prepare some topics on the soap-ws project that he leads, so if you want to play with some Web Services you simply cannot miss out on this!
  • Or we can pick up any other topic and simply work on it a bit…

Please invite your friends (developers) – everybody is welcome. By the way, every participant will get a free Retrospective Log Analyzer license.

Please let me know if you are coming and with how many people – we have to know how much beer to buy :) – simply contact me on LinkedIn.

EDIT: Andres Almiray (the Griffon project lead and the Java Champion) will join us, so expect some cool Griffon stuff coming up! Thanks Andres!

Getting version of the Maven project in a freestyle Jenkins build

Today I had to extract the version of a Maven project that is built as part of a freestyle Jenkins build. There are some Stack Overflow posts describing how to do it, but the implicit prerequisite is that the build is a native Maven build – not a freestyle one. That is not what I wanted.

Below I enclose a script that I came up with for a Jenkins freestyle build that contains a Maven project step. You may have to adjust the path to the pom.xml file. It is important that the script is executed as a system Groovy script, otherwise Jenkins' jars are not on the classpath.
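The original script was lost in migration; here is a hedged reconstruction of the approach described (it assumes the job runs on the master node and that pom.xml sits in the workspace root):

    import hudson.model.*

    // Runs as a "system Groovy script", i.e. inside the Jenkins JVM,
    // so the Jenkins model classes are on the classpath.
    def build = Thread.currentThread().executable

    // Parse the version from the pom.xml in the workspace root
    // (adjust the path if your pom lives elsewhere).
    def pom = new File(build.workspace.toString(), "pom.xml")
    def version = new XmlSlurper().parse(pom).version.text()

    // Expose it to subsequent build steps as MAVEN_VERSION.
    build.addAction(new ParametersAction(new StringParameterValue("MAVEN_VERSION", version)))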

The script sets an environment variable (MAVEN_VERSION) that may be used later on in the build. Enjoy!

“ESB Monitoring Framework for SOA Systems” published in IEEE TSC

I am really happy to announce that an article that I co-authored, titled "Enterprise Service Bus Monitoring Framework for SOA Systems", was published in IEEE Transactions on Services Computing. Have a look at the abstract:

The paper presents a Monitoring Framework for the integration layer of SOA systems realized by an Enterprise Service Bus (ESB). It introduces a generic ESB Metamodel (EMM) and defines mechanisms which gather monitoring data related to the model entities. Applicability of the model is verified on the Java Business Integration (JBI) specification – available standardization of an ESB. An analysis of the JBI specification from the Metamodel perspective is presented, resulting in identification of JBI monitoring deficiencies. Then, the paper illustrates a realization of mechanisms ameliorating JBI deficiencies. The paper also defines the notion of a Monitoring Goal Metamodel which lays a foundation for a fully-featured and technology-agnostic monitoring framework established on the EMM. The Monitoring Goal Metamodel allows a declarative definition of how the framework should react to anomalies by performing drill-down monitoring to diagnose the root cause of the problems. Evaluation of the prototype implementation of the ESB Monitoring Framework that verifies its correctness and fulfillment of the non-functional requirements is presented. Related work and some important relevant projects and technologies are also briefly described. Finally, the paper is summarized with conclusions and a vision of the proposed framework usage and extensions.

Here’s the link: IEEE Magazine. Let me know if you would like to get a copy.

Minimize the response time of your blog to less than a second!

Speaking of blogs. I have noticed recently that there is one thing I like more than blogging… it's configuring and tweaking my blog. And what I mean here is not playing with themes, images, fonts or styles. Don't get me wrong, I wanted my blog to look neat and slick on a plethora of browsers and mobile devices. I simply left that part to professionals and bought a commercial theme that matched my expectations. What I mean is doing the real man's job of installation, configuration and maintenance. You may ask yourself why the hell you should do it yourself when there's a wide choice of platforms where, after signing up, you can configure your shiny blog in a few seconds. That's a reasonable question, I have to admit… but doing it yourself IS FUN! OK, if you are not an IT freak it's probably boring and tiresome, but in that case you shouldn't be a visitor of my blog anyway. Coming back to the topic, though: not only is it fun, it is also profitable. I know, configuring Apache is not Java/Scala/Groovy programming, but it's really beneficial to leave the everyday playground from time to time and teach yourself something new. Apache, virtual hosts, proxies, reverse proxies, rewrite rules, content substitution, caching, etc. – these are topics that every respectable software engineer has to be at least acquainted with. It does not necessarily mean that you have to know all the config options by heart. However, if you don't know what features are available, you are lost. I can guarantee that this knowledge can be applied to any web project you currently work on. There's also nothing better than the confused face of a Linux admin when you teach him how to do stuff, but that's the next, advanced level. I have spent many hours hacking Linux stuff and I have never regretted it, as it simply helped me develop my skills and become a better software engineer.

THE GOAL

OK. So my goal was to configure a low-cost WordPress blog with a response time of less than a second. I chose WordPress, but most of the stuff that I mention here can be used with any blogging/CMS/website platform. I set my budget at around 99 USD a year, which was pretty tight, but let's go for it! The second requirement was not to touch the PHP code at all. At first that may not seem reasonable, as you might think code optimizations are necessary to achieve such performance. That's not true though, and if you customize the code, WordPress upgrades turn into a nightmare. You have to apply your changes again and again and, guess what, the code changes all the time. Next, I wanted to use as few WordPress plugins as possible. Why's that, you are probably asking for the second time? With many plugins you easily run into compatibility issues, when a certain version of one plugin does not work with some other plugin. Then you do an upgrade and most of the stuff stops working – I have seen that happen. What is more, plugins are normally tested against a plain WordPress installation, so it is quite possible to get unexpected behavior when one plugin alters the functionality of another – in an undefined way, of course. I wanted to make it really simple. A really plain instance of WordPress that is lightning fast and easily upgradeable would do the trick!

HOSTING

Having had bad experiences with shared hosting, I decided to find the cheapest VPS with an automatic backup-restore procedure that I could get. A VPS performs better in most cases and, what is important, you get access to an SSH console. I tried a few options and decided to buy a 12-month subscription at biznes-host.pl. I got unlimited transfer (10Mb link), 512 MB of guaranteed RAM (1024 maximum), a 2GHz processor, 10GB of disk storage and Debian Squeeze on top of it. Cost: around 50 USD a year!

WORDPRESS

How to install WordPress is not the topic here, so I will redirect you to this site: WordPress on Debian. It's basically a mechanical thing, nothing fancy.

THEME + CONTENT

Here you can choose whatever you want. My requirement was to have a "responsive" theme that would scale well graphically and look good at various resolutions and on mobile devices. I also paid attention that the theme was "lightweight" so that the download footprint stayed reasonably low.
Having the theme set up, I configured the "About me" section and created a sample blog entry with a photo. Next, I opened the www.reficio.org link for the first time. It took about 7 seconds to fully load the page. All measurements were performed by gtmetrix.com. Obviously, there was room for improvement.

OPTIMIZATION 1 – GZIP COMPRESSION

gtmetrix.com is really useful, as it points out what could be improved to decrease the response time of your site. The first hint was to enable gzip compression. I did not want to install any plugins for that though, so I decided to keep it simple and stupid (KISS). I simply enabled Apache's mod_deflate.
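If you are following along, enabling the module on Debian is a hedged two-liner (paths per the stock Apache2 layout):

    a2enmod deflate
    /etc/init.d/apache2 restart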

Default settings fully satisfied me (cat /etc/apache2/mods-available/deflate.conf):
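The original listing was lost; the stock Debian deflate.conf looks roughly like this:

    <IfModule mod_deflate.c>
        # these are known to be safe with MSIE 6
        AddOutputFilterByType DEFLATE text/html text/plain text/xml
        # everything else may cause problems with MSIE 6
        AddOutputFilterByType DEFLATE text/css
        AddOutputFilterByType DEFLATE application/x-javascript application/javascript application/ecmascript
        AddOutputFilterByType DEFLATE application/rss+xml
    </IfModule>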

OPTIMIZATION 2 – HTTP EXPIRES HEADERS

The second thing was to enable browser caching of static content. Nothing simpler with Apache2! Just invoke the following commands:
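The commands themselves did not survive the migration; presumably they enable mod_expires (mod_headers is my assumption) and restart Apache:

    a2enmod expires
    a2enmod headers
    /etc/init.d/apache2 restart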

Then go to your WordPress folder (in my case /var/www/wordpress) and edit the .htaccess file, adding the following lines:
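The original rules are gone; a typical mod_expires setup for static assets looks like this (the lifetimes are illustrative):

    <IfModule mod_expires.c>
        ExpiresActive On
        ExpiresByType image/jpeg "access plus 1 month"
        ExpiresByType image/png "access plus 1 month"
        ExpiresByType image/gif "access plus 1 month"
        ExpiresByType text/css "access plus 1 week"
        ExpiresByType application/javascript "access plus 1 week"
    </IfModule>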

OPTIMIZATION 3 – IMAGES

Now it is time to optimize the size and quality of all media files. Smush.it offers an API that performs these optimizations automatically, and there is a plugin that seamlessly integrates Smush.it with WordPress. Simply install the plugin and go to the “Media Library” where you can invoke the “Bulk Smush.it” action.

Your images will shrink losslessly by at least 40%, which means a smaller page size and a faster load time!

OPTIMIZATION 4 – CACHING

We have done a lot so far, but there's more juice to squeeze. Let's have a look at how we can improve the general performance of our site. So, how often do you post a new blog entry? If you are not running "the world news blog", the page does not change every minute – which is a perfect case for caching. The best option would be to cache the generated HTML pages and rewrite requests to a folder with the cached content. That would give the PHP stack a break and would not hit the database every time a page is rendered. It sounds complicated, but we can easily do all of that with the WP Super Cache plugin. Install and enable the plugin, go to "Settings" -> "WP Super Cache" and open the "Advanced" tab. Then select the options as shown in the picture below (remember not to enable gzip compression, as we have already enabled mod_deflate):

Then enable mod_rewrite and restart Apache:
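Again, the exact commands were lost; on Debian this presumably boils down to:

    a2enmod rewrite
    /etc/init.d/apache2 restart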

Finally, approve the modification of the .htaccess file – just click on "Update mod_rewrite rules". It will add the following section to the file:
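The generated block did not survive; WP Super Cache's rules look roughly like this trimmed sketch (the real block is longer and generated by the plugin, so don't copy this by hand):

    # BEGIN WPSuperCache
    <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        # serve the pre-generated HTML file if one exists for this request
        RewriteCond %{REQUEST_METHOD} !POST
        RewriteCond %{QUERY_STRING} !.*=.*
        RewriteCond %{HTTP_COOKIE} !^.*(comment_author_|wordpress_logged_in|wp-postpass_).*$
        RewriteCond %{DOCUMENT_ROOT}/wp-content/cache/supercache/%{SERVER_NAME}/$1/index.html -f
        RewriteRule ^(.*) /wp-content/cache/supercache/%{SERVER_NAME}/$1/index.html [L]
    </IfModule>
    # END WPSuperCache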

So far so good. Let's check the results of our work and measure the response time… The result averages about 2.3 seconds. Not bad, but still more than two times slower than expected.

OPTIMIZATION 5 – CONTENT DELIVERY NETWORK (CDN)

It's gonna be tricky now, as we cannot do much more tweaking of Apache. Let's use gtmetrix once more and analyze the response timeline. In the picture below we can see that the first request, which fetches the HTML content, takes around 750ms – which matches our expectations. There are, however, 24 additional, slower requests that fetch images, stylesheets and javascripts. We could try to combine these files and limit the number of requests, but that's not that straightforward. All these files have one thing in common though – they are static. OK, you may change a CSS file once in a while, but basically it's the HTML files that change more frequently. What about using a Content Delivery Network (CDN) to mirror these files, so that they can be downloaded quickly, with low latency, from any place in the world? That sounds reasonable, doesn't it?

I had a quick look at what is available on the market and shortlisted two providers: Amazon CloudFront and MaxCDN. I chose Amazon CloudFront as I liked its pay-per-use business model, whereas MaxCDN costs around 40 USD for the basic package. So, I signed up at Amazon and logged in to the Amazon AWS CloudFront Management Console. The configuration is pretty straightforward. You just have to create a "distribution" and enter the origin domain name in the origins section.

Then you just configure the CDN behavior (in my case it was only HTTP/HTTPS traffic – I also wanted to use the origin Cache headers that we have already configured).

Then click OK and wait until the distribution has switched to the "Enabled" state. Finally, jot down the address of your shiny CDN server; in my case it was: d15618vwtt9nw5.cloudfront.net

So let's review what we have already done. Basically, we created a CDN distribution that mirrors the content of our site. It works in such a way that whenever you hit the CDN server with a specific link, it replaces the base of the URL, hits the origin server, fetches the content and caches it internally using the Cache headers – after that it serves the content to the client. As long as the content does not expire, the CDN server does not have to hit your origin server to serve resources. The advantage of a CDN is that the servers are distributed across the globe, meaning that you can quickly access cached resources from any place. Think of it as an ultra-fast, distributed HTTP cache. OK. But we still have to make our server delegate the traffic to the CDN server. It would be perfect if we could have fine-grained control over what traffic to delegate and what not to. We don't want, for example, to cache HTML pages, since every time we posted a new blog entry we would have to invalidate the old content. And remember, invalidating CDN cache entries is expensive, so we don't want to do it too often.

There are some CDN plugins that try to do the traffic delegation, but they are limited and did not fulfill my requirements. They cannot process all WordPress files, are often limited to media files only, and their configuration is pretty complex. Applying the KISS methodology once more, I wanted to do the configuration in the simplest possible way. gtmetrix was helpful in pointing out which files could be moved to the CDN server – see the screenshot below:

OK. So let's try to configure the traffic delegation. First shot – let's use Apache redirect rules, so that selected requests are redirected to the CDN server. So far, so good, but in this case our server would still get 25 requests (out of which 24 are redirects). If the server delays handling them because of high traffic, the CDN is not helpful at all. Tests proved my assumption: the response time was a bit better, but not by much.

The perfect solution would be to modify the content of the page while it's being sent to the client, so that all the selected links point to the CDN server. We could configure it using mod_proxy and ProxyHTMLURLMap but, again, proxying the site served by the same Apache locally does not seem KISS and would double Apache's load. Having done some research, I finally found what I was looking for – it's called mod_substitute. To enable it, simply invoke:
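The exact command did not survive; on Debian it is presumably:

    a2enmod substitute
    /etc/init.d/apache2 restart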

Then edit the .htaccess file to configure the substitution rules.
Here’s my configuration:
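The original rules are gone; based on the description below, they presumably looked something like this (the domain names are taken from earlier in the post, and the exact paths are an assumption):

    <IfModule mod_substitute.c>
        # rewrite static-asset URLs in outgoing HTML to point at the CDN
        AddOutputFilterByType SUBSTITUTE text/html
        Substitute "s|http://www.reficio.org/wp-content|http://d15618vwtt9nw5.cloudfront.net/wp-content|i"
    </IfModule>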

mod_substitute modifies the content of the HTTP response on the fly, replacing content according to the substitution rules. In my case it modifies selected URLs, pointing them to the CDN server. It's flexible, easily configurable, does not depend on any WordPress plugins and can be easily disabled. So right now, all the static content will be fetched from the CDN server. This means that my VPS will be hit only once, to get the HTML file; everything else will be downloaded from the CDN server. If you are concerned about the cost of the CDN service, let me calm you down: I have paid 6 cents for the last 2 weeks. If you expect horrendous traffic you will pay more, of course – but a personal blog will never cost more than a few dollars a month. So, let's use gtmetrix for the last time…

THE RESULT

The result is amazing. As you can see in the picture below, the response time went down to 993ms. OK, if you took an average it would be a bit longer, but nevertheless I was fully satisfied! All the static content is served by the CDN server – look how quick it is: around 15ms per resource! I paid 50 USD for the VPS and a few bucks for the CDN. All in all, I fulfilled all of the specified requirements, kept the config KISS and had a lot of fun!

As you can see, the long hours spent playing with Apache were fruitful. Not only did I learn a lot, but I also configured a pimped-up, lightning-fast WordPress blog. I hope you enjoyed it and I am eager to hear your experiences!

UberConf – be ready to have your mind blown!

I hate blogging about conferences. I find it dull, unentertaining and worthless. Yet another blog entry about a few sessions that the person attended and some comments on whether it was good or bad. I did it once, following the "common" pattern, and I still feel bad about it. I do not particularly like these posts, since they rarely judge the conference as a whole and concentrate on that year's events which, guess what, will not be there the following year. IMHO, the only noble purpose they serve is to broadcast the message that this particular person attended that specific conference. Fair enough.

This post is supposed to be different (I hope, at least) because what I would like to do is focus on why UberConf rocks. That being said, I will give you six reasons to prove my point.

  • First of all, there are no sponsors (literally NULL). That is really fair, since you have to pay for the ticket, and there is nothing worse in the schedule than sessions presented by sponsors. They are pretty often boring, unobjective and tiresome.
  • Next, sessions are 90 minutes long. It is the biggest advantage in comparison to other conferences (JavaZone 60 min., Devoxx 60 min., Geecon 60 min., JAX 50 min., Jazoon 50 min.). 90 minutes is enough to give a proper introduction, develop the topic, present decent code samples, and wrap up with a thorough Q&A section. From now on, anything shorter than that will seem just too short.
  • In addition, a full-day workshop preceding the conference is a brilliant idea. I am aware that many conferences offer university days, but those sessions are at most 3 hours long which, in my opinion, is not enough to fully cover a topic at a reasonable level of abstraction.
  • Do not be surprised if it is 21.30 in the evening and you are still at a lecture, a session or a workshop. The variety and number of sessions is just mind-blowing.
  • Over many years, UberConf and NFJS have consistently invited the most influential Java rock-star speakers, like Ted Neward, Venkat Subramaniam, Mark Richards, Ken Sipe, Tim Berglund, Matthew McCullough, Neal Ford, etc. You cannot argue with that.
  • Finally, the atmosphere at UberConf rocks. People are so friendly and open that you immediately have the impression that you have known each other for a long time. The enthusiasm and dynamism are so tangible that you can almost feel them in the air.

Personally, I enjoyed the conference immensely, and all the fun around it. Ken Sipe once offered 20 USD to the person who implemented a cross-tab scripting example at his web-security workshop. I am neither a javascript expert nor a web-development geek, but the task was pretty enjoyable. You can see the full script below (it works on Firefox only). The "showAllURLs" function displays the URLs opened in all of the browser's windows/tabs. I enjoyed the 20 bucks as well – a Long Island iced tea with Szczepan Faber and some other guys was great!

I was also happy to discuss the issues around JmsTemplate with Mark Richards, who claimed that JmsTemplate was 10 times slower than native JMS code when sending 1000 consecutive messages. He even presented some examples and performance charts to prove it. That was pretty confusing to me, since internal tests I had performed a few years earlier showed something completely different. After a while I noticed that he simply had not compared apples with apples. In the code using the native JMS API, Mark reused one JMS connection and one JMS session, whereas in the code using JmsTemplate he did not use caching at all (by default JmsTemplate opens a new connection and a new session on every operation – it is well documented, though). I sent him the following XML snippet with a CachingConnectionFactory setup and he reran the test. The result was very different – JmsTemplate was only about 2 times slower – and that could be fine-tuned as well. It was a reasonable result, though, since JmsTemplate offers a lot of convenience and eliminates boilerplate code from your app.
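The snippet itself was lost; it was presumably a standard Spring setup along these lines (the bean ids and the cache size are illustrative):

    <!-- wraps the provider's ConnectionFactory, caching connections and sessions -->
    <bean id="cachingConnectionFactory"
          class="org.springframework.jms.connection.CachingConnectionFactory">
        <property name="targetConnectionFactory" ref="jmsConnectionFactory"/>
        <property name="sessionCacheSize" value="10"/>
    </bean>

    <bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
        <property name="connectionFactory" ref="cachingConnectionFactory"/>
    </bean>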

That is pretty much it. I hope I convinced you that UberConf is really a great show, a bit different from the ones we have in Europe. I have never regretted going there twice – even when I suffered from jet lag at 4 in the morning. I hope you will not regret it either. See you at UberConf!