Wednesday, 3 January 2018

Big Releases and Handovers in the JVM World

Happy New Year everyone! Hard to believe it's already 2018, and I'm 4 months into my ThoughtWorks journey. As they say, time flies...

Anyway, the second half of 2017 brought several pieces of news in the JVM space that I felt quite excited about:

August: Ceylon handed over to the Eclipse Foundation

The Ceylon programming language has intrigued me ever since I came across it several years ago: compile-time null-safe programming, union types, functions as first-class citizens, a well-thought-out module system, and it can be compiled for the JVM as well as for different JavaScript runtimes and (for a little over a year now) native Android. It targets pretty much the same "demographic" as JetBrains' better-known Kotlin. I'm hoping that this handover will help resolve what in my mind has been Ceylon's biggest issue over the years - the lack of marketing. To this day, I cannot understand why Red Hat never gave it a real commercial push.
Currently, the Ceylon community is mostly busy reworking the codebase so that it aligns with the foundation's guidelines, but I'm looking forward to the opportunities that will arise once this phase is completed.

September: JUnit 5 officially released

I've been using the various JUnit 5 milestone releases in my projects for over a year now, and I must say I'm very happy. A revamped extension model, and finally the use of Java 8's language features - including the very intriguing Dynamic Test concept. If you've ever worked with JUnit's Theories: it's the same idea, but employing the full power of the JDK (and with the new extension model, you don't need to limit your test to a single runner anymore). I've used it with Maven and with Gradle, and it works flawlessly. Also, Hamcrest matchers are no longer included in the standard JUnit library, acknowledging the fact that there are alternatives out there, e.g. AssertJ (which is prominently featured in ThoughtWorks' latest Technology Radar). The project has learned its lessons.
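To illustrate the Dynamic Test concept, here's a minimal sketch - the class and test data are made up for this post. A single @TestFactory method generates one test case per input value at runtime:

    import java.util.stream.Stream;

    import org.junit.jupiter.api.DynamicTest;
    import org.junit.jupiter.api.TestFactory;

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.DynamicTest.dynamicTest;

    class PalindromeTest {

        // One test per word is created at runtime - no special runner required.
        @TestFactory
        Stream<DynamicTest> wordsArePalindromes() {
            return Stream.of("racecar", "madam", "otto")
                    .map(word -> dynamicTest("palindrome: " + word,
                            () -> assertEquals(word,
                                    new StringBuilder(word).reverse().toString())));
        }
    }

Each element of the returned stream shows up as its own named test in the report - something Theories never managed this elegantly.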
Oh, and did I mention? If you (rightfully) feel rewriting your existing tests in JUnit 5 style is a lot of work, you can leave them as they are - or apply some tooling help: my IDE of choice (IntelliJ IDEA) offers a refactoring/code inspection that helps with "simple" tests. There is also a Java-based tool on GitHub which I haven't tested myself.

September: Oracle to transfer ownership of Java EE to the Eclipse Foundation

Others must have suspected this much earlier, but I've got to admit that this news hit me out of nowhere. My beloved Java EE platform will be handed over to the Eclipse Foundation, which is rebranding it as Eclipse Enterprise for Java (EE4J). There has been increasing criticism of the platform in recent years: too bulky, not enough innovation (or velocity in delivering it), not fit for the cloud etc. While I consider some of the arguments exaggerated, they're not totally off. There are reasons for things like the MicroProfile to emerge - which is (probably not at all coincidentally) run by the Eclipse Foundation as well. Having both of these under the same governance will hopefully benefit us guys in the trenches.

September: JDK 9 officially released

Only a year late compared to the original announcement. Well, at least we've finally got the modularity that people have been talking about since, IIRC, the Java 6 days. In fact, it's such an intriguing concept that there's actually a book devoted solely to this topic. But there's more:
  • The JShell is a long overdue REPL tool which allows developers to quickly test code snippets without having to write a class wrapper. This will enable us dinosaurs to play around with statements and unfamiliar APIs in much the same way JavaScript developers have been using the browser console for years (see the short session after this list).
  • The Collections API receives methods in the style of List.of("A", "B", "C") for creating immutable Lists/Sets/Maps/... on the fly. Until now, we had to rely on third-party libraries such as Google's Guava for this convenience.
  • I registered some excitement about the new Process API. Although I see no personal benefit on the horizon, I can imagine certain projects that will happily welcome these new capabilities.
  • The Javadoc tool can now generate HTML5 code. Starting with the next major version, this will be default behaviour.
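To give a feel for the first two items, here's a minimal JShell session (assuming a JDK 9 installation on the PATH; output abridged):

    $ jshell
    jshell> List<String> names = List.of("A", "B", "C")
    names ==> [A, B, C]
    jshell> names.add("D")
    |  java.lang.UnsupportedOperationException thrown
    jshell> /exit

Note how the list created via List.of is truly immutable - trying to modify it fails at runtime.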
These features are only the ones that I myself found particularly interesting. I know there's stuff "under the hood" (e.g. GC changes) and I'm also aware of the supposedly improved JavaScript support. However, I can't comment on either of these due to my lack of practical knowledge.
Compared to the Java 8 release, I feel a lot less excited. Nevertheless, it's great news that this release has been completed.

December: Effective Java, 3rd Edition published

I had been wondering about this since the Java 8 release: would Josh Bloch go back to his invaluable piece of work one more time and incorporate the new features? Well, he did. Not only that, he even added Java 9 specific items. I haven't bought the new edition yet, but I will very likely do so pretty soon. Bloch's book should be standard material for any halfway serious Java developer. I remember reading the 2nd edition years back and experiencing quite a few "Aha" moments, and I'm expecting the same from this much-awaited update. Unlike most developer books, this one actually works as a "read-only" piece - read an item and recall it when necessary. I found that I didn't need to type out the code examples in order to understand them.

That's all for this time. I wish everyone a great start to 2018. May it be at least as good as (if not better than) 2017 for all of you!

Thursday, 24 August 2017

Paradise Lost? I don't think so.

The word is out. Roy Singham is selling the flagship of IT consulting. The champions of agility, the pioneers of DevOps, the company that coined the already overused term "Microservices". The company with what may very well be the highest concentration of conference speakers and book authors in the entire industry. People like Rebecca Parsons, Martin Fowler, Neal Ford, Erik Dörnenburg.
And who are the new owners? A combination of employees and ex-ThoughtWorkers who have got enough funds? Or maybe some independent trust? No - although the latter is what Roy had envisioned. Martin explains very well why this scenario was deemed not feasible. It's tax-related - basically, that solution would have left ThoughtWorks insolvent.
Enter Apax, an investment company in the "private equity" space. In layman's terms, they help filthy rich people secure their wealth - and maybe increase it moderately. But unlike the stock market world, they don't look at quarterly figures. Rather, they're interested in growth over a few years' time.
Not exactly the kind of thing you like to learn at any given point in time, and certainly not when you arrived at the place less than two weeks ago like myself. Does this mean the social experiment has failed? Not at all! By all accounts, Apax have promised (in a believable manner) to let ThoughtWorks continue to be run the way it has been for many years. Additionally, the leadership group announced this change in a very respectable manner: we (= all employees) received an elaborate email roughly 12 hours before the press statement, there were two video conferences with the entire leadership team throughout the day (the latter probably intended primarily for the US colleagues), and we had two Q&A sessions in the local office (one over lunch (provided!), one in the early evening intended primarily for the colleagues out on client sites in the area - again with dinner provided). In return for learning this before any of the clients and the general public, we were asked to remain silent on social media etc. until after the official announcement.
The sessions themselves proved to me even more what a special place this company is. Given my considerable time in the professional world, this isn't the first time I've witnessed this kind of deal. In fact, this was how I joined my previous employer. On every occasion so far, people worried about their own jobs first. Here, nobody asked that kind of question at either session, because the trust is unbroken! All questions were related to the details of the deal (the disclosure of which I'll leave to official channels), the future of the company, what's happening to the infamous 3 Pillars. That kind of thing. No concerns about redundancies or the like. The slave traders, er, recruiters will be out hunting ThoughtWorkers even more than they did before, but somehow I doubt they will be significantly more successful than they were in the past.
Yes, this kind of announcement will always stir some sadness among the staff. I dare say it's even somewhat sad for the industry as a whole. Heck, articles about this deal are even featured on HackerNews! Even my wife, who's not in the IT business at all, is feeling sad about this development, because she has met several ThoughtWorkers on a few occasions and can relate to this crazy place. ThoughtWorks are extremely respected for our achievements, values and culture. Being accepted here was a bit like finding the Holy Grail. I think one of my colleagues who has been in TW for a few years put it very well yesterday when he said: "I can't really put my finger on it, but a little of the magic died today."
That may be, but I think there's still more than enough magic left! I still consider ThoughtWorks an inspiration to the industry and an inspiring workplace with amazing people. We may just have fallen back into reality a bit. It's comparable to what one of the TV commentators said after the men's 100 metres final at this year's IAAF World Championships in London, which the undoubted superstar of the past decade didn't win: "Usain Bolt is human after all!" Well, in the same manner, ThoughtWorks are a business after all. But still an amazing workplace!

Wednesday, 2 August 2017

A new era in the journey of life

I am almost there. Less than two weeks from today I'll be joining a company that I've been admiring for many years. This dinosaur is going to be a ThoughtWorker! After nearly three insightful years at Big Blue, I'm ready to move on in order to work alongside some of the most amazing people this industry has to offer.
I've got to admit I was a bit surprised that they accepted a mere mortal like me. For those of you out there toying with the idea of applying, my advice is: just do it! The interview process itself is worth the effort, even if you don't make it. And if you're serious about it, here's one good hint: get to know the company upfront. If you get a chance, go to one of the events they host. Talk to them. They are as open as they claim to be.
I'm hoping to publish more frequently after the company switch. Stay tuned...

Sunday, 5 March 2017

SSL/TLS for Dummies (including myself)

One would think that if you've been in the business as long as I have, you know all this security stuff by heart. Well, I didn't really, until recently. Until my previous project, I always had "the ops guys" making sure that the network configs, certificates and "all that magic" were in place.
However, in that project I ended up having to provide a lot of information myself and even create certificates, deploy them in the correct places etc.
Although I had a rough idea of how these things work in general, I had to collect lots of information from different places on the web. I found it surprisingly complicated to get some basic answers, so I decided to scribble down my own guide for someone who finds themselves in my shoes in the future.

What's the difference between SSL and TLS?

There is none! We used to talk about SSL (Secure Sockets Layer), but the official term is now TLS (Transport Layer Security). The most widely used version is TLS 1.2, which was standardised in 2008. All versions before TLS 1.0 (aka SSL 3.1) have been retired. From a developer's point of view, that's all you need to know about this question.

What goes on in there and what are these certificates for?

I won't go into detail about protocols and encryption algorithms here, but rather try to describe what I feel an application developer should know about the mechanisms:
The basic idea is to ensure that the party you're talking to in a conversation is who they claim to be. If you go to a party and someone you've never met introduces themselves with "Hi, I'm Frank", how can you be sure that he's not actually Tim? You've basically got two choices: ask him for a form of ID that you can recognise, or hope that one of your trustworthy friends can confirm the name.
OK, the party example may be a bit pointless, but what about a business transaction? One example I like to use is what happens when you open a bank account. The bank official is legally required to verify your identity (and a few other things which we will ignore for the purpose of this article) in order to be able to process the application. Normally, you show some form of photo ID, e.g. a passport. Depending on the specific requirements in the country, you may have to show some proof of residence as well.
With the advent of pure online banking, this approach became complicated. How can you show your photo ID in a branch of a bank that hasn't got any? Well - you can't. However, the bank is still required to verify your identity. At this point I can only explain how this works in Germany, because I have undergone the procedure myself a few times over the past 10 years or so:
PostIdent procedure (please excuse the German texts)
The customer applying for the bank account fills in an online form (surprise, surprise!) which will enable the bank to make all necessary preparations for opening an account. However, instead of being able to work with the account right away, the applicant will have to print a personalised form and carry this to the post office - together with their identification document. They present this to the post office clerk who will then perform the actual ID check and send the form directly to the bank. Only after the bank receives this form will they complete the application process and activate the applicant's account.
So what's happening here? Effectively, the bank outsources the act of identity verification to the post office (which acts as the trustworthy friend from the party example). This process is known as Postident in Germany and is used by services other than banks as well.
What does all this have to do with TLS and certificates? Well, TLS works by the same principle: "I don't know you, but if someone I trust can confirm your identity, then I'm happy."
This confirmation is what certificates are for. When the web browser accesses a site that uses TLS, the server presents a certificate to the browser. This certificate carries the server's name and the information required to verify it with an authority.
In this context, authorities are institutions which authorise other institutions' certificates by signing them. In the Postident comparison, this would be the institution that issued the customer's passport - meaning the national government. In the world of the internet, the situation is actually a bit messier than one would think. The "authorities" in the most widely known TLS use case - the HTTPS protocol - are companies whose signatures are known to the web browser (or e.g. the JVM). You may have heard of VeriSign, who were bought by Symantec a while back; other well-known ones are Comodo or GlobalSign; even companies like Verizon or the German Telekom are on the list. Basically, whoever makes the browser decides whom to trust (at least initially; once they're out in the wild, every user can add whichever authority they wish).

When you access a website using a URL starting with https://, the browser applies some "crypto-magic" (which we won't explore in detail here) in order to check whether the site's certificate has been signed by one of the authorities in the browser's trust list. Usually there is a distinction between actual "root certificates" and so-called "intermediate certificates". The former are held in the browser's (or JVM's) list of trusted authorities; the latter are held by the different authorities and signed using one of their own roots. The intermediates are usually the ones that an authority in turn uses to issue certificates to its clients. However, the signing chain is included in the certificate presented to the browser, so the browser can always walk up the chain until it reaches a certificate in its trusted list.
So compared to the "real-life" example with Postident, the certificate presented by the website is the ID itself, while the certificates it's been signed with are the security features on it (the general shape, the authority's stamp, watermarks etc.).
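By the way, if you'd like to look at such a chain yourself, the JDK's keytool can display the certificates any HTTPS server presents (substitute whichever host you like):

    keytool -printcert -sslserver www.blogger.com

The output lists the site's own certificate first, followed by the intermediates it was signed with.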

OK. I got the idea, what about the mechanics?

As mentioned above, I'm not looking at encryption algorithms etc., but rather at how to create and use certificates.

The fundamental decision: Java Keystore or "regular" PEM files?

OK, calling this "fundamental" may be slightly exaggerated, as there are ways of converting one format into the other. On the last project we had both formats; which one was used for which service depended on the technology stack. It's unsurprisingly simple: whatever runs on the JVM will normally use a Java Keystore, everything else won't. From here onwards, I'll describe the use of both for the needed functionality.
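As a side note on the conversion: a common route from the PEM world to a Java Keystore goes via the PKCS12 format - roughly like this, assuming a signed certificate in demo.crt and the matching private key in privateKey.key (the file names are just the examples from this post):

    openssl pkcs12 -export -in demo.crt -inkey privateKey.key -name demo -out demo.p12
    keytool -importkeystore -srckeystore demo.p12 -srcstoretype PKCS12 -destkeystore keystore.jks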

Creating a keypair and a Certificate Signing Request

The keypair is the basis for any certificate. There is a public and a private key. I have yet to hear a good real-life analogy for these, but they may be viewed like a mailbox with a lock: the postman (or anyone else) can drop letters through the mail slot (= the public key), but only the owner has got the (private) key to actually access the mail once it's been delivered. Not really the same thing, but I'm open to suggestions for better examples.
Anyway, let's get down to business:

The PEM version

In the "regular" world, the certificate and private key are stored in two separate files:
openssl req -out sign_this.csr -new -newkey rsa:2048 -nodes -keyout privateKey.key
This will ask you a few questions and generate a certificate signing request and a private key file. The values for the fields will depend on your specific organisation, but one that you will always need to fill in is the so-called "Common Name", which should be the full domain name to be identified by the certificate (e.g. www.blogger.com).
The CSR file will require signing by a recognised certificate authority. How to achieve this will depend on your particular organisation and the authority they use. There may be an email exchange or a direct upload/download through a special website.
The result of a signing request is a certificate file (usual endings are .pem and .crt). What you do with this file depends on the service that uses it. Normally you will place it together with the private key somewhere on the file system and add this location to a configuration file.
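Before deploying the file, it's worth a quick inspection to confirm that the Common Name and the issuer are what you expect (demo.crt stands in for whatever your authority returned):

    openssl x509 -text -noout -in demo.crt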

The JKS version

In the Java Keystore world, creating the keypair and the CSR are two separate steps:
  1. Create the keypair:
    keytool -genkey -alias demo -keyalg RSA -keystore keystore.jks -keysize 2048
  2. Use the keypair to create a signing request:
    keytool -certreq -alias demo -keystore keystore.jks -file sign_this.csr
As above, you will have the CSR file signed by the appropriate authority for your organisation. Once you have obtained the certificate, you will import it into the keystore like this:
keytool -import -trustcacerts -alias demo -file demo.crt -keystore keystore.jks
As above, what you do with the keystore now depends on your particular service. Again, as above, you will place it somewhere in the file system and add the location to some configuration file.

My organisation is very slow in everything./I just want to set something up for early development purposes.

Great news! You can sign certificates yourself! (Just don't expect anyone outside your organisation to accept them.) I could list all the necessary steps for creating and running a certificate authority here - or I can just refer you to Jamie Nguyen's excellent tutorial. This really answered all the questions for me.
As for the client side, in the browser you will have to register a security exception in order for it to accept such a certificate. As for the JVM, very often you will find advice such as this one which tells you to import the self-signed root (or intermediate) certificate into the JVM's own trusted list. This will work, but in my opinion it shouldn't be your way of handling things if you can avoid it. Every time you update your Java installation, you will have to keep this step in mind. Not only that, but you are really undermining Java's security architecture, which is intended to protect your system. Rather, create an isolated keystore for the software that really must accept the certificate and import the certificate into that one:
keytool -import -trustcacerts -alias root -file intermediate.crt -keystore keystore.jks
Again, there are various ways of using this keystore in your code, depending on the libraries and programming environment. I will explore a few that I've come across in a later post.
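As a little teaser, here's a minimal sketch of the plain-JDK route: loading such an isolated keystore into an SSLContext so that only your own code trusts the self-signed certificate (file name and password are placeholders):

    import java.io.FileInputStream;
    import java.security.KeyStore;

    import javax.net.ssl.SSLContext;
    import javax.net.ssl.TrustManagerFactory;

    public class IsolatedTrustStore {

        public static SSLContext createSslContext() throws Exception {
            // Load the keystore we imported the self-signed certificate into.
            KeyStore trustStore = KeyStore.getInstance("JKS");
            try (FileInputStream in = new FileInputStream("keystore.jks")) {
                trustStore.load(in, "changeit".toCharArray());
            }
            // Build trust managers from it - the JVM's global trusted list stays untouched.
            TrustManagerFactory tmf =
                    TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(trustStore);
            // The resulting context trusts only the certificates in our keystore.
            SSLContext sslContext = SSLContext.getInstance("TLS");
            sslContext.init(null, tmf.getTrustManagers(), null);
            return sslContext;
        }
    }

The resulting context can then be handed e.g. to an HttpsURLConnection via setSSLSocketFactory(sslContext.getSocketFactory()).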

Surely, there are more commands than the ones in this post.

Of course. The extremely helpful SSL Shopper has got one page for OpenSSL and another one for the Java keytool.

Wednesday, 20 April 2016

The Facebook Experiment

So after years of denying my need for this little website, I finally joined Facebook. The main reason was that here in the UK it seems to be an even more integral part of society than it already is in Germany. Particularly smaller organisations seem to use it as their primary communications channel. One of these is our daughter's pony club. For the first few months I saw no need to open a Facebook account - after all, my beloved wife already had one and was making good use of it. However, at some point I realised there may be a benefit in being notified directly without her having to act as a relay.
So now I'm on Facebook. I've already added a few people I've come across in my life - in some cases they had sent me invitations years ago, and FB had still stored these!
My first impression is not too good though. I've been on LinkedIn and GooglePlus for several years now, and I must say I absolutely miss a practical way of categorizing my connections. When it started, Plus was praised for its "Circles". LinkedIn offers the option of associating connections with "tags" - not quite as user-friendly as the circles, but it still works quite well. And this makes a lot of sense. All of us are part of different groups that most often share only one individual as their common denominator - yourself. I think the sociological term for this is "peer groups". I wouldn't want to share a pony club story with everyone I went to school with 20 years ago.
Plus (maybe this is a German thing) I don't think of all of my connections as friends. Many are really only acquaintances that I still like to stay in contact with - not the same as a friend with whom I share more personal things. When you share something with all your Facebook "friends", you may as well just share it publicly. It makes no practical difference.
I know that Facebook has friends lists (and I've already started filling them), but these are by far not as easy to use as the Plus circles or the LinkedIn tags. Most of the time, when you think of a person, you associate them with a group through which you know them - not the other way around. On top of that, many of your connections belong to several categories. One example: over the course of my life, I've been a member of several chess clubs. So, naturally, I met a lot of different people this way. I have a category for the Werder Bremen Chess department, another one for the Hamburg Chess Club of 1830, one for SC Königsspringer Alzenau and so on. All of these connections are part of the overall category "chess"; some are friends, some are close friends, many rather count as acquaintances. So in technical terms, your personal connections form an "m:n relationship". Plus and LinkedIn reflect this reality quite closely. Facebook still doesn't - years after Plus went live. I remember that at the time the circles were absolutely hyped - now I understand why. This is really very basic functionality.
As for other aspects, let's just say that I only started over the weekend. However, so far I haven't found anything which makes Facebook superior to Plus - speaking from a functional point of view. (I know the user base is supposedly larger.)
So apart from a decent categorization functionality, I haven't found a real difference between Facebook and Plus. Please let me know your opinions on this. And please try to be objective. No fanboy flamewars...

Rediscovering Maven's assembly plugin

Our team was faced with the following (actually quite simple) task: extract all configuration files from the WAR (or rather the jars inside the WAR) and deploy them separately, while at the same time allowing the lazy, er, efficient developers to just continue using mvn tomee:run like nothing changed. The reasoning behind this separate deployment was that we want to be able to change the configuration without having to rebuild the WAR.

Since our project is fully Maven-based, it seemed quite obvious that the solution must involve the assembly plugin. Personally, I had only had one experience with it from a few years back, and it sure has evolved since then. Effectively, all goals except for one (+ help) have been deprecated. WOW! Nevertheless, it's still extremely powerful. So powerful that reading the documentation can lead to quite a bit of confusion and headache.

Don't get me wrong, I haven't found anything that's really missing. It's just overwhelming, because the assembly descriptor can do so many useful things.
We came up with a solution which I have outlined in a simple demo project in my GitHub account. The basic steps were:

  1. Extract the config files into their own separate Maven module.
  2. Add this Maven module as a dependency with scope test where needed (this will normally be the module they were extracted from). In the example project, this is happening in modularised-webapp-library.
  3. Don't add the configuration as a dependency to the war project. Instead, add it as a server library to the tomee plugin as demonstrated here.
  4. Create another Maven module only for the assembly, which has dependencies on the war and the configuration module.
  5. In the assembly descriptor, define two separate dependency sets: the one for the configuration module unpacks it and removes the Maven metadata, while the one for the war only copies the war directly (see the sketch after this list).
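For illustration, a descriptor along these lines does the trick - a sketch only, with made-up group/artifact ids (the real thing lives in the demo project):

    <assembly xmlns="http://maven.apache.org/ASSEMBLY/2.0.0">
      <id>config-and-war</id>
      <formats>
        <format>dir</format>
      </formats>
      <includeBaseDirectory>false</includeBaseDirectory>
      <dependencySets>
        <!-- Unpack the configuration module and drop the Maven metadata -->
        <dependencySet>
          <includes>
            <include>com.example:modularised-webapp-configuration</include>
          </includes>
          <unpack>true</unpack>
          <unpackOptions>
            <excludes>
              <exclude>META-INF/maven/**</exclude>
            </excludes>
          </unpackOptions>
          <outputDirectory>config</outputDirectory>
        </dependencySet>
        <!-- Copy the war as it is -->
        <dependencySet>
          <includes>
            <include>com.example:modularised-webapp</include>
          </includes>
          <unpack>false</unpack>
          <outputDirectory>.</outputDirectory>
        </dependencySet>
      </dependencySets>
    </assembly>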
That's really it. Took us quite some effort to find this solution, but I really like the overall simplicity.

The architect is happy, because he can edit the files as he wishes without redeployment. The developers are happy, because the tomee plugin is still running the way we're used to.

(And I'm happy, because I can show off some knowledge again. ;-) )

What is your experience with building assemblies?

Friday, 26 February 2016

JavaFX with Spring (and a few other technologies...)

A few weeks ago I was asked to develop a little piece of software to solve a real-life issue: a teacher with a worldwide audience regularly sends out standard emails reminding his listeners of the next lesson, including the dates and times in different places around the world, and he asked for a way of reducing the time he spends on creating these emails. So far, he had used one of the many websites out there to convert the date and time in his timezone into the respective dates and times around the world - but always only one at a time - and then copied the results into a standard text (again, obviously, one by one). Due to the mass email service he uses, calendar invites were not an option.
So I spent some time researching and, to my own surprise, didn't find a free service or piece of software that would help reduce his workload. Thus, I wrote one myself. The result is visible on my GitHub account. In this post, I'd like to give an overview of how I used what I consider the main technologies for this little project:

JavaFX

No big surprise here, this is the only choice you should make nowadays when using Java for desktop development. Maybe it's the fact that I'm old-school, but I just did everything in pure Java without any FXML. It comes more naturally to me. The UI does the job, but could certainly use some improvement - in particular the ZoneIdSelectionDialog. It's way too large and lacks a nice structure. If anyone has ideas how to improve this, please go ahead and fork me.

Spring

It's a Java SE project, so there's no native CDI. Therefore I added Spring into the mix. The annotations work like a charm. The aforementioned ZoneIdSelectionDialog is also a Spring component - which shows what, from a developer's point of view, is the only downside of using Spring (or any other dependency injection framework) within a JavaFX application:
The lovely Application class follows a very straightforward pattern when launched:
  1. Constructs an instance of the specified Application class.
  2. Calls the init() method.
  3. Calls the start(javafx.stage.Stage) method.
  4. Waits for the application to finish, which happens when either of the following occur:
    • the application calls Platform.exit()
    • the last window has been closed and the implicitExit attribute on Platform is true
  5. Calls the stop() method
So far, so good. Now think for a moment: where would you want to instantiate your Spring beans? Sounds like it should be part of init(), doesn't it?
The bad news is: this doesn't work when you use Spring (or any other DI framework) to initialize a JavaFX component, because these may only be constructed on the so-called JavaFX Application Thread. If you've never heard of it, this is where all the UI beauty is actually executed (for Swing veterans like me: it's the equivalent of the Event Dispatch Thread).
So this leaves us no choice but to initialize the Spring context first thing in the start(javafx.stage.Stage) method - which feels unnatural, but is a technical restriction. Nevertheless, the gain outweighs this.
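To make this concrete, here's a minimal sketch of the pattern. AppConfig and MainView are hypothetical stand-ins for your own Spring configuration class and root view bean (with MainView.getRoot() assumed to return the root Parent node):

    import javafx.application.Application;
    import javafx.scene.Scene;
    import javafx.stage.Stage;

    import org.springframework.context.annotation.AnnotationConfigApplicationContext;

    public class DemoApp extends Application {

        private AnnotationConfigApplicationContext context;

        @Override
        public void start(Stage primaryStage) {
            // Created here rather than in init(), because some beans construct
            // JavaFX controls and therefore must live on the FX Application Thread.
            context = new AnnotationConfigApplicationContext(AppConfig.class);
            MainView mainView = context.getBean(MainView.class);
            primaryStage.setScene(new Scene(mainView.getRoot()));
            primaryStage.show();
        }

        @Override
        public void stop() {
            // Give Spring a chance to clean up when the FX runtime shuts down.
            context.close();
        }

        public static void main(String[] args) {
            launch(args);
        }
    }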

The Java Time API

A great addition to Java 8, and probably the one which generated the biggest buzz aside from Lambdas and Streams - I found it very nice to use. This application only shows a small portion of it, but that already highlights a few key advantages over the much older Date and Calendar classes:
  • Instant and the other "value" classes in the package are immutable. This eases their usage and automatically makes them thread safe.
  • The same holds for the DateTimeFormatter - which I consider even more valuable. I've stopped counting how often in my career I stumbled over strange behaviour which I traced back to unsafe usage of SimpleDateFormat, e.g. as a constant. This is bound to break in a multithreaded environment, as that class isn't thread safe. (See the sketch after this list.)
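As a small sketch of both points - the lesson date, zones and format pattern are invented for this post, not taken from the actual application:

    import java.time.ZoneId;
    import java.time.ZonedDateTime;
    import java.time.format.DateTimeFormatter;

    public class LessonTimes {

        // Safe to share as a constant: DateTimeFormatter is immutable and
        // thread safe, unlike the infamous SimpleDateFormat.
        private static final DateTimeFormatter FORMAT =
                DateTimeFormatter.ofPattern("EEEE, d MMMM yyyy, HH:mm");

        public static void main(String[] args) {
            ZonedDateTime lesson = ZonedDateTime.of(2016, 3, 7, 19, 0, 0, 0,
                    ZoneId.of("Europe/London"));
            for (String zone : new String[] {"America/New_York", "Asia/Tokyo", "Australia/Sydney"}) {
                // Same instant, expressed in the listener's local time zone.
                ZonedDateTime local = lesson.withZoneSameInstant(ZoneId.of(zone));
                System.out.println(zone + ": " + FORMAT.format(local));
            }
        }
    }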

Apache Velocity

OK, this library's usage here is really a no-brainer. I'm using it in a very basic way to replace two tokens with values which are computed in the application. Smarter usage is probably possible, but I wanted to give the user the option of not using a template at all. I know that Velocity can do a lot more, but I really had no need for anything else here.
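In case you've never seen Velocity at work, the basic mechanics boil down to a few lines. The template text and token names below are invented for this sketch - the real ones differ:

    import java.io.StringWriter;

    import org.apache.velocity.VelocityContext;
    import org.apache.velocity.app.VelocityEngine;

    public class TemplateDemo {

        public static void main(String[] args) {
            VelocityEngine engine = new VelocityEngine();
            engine.init();

            // The $tokens in the template string are filled from the context.
            VelocityContext context = new VelocityContext();
            context.put("topic", "the Java Time API");
            context.put("times", "London 19:00 / New York 14:00 / Tokyo 04:00");

            StringWriter writer = new StringWriter();
            engine.evaluate(context, writer, "email-template",
                    "Our next lesson on $topic takes place at: $times");
            System.out.println(writer);
        }
    }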

Lessons learned

  1. Use JavaFX! It's even better than you think.
  2. DI fits in nicely with FX if you keep in mind that you must use it in a slightly "unnatural" way.
  3. The Java Time API is just great. If you haven't done so yet, get acquainted with it. In my view, it's now about time (please excuse the pun) to deprecate Calendar and SimpleDateFormat.
Questions? Criticism? Ideas for improvement? Please feel free to leave any of these in the comments section or as mentioned, fork me on GitHub.