Sunday, June 28, 2009

Performance of Synchronization primitives in Jython

As I pointed out in my previous blog post about JavaOne, there are a few points where the performance of Jython is much worse than that of the “competitors”. In this post I provide my analysis of the performance of Jython in the micro benchmark I used in my presentation. I will compare Jython to JRuby here for three reasons. The first reason is obvious: Python and Ruby are quite similar languages. That leads to the second reason: the performance of JRuby and Clojure was not that different in this comparison, so it makes sense to compare against the more similar language. Finally, the third reason: JRuby is a darn good, highly optimized dynamic language implementation, which makes it a good baseline for comparison.

The first diagram shows Jython and JRuby with no contention at all. We can see clearly, even in this uncontended case, that Jython needs a bit more work in the performance department. I still feel excited about this though. The recent release of Jython, version 2.5, focused heavily on Python compatibility. The next versions are where we start focusing on performance. The work towards Jython 3.0 is even more exciting here, but that excitement will be saved for a later post.

The second diagram shows another version of pretty much the same code, but with a shared, contended resource. Here we find that Jython performs much worse than JRuby: while the JRuby version is about 10 times slower with a contended shared resource, Jython is almost 100 times slower. Why is this?
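For readers who want to reproduce the general shape of this experiment, here is a minimal Python sketch of this kind of micro benchmark: a counter guarded by a lock, exercised by one thread (uncontended) and by several (contended). This is my own illustration, not the actual benchmark code from the presentation, and all names in it are made up.

```python
import threading
import time

def bench(n_threads, iterations):
    """Time n_threads threads each doing `iterations` lock-guarded increments."""
    lock = threading.Lock()
    state = {"count": 0}

    def worker():
        for _ in range(iterations):
            with lock:  # the contended region when n_threads > 1
                state["count"] += 1

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # Sanity check: the lock really did serialize the increments.
    assert state["count"] == n_threads * iterations
    return time.time() - start

uncontended = bench(1, 50000)
contended = bench(4, 50000)
```

The interesting number is the ratio between the two timings on a given implementation, not the absolute times.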

The effect of the implementation

I have already mentioned the first reason for JRuby performing better than Jython, and it applies here as well: the JRuby team has spent more time on performance tuning than the Jython team has. This is something we will be able to improve, and in fact it plays a bigger part than one might think.

The JRuby Mutex implementation is written in about a page of plain old Java (not that important) that gets compiled to a small number of simple bytecodes (more important). Because a closure or block in JRuby is just an object with a method containing the compiled bytecode (rather than a function that performs argument parsing before the body runs), the dispatch from the synchronization code to the guarded code is a simple virtual call. The call into the synchronization code costs two VM-level calls, since it is a JRuby method: first the call to the dispatch logic and argument parsing, then the call to the actual synchronization code. Much of the work of the first call is cached by JRuby's call site cache on the first invocation, so that subsequent invocations are much faster.

The Jython implementation, on the other hand, has no call site caching, so each call does full dispatch and argument parsing. The call structure is also different. A with statement in Python is just a try/except/finally block with simplified syntax, where the context manager (a lock in this case) is called for setup before the block inside the with statement runs, and invoked again for teardown after the block completes.
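Concretely, the with statement expands into explicit calls on the context manager. The following is a simplified sketch of that expansion for a lock (the real semantics also pass exception details to __exit__, which may suppress the exception):

```python
import threading

lock = threading.Lock()
log = []

# The statement:
#
#     with lock:
#         log.append("guarded work")
#
# expands to roughly the following:

mgr = lock
mgr.__enter__()                     # setup: acquires the lock
try:
    log.append("guarded work")      # body of the with block
finally:
    mgr.__exit__(None, None, None)  # teardown: releases the lock

assert log == ["guarded work"]
assert not lock.locked()            # the lock was released
```

Every one of those dunder calls goes through Jython's full dispatch machinery, which is where the overhead discussed above comes from.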

In shorter terms: JRuby has much lower call overhead on the VM level. This makes a difference because a call is a call, and even a cheap one imposes some overhead. It also matters because the JVM is better at making optimizations across a few levels of calls than across several. Still, this only explains a small part of the difference seen in the chart.

The incredible benefit of closures

Most JVMs have built-in support for analyzing and optimizing locks. To do that, the JVM needs to see the entire synchronized region in one compilation unit, i.e. both the lock instruction and the unlock instruction need to be in the same compilation unit. Because code analysis is super-linear, there is a limit to how big a compilation unit is allowed to be. Initially a compilation unit corresponds to a Java bytecode method, but it may grow through inlining (up to the compilation unit size limit). The key to getting good performance from synchronized regions is therefore to have both the lock and unlock instructions in the same method, or at least in a shallow call hierarchy from the same method.

Compare the following two snippets of pseudo bytecode (indented regions are the bodies of the invoked methods).

JRuby:

lock()
invoke closure_body
    // do stuff...
unlock()

Jython:

invoke __enter__
    Do Jython dispatch
        Do Jython to Java dispatch
            Do Java reflection dispatch
                lock()
// do stuff...
invoke __exit__
    Do Jython dispatch
        Do Jython to Java dispatch
            Do Java reflection dispatch
                unlock()

The key insight here is that in the first snippet the lock and unlock instructions are in the same compilation unit, while in the second they are in two different call paths. The Jython dispatch logic is three levels of calls, the Jython-to-Java dispatch logic is two levels, and the reflection dispatch is a number of calls as well. Not only that, but there is quite a lot of code in those call paths: parsing of arguments, setting up call frame reflection objects, and more. Add all this together and there is no chance for the JVM to see both the lock and unlock instructions in the same compilation unit. In the JRuby implementation, by contrast, they are in the same compilation unit before any inlining even takes place.

Having un-optimized locks makes a huge difference for applications running on the JVM. This, together with the fact that JRuby is more optimized in general, accounts for most of the difference in execution time in these examples. If we could fix this, we would get a substantial performance improvement in Jython.

For further details on how we intend to improve this situation in Jython, you are very welcome to attend my presentation on "A better Python for the JVM" at EuroPython, this Tuesday (June 30, 2009) at 15:30. I will also continue posting here on specific improvements and plans, so stay tuned on Twitter or subscribe to my feed.

Disclaimers: The understanding of a JVM in this post is mostly based on the HotSpot JVM. Other JVMs might work slightly differently, but the basics should be at least similar.
The descriptions of both Jython and JRuby are somewhat simplified; the synchronization in JRuby, for example, is even slightly better optimized than what I have outlined here, but the full description would make the post overly complicated. The essentials are still the same.
In my presentation at JavaOne some numbers suffered from classic micro benchmark errors; ask me if you want to know more about that.

Friday, June 26, 2009

Five years later

Me with a large beard and a musketeer's outfit

Five years ago I was in a committee responsible for greeting new students in computer science and engineering at my university. We organized activities and parties, and made sure that the new students got their studies up to a pace that would help them through their education. We did all of this while putting on a theater act lasting throughout the entire greeting period of two weeks, for which I grew a large beard.

At about this time of the year there was a team building exercise where we were supposed to draw a picture of how we imagined our lives five years from then. I drew the view of a city skyline from a high floor hotel room. My description was that I imagined myself traveling a lot for work. That I worked on interesting, cutting edge software projects around the world. And that I made good money doing so, not really spending that much, since I didn't really see anything I needed to spend money on when most of my time would be spent working.

Three weeks ago, when I looked out towards the San Francisco bay from my 25th floor hotel room at the Intercontinental, where I stayed during this year's JavaOne, I remembered that drawing. The view from my room was a lot like the view in the drawing. The more I thought about it, the more I realized that the vision I had five years ago was reality now.

The thing I love the most about this is of course that the part about working on cool, exciting, cutting edge projects is true. Both of the projects that get the main part of my attention fit into this category. Neo4j is an up and coming database where I have the privilege of doing tool and integration work, as well as advanced data persistence programming. Jython gives me an opportunity to work on advanced JVM technology and cutting edge techniques for optimizing code. All together a good mix of technologies, challenges, and people.

One part of my vision, I'm sad to say, has yet to come true: I am not making as much money as I expected. In fact, trips and conferences are rather expensive, though apart from that I spend about what I expected on other things. This is a minor detail, however; I'd much rather work on projects I enjoy than be bored with a high salary.

I also expected that I would be single, since I didn't expect anyone to put up with my traveling, working and being away a lot of the time. I am very happy that I got this prediction wrong. My girlfriend Sara is wonderful, and I am blessed to be sharing my life with her.

Thursday, June 25, 2009

Jython 2.5 - Why you should upgrade

Last weekend I spent some time with the rest of the Neo4j team at Neo Technology for our Code Camp weekend. Among the things I worked on were our Python bindings for Neo4j. They obviously work with the recently released Jython 2.5, but also fairly well with CPython 2.5 (or later), with the current caveat that you have to install JPype on your own (I am working on replacing JPype with something that is maintained, more lightweight, and works with Python 3.x). I also want these Python bindings to work with the previous version of Jython (2.2), for those who for some reason cannot upgrade. So for the first time in a while I used a version of Jython that was not the very latest. This is my summary of that experience.

When comparing Jython 2.5 with Jython 2.2, it suddenly becomes obvious how much work the Jython team (myself included) has put into making this a fully compatible implementation of the Python programming language. The fact that Jython now runs Django, Pylons, SQLAlchemy, and a ton of other things is genuinely impressive. And I am more than a little proud of the part I played in making this happen, even if it was only a small part. Jython has, with the step from 2.2 to 2.5, graduated from a system for scripting Java applications with Python to a system on which you can build full applications.

The first thing I missed when testing Neo4j with Jython 2.2 was virtualenv. I run so many different unstable applications and libraries on different versions of Python and Jython that messing something up on a daily basis is inevitable. A lot of these have different dependencies, dependencies on conflicting versions of the same thing, and other horrible constraints. And of course I patch and rebuild my Jython at the same time as well, meaning that all libraries I've installed directly into the Jython dist directory get wiped (I've lost a lot of work this way). In all these situations virtualenv is a saving angel, and it doesn't work with Jython 2.2. (If you have not tried virtualenv yet, do so now! I was a late adopter, but it was love at first use.) This is a pain - Upgrade to Jython 2.5.

The second thing I was screaming for in Jython < 2.5 was of course easy_install, if for no other reason than to install virtualenv. How about distutils? Nope. Jython 2.2 has no pleasant way of installing libraries for you - you are on your own. Fail. The same is true for the Python bindings for Neo4j: if you use Jython 2.2 they will still download the required Java libraries for you, and you will be able to use them, but installing them into your sys.path is up to you. This is a pain - Upgrade to Jython 2.5.

I lied before. The first thing I missed was the Python 2.5 syntax, in particular the with statement. This was in fact the reason I joined the Jython project in the first place: to get Python 2.5 syntax support for Jython (and I succeeded in prototyping it during that first Google Summer of Code in 2007). Just compare the following two pieces of code, and I think you will get my point:

with neo.transaction:
    with_node = neo.node(language="Python", feature="with statement")
    example_node = neo.node(language="Python", example="Using the with statement")
    example_node.USES(with_node)

tx = neo.transaction.begin()
try:
    try:
        try_node = neo.node(language="Python", feature="try block")
        example_node = neo.node(language="Python", example="Not using the with statement")
        example_node.USES(try_node)
    except:
        tx.failure()
        raise
    else:
        tx.success()
finally:
    tx.finish()

It should be obvious which one is beautiful and which one is not. The with statement is one nice piece of syntactic sugar. Not having it is a pain - Upgrade to Jython 2.5.
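The nice part is that the ugly version only has to be written once, inside the context manager. Here is a hypothetical sketch of what such a transaction object could look like; this is my illustration of the pattern, not the actual implementation in the Neo4j bindings:

```python
class Transaction(object):
    """Sketch of a with-statement-friendly transaction (hypothetical,
    not the real Neo4j bindings code)."""

    def __init__(self):
        self.outcome = None

    def __enter__(self):
        self.outcome = "begun"   # stand-in for tx = ...begin()
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Mirrors the try/except/else/finally pattern above:
        # mark failure on exception, success otherwise, always finish.
        self.outcome = "failed" if exc_type else "committed"
        return False  # do not suppress the exception

tx = Transaction()
with tx:
    pass  # do guarded work here
assert tx.outcome == "committed"
```

Every user of the with statement then gets the correct failure/success/finish handling for free.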

Ok, so there is still a lot of work to be done on Jython, mainly on performance. The focus up until now has been compatibility, and we've done a good job at that. In the next major version of Jython you should expect a substantial performance improvement. Not that performance is bad now, but we know that we can do better. More on that in a later post.

Thursday, June 18, 2009

JavaOne 2009 from my perspective

It has been a week since I got back to Sweden after my trip to San Francisco for what I am sad to say was my worst JavaOne so far. Don't get me wrong, JavaOne was good. There were a lot of good sessions, and as always it was nice to meet my friends in the community. But the two previous years were better. Much of it can be blamed on the economy, and the rest can be blamed on me. To a large extent, the reason I didn't get the JavaOne experience I would have liked was that I was too busy. I had a deadline waiting for me on a project at work when I got home, and had to work quite a lot during my JavaOne week. But the fact stands: JavaOne was a lot smaller this year, only about 9,000 attendees, and you notice that.

These are the highlights from the sessions I attended at CommunityOne and then JavaOne:

  • Beyond impossible: How JRuby has evolved the Java Platform by Charles Nutter. Charles is a kickass developer and has become a really good speaker as well. It's amazing what he has done with JRuby, and this talk was a summary of the highlights, including the fact that the JRuby team have implemented their own regular expressions library and POSIX layer for doing system calls from the JVM. All of which adds up to making JRuby the fastest complete implementation of Ruby.
  • Return of the Puzzlers: Schlock and Awe by Joshua Bloch and Neal Gafter. All I can say is that all the digging around on the low levels of the JVM that I have done has paid off: I had the right answer for 7 out of 7 Puzzlers, much better than the first time I attended a Java Puzzlers session.
  • Toward a Renaissance VM by John Rose. John talked about Invokedynamic and Method Handles. It was all familiar stuff, but a good introduction for the odd people who do not spend most of their time implementing languages for the JVM.
  • Meet the Java Posse. I've known the guys of the Java Posse for a while, but this is the first time that I was not doing something completely different during their BoF at JavaOne. Not much to say, listen to the podcast.
  • JSR 292 Cookbook mostly by John Rose. This is what I think was the eye-opening session about Invokedynamic and MethodHandles for most people: hands-on examples showing all of the JSR 292 goodness in action. More people should have attended. Everyone I talked to after the session who was not familiar with JSR 292 beforehand realized that it is useful for far more than just implementing languages.
  • How to write a distributed garbage collector by Ari Zilka and Saravanan Subbiah from Terracotta. This was the last session of the last day and probably the best session at JavaOne this year. Ari and Saravanan talked about how garbage is detected and collected in the distributed environment that is Terracotta when all references to an object could be from a different node. Very interesting stuff, I need to take a deeper look at Terracotta at some point soon.

There were also some noteworthy sessions that I missed while I was working:

  • Java Platform Concurrency Gotchas by Alex Miller. A smart guy talking about interesting stuff, too bad I missed that one.
  • The Ghost in the Virtual Machine: A Reference to References by Bob Lee. I have a feeling that there are still a few things that can be done using Weak, Soft and Phantom references, that I don't know about yet. I would also have liked to hear what he had to say about Ephemerons. Are they coming to the JVM soon btw?

Then there are the two presentations I gave. A BoF on Interface Injection entitled "Language Interoperability on the JVM Made Simple" and a Technical session entitled "Exploiting Concurrency with Dynamic Languages".

Language Interoperability on the JVM Made Simple

Since this was a Birds of a Feather session I divided it into two main sections: in the first half I gave an introduction to interface injection, the status of the implementation, and some ideas as to what you can do with it. The second half of the session was spent on discussions about what people want from interface injection, how they want to use it, and how it should work.

The short summary is:

  • I have implemented the injection mechanism.
  • I have designed a proposal API for how the injection process should work on the Java level.
  • I have wired up interface injection to invokeinterface; there are still a few more places that would need to be able to call the injection mechanism.
  • The Reflection API for injected interfaces is still unspecified.
  • A lot of people want only explicit injection of interfaces - that is, for interfaces to be injected only when an inject() method is invoked in some API. This can be emulated by setting a flag in the injector when that method is invoked, and unsetting it at the end of the method.
  • There was also some concern about the fact that methods on an injected interface get implemented automatically by methods with the same name and signature in the target class. I agree that it would be better if interface implementations could be kept separate for each interface in the class file format. But I don't think this problem will be as big as people fear, since the methods have to match exactly on return type as well as parameter types.
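The flag trick for emulating explicit injection can be transliterated to Python to show the shape of the idea. The real mechanism lives inside the JVM, so every name below is made up purely for illustration:

```python
import threading

class ExplicitInjector(object):
    """Sketch of emulating explicit-only injection with a flag.
    All names are hypothetical; the actual mechanism is JVM-level."""

    def __init__(self):
        self._flag = threading.local()
        self.injected = []

    def wants_injection(self, cls):
        # The callback the runtime would consult before injecting:
        # only answer yes while an explicit inject() call is running.
        return getattr(self._flag, "active", False)

    def inject(self, cls):
        self._flag.active = True           # set the flag...
        try:
            if self.wants_injection(cls):
                self.injected.append(cls)  # stand-in for real injection
                return True
            return False
        finally:
            self._flag.active = False      # ...and unset it at the end

injector = ExplicitInjector()
assert injector.wants_injection(int) is False  # implicit path: refused
assert injector.inject(int) is True            # explicit path: allowed
```

Using thread-local state for the flag keeps one thread's explicit inject() call from accidentally enabling implicit injection on another thread.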

The staff in room 102/103 were really helpful; they offered to rearrange the chairs and tables to better host the discussion I wanted to encourage about interface injection. The same cannot be said about the staff in room 104, where I had my second presentation. Although very friendly, they were not very professional. First of all, my session was supposed to be recorded, but after the session I was informed that they had forgotten to record the slides and transitions. I gave them my slides on a memory stick and they said that they would match them up to the best of their ability and send the result to me for review; I have yet to hear from them. On top of that, some members of the staff were constantly talking behind the stage, leaving me very distracted. Highly unprofessional of them. I was so surprised by this that I didn't even tell them to shut up.

Exploiting Concurrency with Dynamic Languages

I spent too much time at JavaOne preparing for my second talk, about how languages other than the Java programming language lend themselves better to expressing concurrent programs on the JVM. I really should have prepared the talk much more before I left Sweden; I believe I could have done a better presentation. The staff in the room are to blame to a large extent for the fact that I wasn't satisfied with my performance, but half of the blame is on me. Still, the few reviews I've heard have been good: basically that the topic was good, that I had interesting ideas and made good points, but that the talk was too short, although with good Q&A. A fair review in my opinion. It will be interesting to see the review cards when they get sent out to the speakers.

What is more interesting, however, is what I was able to conclude in my talk. It should come as a surprise to no one that closures make it much easier to express concurrent programs, and to encapsulate best practices and patterns in libraries. Other higher-level constructs help here as well; please have a look at my slides for reference. I also found that Jython has a lot of room for improvement, but more on that in another blog post.
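To make the encapsulation point concrete, here is a minimal Python sketch of the idea: the locking pattern is captured once in a higher-order function, and callers just hand it a closure. This is my own illustration of the general technique, not code from the talk:

```python
import threading

def synchronized(lock, body):
    """Run body() while holding lock: the pattern is written once,
    so callers never touch the lock directly."""
    with lock:
        return body()

counter_lock = threading.Lock()
counter = [0]

def increment():
    counter[0] += 1
    return counter[0]

# Eight threads share one counter; each passes its work as a closure.
threads = [threading.Thread(target=synchronized,
                            args=(counter_lock, increment))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter[0] == 8  # the lock serialized all eight increments
```

Swap the body of synchronized() for retry logic, a transaction, or a thread pool submission, and every caller inherits the improvement without changing a line.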