As I pointed out in my previous blog post about JavaOne, there are a few areas where the performance of Jython is much worse than that of the “competitors”. In this post I provide my analysis of the performance of Jython in the micro benchmark I used in my presentation. I will compare Jython to JRuby here, for three reasons. The first reason is the obvious one: Python and Ruby are quite similar languages. This leads to the second reason: since the performance of JRuby and Clojure did not differ much, it makes sense to compare against the most similar language. Finally, the third reason: JRuby is a darn good, highly optimized dynamic language implementation, which makes it a good baseline for comparison.
The first diagram shows Jython and JRuby with no contention at all. We can clearly see, even in this picture, that Jython needs a bit more work in the performance department. I still feel excited about this, though. The recent release of Jython, version 2.5, focused heavily on Python compatibility. The next versions will be where we start focusing on performance. The work towards Jython 3.0 is even more exciting here, but that excitement will be saved for a later post.
The second diagram shows another version of pretty much the same code, but with a shared, contended resource. Here we find that Jython performs much worse than JRuby: while the JRuby version is about 10 times slower with a contended shared resource, Jython is almost 100 times slower. Why is this?
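To make the scenario concrete, here is a minimal Python sketch of the shape of such a benchmark. It is not the actual benchmark from my presentation; the thread count, iteration count and the shared counter are just placeholders for illustration.

from __future__ import with_statement   # needed on Python/Jython 2.5
import threading
import time

WORKERS = 4           # hypothetical number of threads
ITERATIONS = 100000   # hypothetical iterations per thread

lock = threading.Lock()   # the shared, contended resource
counter = [0]

def work():
    for _ in range(ITERATIONS):
        with lock:        # every thread competes for the same lock
            counter[0] += 1

threads = [threading.Thread(target=work) for _ in range(WORKERS)]
start = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
print("elapsed: %.3f seconds" % (time.time() - start))

The uncontended variant would simply give each thread its own lock (or its own resource), which removes the contention while keeping the same call structure.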
The effect of the implementation
I have already mentioned the first reason for JRuby performing better than Jython, and it applies here as well: the JRuby team has spent more time on performance tuning than the Jython team has, and this is something we will be able to improve. In fact, this plays a bigger part than one might think.
The JRuby Mutex implementation is written in about a page of plain old Java (not that important) that gets compiled to a small number of simple bytecodes (more important). Because a closure or block in JRuby is just an object with a method containing the compiled bytecode (as opposed to a function that contains argument parsing logic before it), the dispatch from the synchronization code to the guarded code is just a simple virtual call. For the call into the synchronization code there are two VM-level calls, since this is a JRuby method: first the call to the dispatch logic and argument parsing, then the call to the actual synchronization code. Much of the work of the first call is cached by JRuby's call site cache from the first invocation, so that subsequent invocations are much faster.
The Jython implementation, on the other hand, has no call site caching, so each call does full dispatch and argument parsing. The call structure is also different. A with statement in Python is just a try/except/finally block with simplified syntax, where the context manager (a lock in this case) is invoked for setup before the block inside the with statement and then invoked again after the with statement block completes.
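To illustrate, here is roughly what the with statement expands to. This is a simplified sketch (the real expansion, per PEP 343, also passes the exception type, value and traceback to __exit__), and lock and do_stuff are just placeholder names:

# The with statement in the benchmark:
with lock:
    do_stuff()

# ...is roughly equivalent to (simplified -- the real expansion also
# hands any exception information to __exit__):
mgr = lock
mgr.__enter__()                      # acquire: a full Jython dispatch, ending in lock()
try:
    do_stuff()
finally:
    mgr.__exit__(None, None, None)   # release: another full dispatch, ending in unlock()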
In short: JRuby has much lower call overhead at the VM level. This makes a difference because a call is a call, and even when inlined it imposes some overhead. It is also important because the JVM is better at making optimizations across a few levels of calls than across several. Still, this only explains a small part of the difference seen in the chart.
The incredible benefit of closures
Most JVMs have built-in support for analyzing and optimizing locks. To be able to do that, the JVM needs to have the entire synchronized region in one compilation unit, i.e. both the lock instruction and the unlock instruction need to be in the same compilation unit. Because code analysis is super-linear, there is a limit to how big a compilation unit is allowed to be. Initially a compilation unit corresponds to a Java bytecode method, but it may grow as an effect of code inlining (up to the compilation unit size limit). The key to getting good performance from synchronized regions is therefore to either have both the lock and unlock instructions in the same method, or at least have them in a shallow call hierarchy from the same method.
Compare the following two snippets of pseudo bytecode (indented regions are the bodies of the invoked methods).
JRuby:
lock()
invoke closure_body
    // do stuff...
unlock()
Jython:
invoke __enter__
    Do Jython dispatch
        Do Jython to Java dispatch
            Do Java reflection dispatch
                lock()
// do stuff...
invoke __exit__
    Do Jython dispatch
        Do Jython to Java dispatch
            Do Java reflection dispatch
                unlock()
The key insight here is that in the first snippet the lock and unlock instructions are in the same compilation unit, while in the second they are in two different call paths. The Jython dispatch logic is three levels of calls, the Jython to Java dispatch logic is two levels, and then there is the reflection dispatch, which is a number of calls as well. Not only that, but there is quite a lot of code in those call paths too: parsing of arguments, setting up call frame reflection objects, and more. Add all this together and there is no chance for the JVM to see both the lock and unlock instructions in the same compilation unit. Compare that to the JRuby implementation, where they are in the same compilation unit before any inlining.
Having un-optimized locks makes a huge difference for applications running on the JVM. This, together with the fact that JRuby is more optimized in general, accounts for most of the difference in execution time for these examples. If we could fix this, we would get a substantial performance improvement in Jython.
For further details on how we intend to improve this situation in Jython, you are very welcome to attend my presentation on "A better Python for the JVM" at EuroPython, this Tuesday (June 30, 2009) at 15:30. I will also continue posting here on specific improvements and plans, so stay tuned on Twitter or subscribe to my feed.
Disclaimers:
The description of the JVM in this post is mostly based on the HotSpot JVM. Other JVMs might work slightly differently, but the basics should be at least similar.
The descriptions of both Jython and JRuby are somewhat simplified; the synchronization in JRuby, for example, is even slightly better optimized than what I have outlined here, but the full description would make the post overly complicated. The essentials are still the same.
In my presentation at JavaOne some numbers suffered from classic micro benchmark errors; ask me if you want to know more about that.