Tuesday, December 15, 2009

Seamless Neo4j integration in Django

About a year ago I gave a presentation at Devoxx where I showed off how easy it was to use any Java library with Django in Jython. The library I demonstrated this with was, of course, Neo4j. I had written some code for using Neo4j to define models for Django, and it is now ready to be released for you to use.

The integration between Django and Neo4j is implemented in the model layer. Since Neo4j does not have a SQL engine, it would not have been efficient or practical to implement the support as a database layer for Django. Google took the same approach when integrating BigTable with Django for App Engine. This means that some minor modifications will be needed in your code compared to using PostgreSQL or MySQL. Just as with BigTable on App Engine, you have to use a special library for defining your models when working with Neo4j, but the model definition is very similar to Django's built-in ORM. With persistence systems that integrate at the database layer the only difference is in configuration, but that requires the database to fit the mold of a SQL database.

Why the **** has this taken a year to finish?

Short answer: The cat ate my source code.

A mess of symlinks, stemming from the fact that Jython didn't have good support for setuptools when I started writing this code, actually led to the complete loss of my source code. But to be honest, the code wasn't that good anyway. I wanted to add support for Django's administration interface, and I knew that undertaking would require a complete rewrite of my code. That rewrite is now done, which will make it possible for me to support the administrative interface of Django in the next release. So why has it taken until now, a year after the first prototype? I was working on other things, it's that simple.

Getting started

While the demonstration I gave a year ago was geared towards Jython, since that was the topic of the presentation, the Python bindings for Neo4j work equally well with CPython. That is all you need: Neo4j and Django. The Python bindings for Neo4j come with a Django integration layer built in as of the most recent revisions in the repository. The source distribution also contains a few sample applications demonstrating how the integration works. The Django integration is still at a very early stage of development, but the base is pretty solid, so new features should be much easier to add now. Since the state is pre-alpha, installation from source is the only option at the moment. Let me walk you through how to get things up and running:

  • Set up and activate a virtualenv for your development. This isn't strictly necessary, but it's nice to know that you will not destroy your system Python installation if you mess up. Since we got Jython to support virtualenv I use it for everything. If you use CPython your virtualenv will contain a python executable, and if you use Jython it will contain a jython executable. I will refer to either simply as python from here on; substitute jython if you, like me, prefer that implementation.
  • If you are using CPython: Install JPype, it is currently a dependency for accessing the JVM-based core of Neo4j from CPython:
    $ unzip JPype-
    $ cd JPype-
    $ python setup.py install
  • Check out the source code for the Python bindings for Neo4j, and install it:
    $ svn co https://svn.neo4j.org/components/neo4j.py/trunk neo4j-python
    $ cd neo4j-python
    $ python setup.py install
  • Install Django:
    $ easy_install django
  • Create a new Django project:
    $ django-admin.py startproject neo4django
  • Create a new app in your Django project:
    $ python neo4django/manage.py startapp business
  • Set up the configuration parameters for using Neo4j with Django by adding the following to your settings.py:
    NEO4J_RESOURCE_URI = '/var/neo4j/neo4django'
    # NEO4J_RESOURCE_URI should be the path to where
    #    you want to store the Neo4j database.
    NEO4J_OPTIONS = {}
    # NEO4J_OPTIONS is optional and can be used to specify
    # extra startup parameters for Neo4j, such as
    # the classpath to load Neo4j from.
    You can ignore the default Django configurations for RDBMS connections if you only plan to use Neo4j, but if you want to use Django's built-in admin interface (not supported with Neo4j quite yet) or authentication module you will need to configure those.
  • You are now ready to create your first Neo4j backed domain objects for your Django application, by editing business/models.py. Let's create a simple model for companies with owners and employees:
    from neo4j.model import django_model as model
    class Person(model.NodeModel):
        first_name = model.Property()
        last_name = model.Property()
        def __unicode__(self):
            return u"%s %s" % (self.first_name, self.last_name)
    class Company(model.NodeModel):
        name = model.Property(indexed=True)
        owners = model.Relationship(Person,
            related_name='owns')
        employees = model.Relationship(Person,
            related_name='employer',
            related_single=True) # Only allow Persons to work at one Company
        def __unicode__(self):
            return self.name
  • That's it, you've created your first Django domain model using Neo4j. Let's try it out:
    $ python neo4django/manage.py shell
    >>> from neo4django.business import models
    >>> seven_eleven = models.Company.objects.create(name="Seven Eleven")
    >>> seven_eleven.employees.add(
    ...     models.Person.objects.create(
    ...         first_name="Sally", last_name="Getitdone"),
    ...     models.Person.objects.create(
    ...         first_name="John", last_name="Workerbee"))
    >>> seven_eleven.save() # store the newly created relationships
    >>> people = list(seven_eleven.employees.all())
    >>> someone = people[0]
    >>> print someone, "works at", someone.employer

Notice how the model objects are compatible with model objects created using Django's built-in ORM, making it easy to port your existing applications to a Neo4j backend: all you need to change is the model definitions. For more examples, see the example directory in the repository: https://svn.neo4j.org/components/neo4j.py/trunk/src/examples/python/.

Future evolution

There is still more work to be done. As this is the first release, there are likely to be bugs, and I know of a few things (mainly involving querying) that I have not implemented support for yet. I also have a list of (slightly bigger) features that I am going to add. To keep you interested, I'll list them with a brief explanation:

  • Add support for the Django admin interface. You should be able to manage your Neo4j entities in the Django administration interface, just as you manage ORM entities. To do this I need to dig further into the internals of the admin source code, to find out what it expects from the model objects in order to pick up on them and manage them. The hardest part is that the admin system has a policy of silent failure, meaning that it will not tell me how my code violates its expectations.
  • Add support for Relationship models. Currently you can only assign properties to nodes in the domain modeling API; you should be able to have entities represented by relationships as well. The way you will do this is by extending the Relationship class.
  • Add a few basic property types. I will add support for creating your own property types by extending the Property class (this is implemented already, but not tested, so if it works it's only by accident). I will also add a few basic subtypes of Property, a datetime type at the very least, as well as support for choosing what kind of index to use with each indexed property; in the case of datetime, a Timeline index seems quite natural, for example. Supporting enumerated values for Properties is also planned, i.e. limiting the allowed values to an enumerated set.
  • Tapping into the power of Neo4j, by adding support for methods that perform arbitrary operations on the graph (such as traversals), where the returned nodes are automatically converted to entity objects. I think this will be a really cool and powerful feature, but I have not worked out the details of the API yet.

Report any bugs you encounter to either the Neo4j bug tracker, or on the Neo4j mailing list. Suggestions for improvements and other ideas are also welcome on the mailing list, to me personally, or why not as a comment on this blog.

Happy Hacking

Friday, August 07, 2009

Java integration in future Jython

I've seen a lot of this lately, so I thought that it was time for an actual Jython developer (myself) to share some ideas on how Java integration in Jython could be improved. At the same time I'd like to propose some changes that could make the different Python implementations more unified, and even could lead to a common Java integration API in all of them.

The most basic part of the Java integration in Jython is the ability to import and use Java classes. This is impossible for other Python implementations to do in the same way, and thus breaks compatibility fundamentally. I therefore propose that we remove this functionality as it is in Jython today (!). Instead we should look at how IronPython enables using CLR (.NET) classes. In IronPython you first need to import clr before you can access any of the CLR types. The same is done in other languages on the JVM as well, for example JRuby where you need to require 'java' before using any Java libraries. I propose we require something similar in Jython, and what better package to require you to import than java?

An observation: The java package in Java does not contain any classes, only sub-packages. Furthermore all the sub-packages of the java package follow the Java naming conventions, i.e. they all start with a lowercase letter. This gives us a namespace to play with: anything under the java package that starts with an uppercase letter.

What happens when you import java? The java Python module is a "magic module" that registers a Python import hook. This import hook then enables you to import real Java packages and classes. In Jython many of the built-in libraries will of course import java, which means this will be enabled by default. But writing code that is compatible across Python implementations would now be possible, by simply ensuring that you import java before any other Java packages.
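To make the mechanics concrete, here is a minimal sketch of such an import hook in today's CPython terms, using the importlib machinery. The jvm prefix and all the names here are stand-ins for illustration; the real Jython hook would resolve actual Java packages and classes:

```python
import sys
import types
import importlib.abc
import importlib.machinery

class JavaImportHook(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    """Toy meta-path hook: intercepts imports under the 'jvm.' prefix.
    A real implementation would look the names up on the JVM instead."""
    PREFIX = "jvm"

    def find_spec(self, fullname, path=None, target=None):
        # Claim only the names under our prefix; everything else falls
        # through to the normal import machinery.
        if fullname == self.PREFIX or fullname.startswith(self.PREFIX + "."):
            return importlib.machinery.ModuleSpec(fullname, self, is_package=True)
        return None

    def create_module(self, spec):
        module = types.ModuleType(spec.name)
        module.__path__ = []  # mark as a package so dotted imports recurse here
        return module

    def exec_module(self, module):
        pass  # a real hook would populate the module with Java classes here

sys.meta_path.insert(0, JavaImportHook())

import jvm.lang  # resolved by the hook, not by the filesystem
```

Registering the finder on sys.meta_path is exactly the kind of side effect importing java would have; after that, dotted imports under the claimed prefix never touch the filesystem.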

The content of the java module

Most if not all of what is needed to utilize Java classes from Python code is provided by the import hook that the java module registers when it is loaded. This means that the content of the java module needs to deal with the other direction of the interfacing: defining and implementing APIs in Python that Java code can utilize. I propose that the Python java module contain the following:

JavaClass: a class decorator that exposes the decorated class as a class that can be accessed from Java. Accepts a package keyword argument for defining the Java package to define the class in; if omitted it is derived from the __module__ attribute of the class. Possibly JavaClass should also be the Python type of imported Java classes.
Field: an object for defining Java fields in classes. Takes a single argument: the type of the field. Example usage:
class WithAField:
    data = java.Field(java.lang.String)
Array: an object for defining Java array types. Examples:
  • Array[java.Primitive.int] corresponds to the Java type int[]
  • Array[java.lang.String] corresponds to the Java type java.lang.String[]
  • Array corresponds to the Java type java.lang.Object[]
  • Array[Array[java.lang.String]] corresponds to the Java type java.lang.String[][]
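As a rough pure-Python sketch of how this subscription syntax could be modeled (toy code, not the actual Jython implementation; the string type names are placeholders):

```python
class Array(object):
    """Toy model of the proposed Array type: Array alone means
    java.lang.Object[], Array[T] means T[]."""
    component = "java.lang.Object"
    java_name = "java.lang.Object[]"

    def __class_getitem__(cls, component):
        # Accept either another Array subtype (for nested arrays) or a
        # plain type name such as a primitive.
        name = getattr(component, "java_name", None)
        if name is None:
            name = str(component)
        return type("Array", (cls,), {
            "component": name,
            "java_name": name + "[]",
        })
```

Subscripting produces a new subtype each time, so nesting composes naturally: Array["java.lang.String"] models java.lang.String[], and Array[Array["java.lang.String"]] models java.lang.String[][].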
Access: a set of Java access modifier decorators. It contains:
  • Access.public
  • Access.package - this needs to be explicitly available since it does not make sense as the default in Python code.
  • Access.protected
  • Access.module - for the new access modifier in the upcoming module system (a.k.a. Project Jigsaw) for Java.
  • Access.private
  • The default access modifier should either be public or the absence of an access modifier decorator would mean that the method is not exposed in the Java class at all. This needs further discussion.
Primitive: the set of primitive types in Java:
  • Primitive.void
  • Primitive.boolean
  • Primitive.byte
  • Primitive.char
  • Primitive.short
  • Primitive.int
  • Primitive.long
  • Primitive.float
  • Primitive.double
  • These can be used as type parameters for Array but not for generic types (since primitives are not allowed as generic type parameters in Java).
Overloaded methods: a mechanism for implementing (and defining) several methods with the same name but different type signatures. Example usage:
class WithOverloadedMethod:
    def method(self, value: java.lang.String) -> java.util.List[java.lang.String]:
        ...
    def method(self, value: java.lang.Integer) -> java.lang.String:
        ...
    def method(self, value: java.lang.Iterable[java.lang.String]) -> java.Primitive.void:
        ...

Java classes and interfaces, when imported, are Pythonized in such a way that they can be used as bases for Python classes. Generics are specified by subscripting the generic Java class. Java annotations are Pythonized in a way that turns them into decorators that add a special attribute to the decorated element: __java_annotations__. Annotations on imported Java classes and methods would also be exposed through the __java_annotations__ property for consistency. Access modifiers would similarly add a __java_access__ property to the object they decorate.
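The annotation-as-decorator idea can be sketched in plain Python. This toy factory (all names here are made up for illustration) records annotation names on the decorated element through a __java_annotations__ attribute, the way the proposal above describes:

```python
def java_annotation(name):
    """Toy factory: builds a decorator that records an annotation name
    on whatever it decorates, accumulating in __java_annotations__."""
    def annotation(target):
        recorded = list(getattr(target, "__java_annotations__", ()))
        recorded.append(name)
        target.__java_annotations__ = recorded
        return target
    return annotation

# Stand-ins for Pythonized Java annotations:
Override = java_annotation("java.lang.Override")
Deprecated = java_annotation("java.lang.Deprecated")

class Greeter(object):
    @Deprecated
    @Override
    def toString(self):
        return "Greeter"
```

Since decorators apply bottom-up, Greeter.toString ends up with the list ["java.lang.Override", "java.lang.Deprecated"], which the class-exposure machinery could later translate into real JVM annotations; an __java_access__ attribute would work the same way.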

Kay Schluer also suggested allowing decorators on assignments, to be able to support annotations on fields. I don't really have an opinion on this. Since I don't think fields should be exposed in any public API anyway it's a bit useless, and for the cases where fields are used (such as dependency injection systems) I think it suffices to have it all in the same assignment: dependency = javax.inject.Inject(java.Access.private(java.Field(JavaClassIDependOn))); the name will be extracted to be "dependency" when the class is processed by the JavaClass class decorator. But if others find assignment decorators useful, I am not opposed to them. If assignment decorators are added to Python, it might be worth considering a slightly different signature for these decorator functions, so that the name of the target variable is passed as a parameter as well. Then my example could look like this:

class WithInjectedDependency:
    @javax.inject.Inject # This is JSR 330 by the way
    dependency = JavaClassIDependOn
    # could expand to: dependency = javax.inject.Inject(
    #     "dependency", java.Access.private(
    #         "dependency", java.Field(
    #             "dependency", JavaClassIDependOn)))
    # or to the same thing as above, depending on how
    # assignment decorators were implemented...

When defining methods in Java integration classes we use Python 3 function annotations to define the method signatures. These can be omitted; the default types in that case would of course be java.lang.Object. It is important that we support exposing classes that don't have any Java integration added to them, since we want to enable importing existing Python libraries into Java projects and using them without having to port them. These classes will not have the JavaClass decorator applied to them. Instead this will be done automatically by Jython at the point where the Python class first needs to expose a class to Java. This is not something the java module needs to deal with, since it doesn't fit with other Python implementations.

Outstanding issues

There are still a few Java integration issues that I have not dealt with, because I have not found a solution that I feel good about yet.

Defining Java interfaces
Is this something we need to be able to do? If so, the proper approach is probably to add a JavaInterface decorator to the java module, similar to the JavaClass decorator.
Defining Java enums
This might be something that we want to support. I can think of two options for how to declare the class. Either we add a JavaEnum decorator to the java module, or we add special case treatment for when a class extends java.lang.Enum (I am leaning towards this approach). Then we need to have some way to define the enum instances. Perhaps something like this:
class MyEnum(java.lang.Enum):
    ONE = java.EnumInstance(1)
    TWO = java.EnumInstance(2, True)
    THREE = java.EnumInstance(3, True)
    FOUR = java.EnumInstance(4)
    def __init__(self, number, is_prime=False):
        self.number = number
        self.is_prime = is_prime
    def __str__(self):
        return self.name()
    class SEVENTEEN(java.EnumInstance):
        """This is an enum instance with specialized behavior.
        Will extend MyEnum, but there will only be one instance."""
        def __init__(self):
            """This class gets automatically instantiated
            by the __metaclass__ of Enum."""
            self.number = 17
            self.is_prime = True
        def __str__(self):
            return "The most random number there is."
Defining generic types
I have discussed how to specify type parameters for generic types, but how would you define a generic Java type in Python? How about something like this:
class GenericClass:
    T = java.TypeParameter() # default is "extends=java.lang.Object"
    C = java.TypeParameter(extends=java.util.concurrent.Callable)
This gets complicated when wanting to support self references in the type parameters, but the same is true for implemented interfaces, such as:
class Something implements Comparable<? extends Something> {
Defining Java annotations
I have dealt with supporting the use of Java annotations, but what about defining them? I highly doubt that defining Java annotations in Python is going to be useful, but I prefer to not underestimate what developers might want to do. I do however think we could get far without the ability to define Java annotations in Python, but if we were to support it, what would it look like? Defining the class would probably be a lot like how enums are defined, either by special casing java.lang.annotation.Annotation or providing a special java.Annotation decorator.
class MyAnnotation(java.lang.annotation.Annotation):
    name = java.AnnotationParameter(java.lang.String)
    description = java.AnnotationParameter(java.lang.String, default="")

java for other Python implementations

I mentioned that requiring the user to explicitly import java to make use of Java classes would make it possible for other Python implementations to support the same Java integration API. So what would the default implementation of the java module look like? There is a very nice standardized API for integrating with Java from other external programming languages: JNI. The default java module would simply implement the same functionality as the Jython counterpart by interacting with JNI using ctypes. Since ctypes is supported by all Python implementations (Jython support is under development) the java integration module would work across all Python implementations without additional effort. Right there is a major advantage over JPype and JCC (the two major Java integration modules for CPython today).

Integration from the Java perspective

I have not given as much thought to the area of utilizing Python code from Java. Still this is one of the most important tasks for Jython to fulfill. This section is therefore just going to be some ideas of what I want to be able to do.

Use Python for application scripting
This is possible today, and a quite simple case, but I still think that it can be improved. Specifically the problem with Jython today is that there is no good API for doing so. Or to be frank, there is hardly an API at all. This is being improved upon, though: the next update of Jython will include an updated implementation of the Java Scripting API, and the next release will introduce a first draft of a proper Jython API, something that we will support long term after a few iterations, and that you can build your applications against.
Use Jython to implement parts of your application
We want to be able to write polyglot applications, where parts of them are implemented in Python. This is more than just scripting the application; applications generally work without scripts. We want to be able to write the implementation of parts of an application in Python with Jython. This is possible today, but a bit awkward without an official Jython API. This is being worked on in a separate project called PlyJy, where we are experimenting with an API for creating object factories for Jython. Jython object factories are objects that call into a Python module, instantiate a Python class, conform it to a Java interface and return it. So far this project is looking good and there is a good possibility that this will get included in the Jython API.
Directly link (Java) applications to Python code
This is where things start to get advanced. It would be nice if you could write a library in Python (or import an existing one) and link your Java code against the classes and functions defined in that library directly. This would require Jython to generate Java proxies: actual Java classes where the methods correspond to the actual signatures, with proper constructors and everything you would need to use them like any other Java code, while hiding away the dynamic aspects that make it Python. This could either be done through a compilation step, where a Jython proxy compiler generates the proxies that the Java code links with, or through a ClassLoader that loads a Python module, inspects the content, and automatically generates the required proxies. With the ClassLoader approach, javac would need to know about it and use it to load signatures from Python code. This is of course where the Java integration decorators described above fit in.

What do you think?

I would love to get feedback on these ideas. Either through comments to this entry, via Twitter or on the Jython-dev mailing list.

Please note that the ideas presented in this blog post are my own and do not reflect any current effort in the Jython project.

Friday, July 31, 2009

"Social networking" killed productivity

Twitter has become work. Not acceptable at work, i.e. something that is not frowned upon doing at work, but actual work: something you are required to do at work. At least this is the case if you are involved somewhere where the development team is the marketing team, like a startup or an open source project. For the record, my involvement in Neo4j qualifies in both categories, and Jython is most certainly an open source project, and quite a high-profile one as well.

In order to stay on top of things in this situation you easily find yourself with push-based Twitter notifications, or at least reading a lot of material on a regular basis. I, for example, get about 150 to 200 tweets per day from the people I follow. Combine this with the expectation to stay on top of email (yet again you go for push), and you've got a constant stream of interrupts, and that really kills productivity.

Just the other day I read the Life Offline posts by Aaron Swartz, and found that I very much recognize myself in how he describes the problems of constant online presence. It would be wonderful if I, like he was able to do, could take a long stretch of time away from being connected, but I don't think that is possible, at least not now or in the near future. The problem remains though: I am not being productive. And some things don't get done in time. And this is a problem.

I've tried shifting my email and Twitter use to processing these things only once per day, but it still takes two hours or more of my day simply to process the incoming stream. By processing I mean:

  • Read all the email and define actions.
  • Read all tweets, open tabs for links that seem interesting, skim those pages and define actions.
  • Read feeds and define actions.

That takes two hours. Then I still have to perform the actions that I have defined. Which could take up to the rest of the day.

I noticed about twelve years ago how destructive online communities and social networks can be, and how much time they consume. I have thus tried to stay away from them, which is why I don't use my Facebook account. But when social networking has become part of work it is much harder to avoid. In the case of Twitter it is also difficult to ignore because of how hugely influential it is: Twitter is the de facto way to find out about new things and interesting articles.

I am starting to believe that perhaps Donald Knuth made a wise decision in not having an email address, but as he points out, having an email address is for people who need to be on top of things, and he does not have one because he does not have to be on top of things anymore. I will agree with that: Donald Knuth has contributed a lot to the field of computer science, but he is definitely not on top of things anymore. So how do you cope with being on top of things while still being productive? Is it possible? I would love to get any insight into the secrets that I am obviously unaware of.

Wednesday, July 15, 2009

Improving performance in Jython

About two weeks ago I published a writeup about my findings on the performance of synchronization primitives in Jython, from my presentation at JavaOne. During the presentation I said that these performance issues were something I was going to work on and improve. And indeed I did. I cannot take full credit for this; Jim Baker played a substantial part in this work as well. The end result is still something I'm very proud of, since we managed to improve the performance of this benchmark by as much as 50 times.

The benchmarks

The comparisons were performed based on the execution of this benchmark script invoked with:

  • JAVA_HOME=$JAVA_6_HOME jython synchbench.py
  • JAVA_HOME=$JAVA_6_HOME jython -J-server synchbench.py
# -*- coding: utf-8 -*-
from __future__ import with_statement, division

from java.lang.System import nanoTime
from java.util.concurrent import Executors, Callable
from java.util.concurrent.atomic import AtomicInteger

from functools import wraps
from threading import Lock

def adder(a, b):
    return a+b

count = 0
def counting_adder(a, b):
    global count
    count += 1 # NOT SYNCHRONIZED!
    return a+b

lock = Lock()
sync_count = 0
def synchronized_counting_adder(a, b):
    global sync_count
    with lock:
        sync_count += 1
    return a+b

atomic_count = AtomicInteger()
def atomic_counting_adder(a,b):
    atomic_count.incrementAndGet()
    return a+b

class Task(Callable):
    def __init__(self, func):
        self.call = func

def callit(function):
    @wraps(function)
    def callable():
        timings = []
        for x in xrange(5):
            start = nanoTime()
            for x in xrange(10000):
                function(5, 10)
            timings.append((nanoTime() - start)/1000000.0)
        return min(timings)
    return callable

def timeit(task):
    futures = []
    for i in xrange(40):
        futures.append(pool.submit(task))
    sum = 0
    for future in futures:
        sum += future.get()
    print sum

all = (adder,counting_adder,synchronized_counting_adder,atomic_counting_adder)

WARMUP = 20000
print "<WARMUP>"
for function in all:
    for x in xrange(WARMUP):
        function(5, 10)
print "</WARMUP>"

all = [Task(callit(f)) for f in all]

pool = Executors.newFixedThreadPool(3)

for function in all:
    print function.call.__name__
    timeit(function)

glob = list(globals())
for name in glob:
    if name.endswith('count'):
        print name, globals()[name]

And the JRuby equivalent for comparison:

require 'java'
import java.lang.System
import java.util.concurrent.Executors
require 'thread'

def adder(a,b)
  a + b
end

class Counting
  def initialize
    @count = 0
  end
  def count
    @count
  end
  def adder(a,b)
    @count = @count + 1
    a + b
  end
end

class Synchronized
  def initialize
    @mutex = Mutex.new
    @count = 0
  end
  def count
    @count
  end
  def adder(a,b)
    @mutex.synchronize {
      @count = @count + 1
    }
    a + b
  end
end

counting = Counting.new
synchronized = Synchronized.new

puts "<WARMUP>"
10.times do
  10000.times do
    adder 5, 10
    counting.adder 5, 10
    synchronized.adder 5, 10
  end
end
puts "</WARMUP>"

class Body
  def initialize
    @pool = Executors.newFixedThreadPool(3)
  end
  def timeit(name)
    puts name
    result = []
    40.times do
      result << @pool.submit do
        times = []
        5.times do
          t = System.nanoTime
          10000.times do
            yield
          end
          times << (System.nanoTime - t) / 1000000.0
        end
        times.min
      end
    end
    result.each {|future| puts future.get()}
  end
end

body = Body.new

body.timeit("adder") {adder 5, 10}
body.timeit("counting adder") {counting.adder 5, 10}
body.timeit("synchronized adder") {synchronized.adder 5, 10}


Where we started

A week ago the performance of this Jython benchmark was bad. Compared to the equivalent code in JRuby, Jython required over 10 times as much time to complete.

When I analyzed the code that Jython and JRuby generated and executed, I came to the conclusion that the reason Jython performed so badly was that the call path from the running code to the actual lock/unlock instructions introduced too much overhead for the JVM to have any chance of analyzing and optimizing the lock. I published this analysis in my writeup on the problem. It would of course be possible to lower this overhead by importing and utilizing the pure Java classes for synchronization instead of using the Jython threading module, but we like how the with-statement reads for synchronization:

with lock:
    counter += 1

Getting better

Based on my analysis of how the with-statement compiles and the way this introduces overhead, I worked out the following redesign of the with-statement context manager interaction, which allows us to get closer to the metal while remaining compatible with PEP 343:

  • When entering the with-block we transform the object that constitutes the context manager to a ContextManager-object.
  • If the object that constitutes the context manager implements the ContextManager interface it is simply returned. This is where context managers written in Java get their huge benefit by getting really close to the metal.
  • Otherwise a default implementation of the ContextManager is returned. This object is created by retrieving the __exit__ method and invoking the __enter__ method of the context manager object.
  • The compiled code of the with-statement then only invokes the __enter__ and __exit__ methods of the returned ContextManager object.
  • This has the added benefit that even for context managers written in pure Python the ContextManager could be optimized and cached when we implement call site caching.
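The steps above can be sketched in pure Python. The names ContextManager and the adapter below follow the design just described, but this is only an illustration; the real implementation lives inside Jython's runtime and compiler:

```python
class ContextManager(object):
    """The fast-path interface: context managers written in Java (or
    otherwise pre-adapted) implement this directly."""
    def __enter__(self):
        raise NotImplementedError
    def __exit__(self, exc_type, exc_value, traceback):
        raise NotImplementedError

class DefaultContextManager(ContextManager):
    """Fallback adapter for arbitrary Python context managers:
    __exit__ is looked up once, up front, instead of on every use."""
    def __init__(self, obj):
        self.obj = obj
        self._exit = type(obj).__exit__
    def __enter__(self):
        return type(self.obj).__enter__(self.obj)
    def __exit__(self, exc_type, exc_value, traceback):
        return self._exit(self.obj, exc_type, exc_value, traceback)

def adapt(obj):
    # What the compiled with-statement would do on block entry:
    if isinstance(obj, ContextManager):
        return obj
    return DefaultContextManager(obj)

# Demo: a plain Python context manager goes through the adapter path.
events = []

class Tracing(object):
    def __enter__(self):
        events.append("enter")
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        events.append("exit")
        return False

with adapt(Tracing()):
    events.append("body")
```

The point of the split is that the compiled with-statement only ever talks to the object adapt returns, so a ContextManager implemented in Java is invoked with no dictionary lookups in between.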

This specification was easily implemented by Jim, and he could then rewrite the threading module in Java to let the lock implementation take direct benefit of the rewritten with-statement, getting the actual code really close to the locking and unlocking. The results were immediate and beyond expectation:

Not only did we improve performance, we surpassed the JRuby equivalent! Even using the client compiler, with no warmup, we perform almost two times better than JRuby. Turn on the server compiler and let the JIT warm up and perform all its compilation, and we end up with a speedup of slightly more than 50 times.

A disclaimer is appropriate here. With the first benchmark (before this was optimized) I didn't have time to wait for a full warmup, because the benchmark was so incredibly slow at that point, and because I was running the benchmarks quite late before the presentation and didn't have time to leave them running overnight. Instead I turned down the compilation threshold of the HotSpot server compiler and ran just a few warmup iterations. It is possible that the JVM could have optimized the previous code slightly better given (a lot) more time. The actual speedup might be closer to the speedup from the first code to the new code using the client compiler and no warmup. But that is still a speedup of almost 20 times, which is still something I'm very proud of. There is also the possibility that I didn't run or implement the JRuby version in the best possible way, meaning that there might be ways of making it run faster that I don't know about. The new figures are still very nice, much nicer than the old ones for sure.

The current state of performance of Jython synchronization primitives

It is also interesting to compare how the current implementation compares to the other versions in Jython that I included in my presentation:

Without synchronization the code runs about three times as fast as with synchronization, but the counter does not return the correct result here due to race conditions. It's interesting from the point of view of analyzing the overhead added by synchronization but not for an actual implementation. Two times overhead is quite good in my opinion. What is more interesting to see is that the fastest version from the presentation, the one using AtomicInteger, is now suffering from the overhead of reflection required for the method invocations compared to the synchronized version. In a system with more hardware threads (commonly referred to as "cores") the implementation based on AtomicInteger could still be faster though.
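For illustration, here is a small sketch of the kind of synchronized-counter micro benchmark discussed above; the function and parameter names are my own, not those of the original benchmark:

```python
import threading

def count(iterations, workers, lock=None):
    """Increment a shared counter from several threads,
    optionally guarding the increment with a lock."""
    counter = [0]

    def work():
        for _ in range(iterations):
            if lock is not None:
                with lock:  # the synchronized variant
                    counter[0] += 1
            else:
                counter[0] += 1  # racy: lost updates possible

    threads = [threading.Thread(target=work) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter[0]

# With a lock the result is deterministic: workers * iterations.
total = count(1000, 4, lock=threading.Lock())
```

Timing the locked variant against the unlocked one (and against an atomic-integer variant on the JVM) gives the overhead figures discussed above.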

Where do we proceed from here?

Now that we have proven that it was possible to get a nice speedup from this redesign of the code paths the next step is to provide the same kind of optimizations for code written in pure Python. Providing a better version of contextlib.contextmanager that exploits these faster code paths should be the easiest way to improve context managers written in Python. Then there are of course a wide range of other areas in Python where performance could be improved through the same kind of thorough analysis. I don't know at this point what we will focus on next, but you can look forward to many more performance improvements in Jython in the time to come.

Monday, July 06, 2009

My view of EuroPython 2009

I was not very enthusiastic going to EuroPython this year. I don't like it when all I do is fly in, give a talk, then fly out again, but that was all I could afford to do for EuroPython. I go to a few conferences a year, and all of them are filed under expenses for me, since I don't have an employer that pays for my conference trips. With EuroPython being in Birmingham, and the UK not being the cheapest of countries, there was no chance for me to afford staying more than two days.

My plane landed Tuesday at lunch time, and my talk was the first talk after the afternoon keynote. I arrived at the venue with about two and a half hours to take care of registration and payment. I had requested to not pay when I registered online and was told that the best solution would be to pay when I arrived. My reason was of course that I didn't have any money when they required my online registration; freelance open source development does not pay every month. The only problem with this was that when I arrived there was no one there to take care of registration, as it was only open in the morning. I tried asking a few guys in staff t-shirts, but they were not very helpful, and seemed like they just wanted me to go away. I decided to wait until the next day with my registration and went to see the keynote.

It was nice to get to see Cory Doctorow in person. His keynote was about the copyright war and why it matters to us as developers. Scary stuff. The world that the media industry is forcing on us is anything but pleasant. He could even provide examples of actual cases that have already happened where large music publishers have forced open source developers working on projects they didn't like to change profession or face millions of dollars in fines. Not on the basis of the software being illegal (it wasn't), but with a lawsuit on copyright infringement from downloaded MP3 files. He also talked about how the media industry would seemingly prefer the entire internet to be illegal, along with open source software altogether, something that Reinout van Rees has a more complete summary of.

My presentation on what would make Jython a better Python for the JVM felt like the best presentation I have given to this day. The material was an updated version of the talk I gave at PyCon earlier this year. Experience really does make you a better speaker; this was the 11th presentation of my speaking career so far (in less than two years). It really feels good to stand in front of people and talk when a large number of them are nodding and smiling at what you are saying throughout most of the presentation. And the audience was great here; they asked a lot of good questions, both during the presentation and afterwards. I didn't go through all of my slides, but I covered all of my material. Some things came more naturally with other slides with this crowd, so I altered the talk slightly while giving it. This was a scenario I had prepared for: I was ready to use the slides either way, and with this audience I felt more comfortable doing it this way. The only negative aspect of the presentation was the room, which was laid out in such a way that if I stood by my computer I would block part of the projected screen. I had to switch to my remote control and step away from the computer to allow everyone to see the material, which meant that I could not read my speaker notes on my screen. Fortunately I knew my material well enough.

The rest of the day I went around to a few different talks in the venue, still without a badge. Then the rest of the Jython team who were there and I rounded off the day at a Chinese restaurant with Michael Foord and some fifteen other attendees, followed by a pint at an outdoor bar before Frank and I headed to our hotel. The food and beer were great, but the conference venue and the hotel were not. As I mentioned, the venue had a problem with the room I presented in, but beyond that, being in the basement of the Birmingham Conservatoire, it was dark and small, with corridors to all the presentation rooms. Not an ideal place for a conference. The hotel (Premier Inn) was probably the worst I've experienced at a conference so far. When we checked in they were out of twin rooms even though that was what we had ordered, so we had to share a queen size bed. There were not enough towels, no AC, breakfast was not included, and only one of our key cards worked. When showering one could choose between a scalding hot drizzle, a freezing cold shower, or loudly howling pipes. All of this at a rate of £65 per night.

The next morning I made a new attempt at registering. This failed again, this time due to the fact that my registration had not been paid. At this point I felt rather put off by the conference; I had paid for the plane and the hotel, and was expected to pay for entrance to a conference where my own talk had been the only rewarding thing. I sent an e-mail to the organizers expressing my disappointment and asking them whether they wanted my money and what I should do in order to give it to them. In the response I got they said that if I was not happy then maybe I should not have proposed a talk. How was I supposed to know that I was going to be unhappy about the conference before I got there? EuroPython 2008 in Vilnius was great; I had no reason to expect it to not be good in Birmingham in 2009. It took me until the afternoon to track down the organizers, pay for my registration and get my badge. I did not get a t-shirt since they had made a mess of them and were unable to find a shirt in the size I had ordered.

Now that I was finally registered and all, I felt that I had earned the EuroPython dinner that night. This was a very nice addition to the conference, a purely social event with the other attendees. After dinner and a pint at a nice pub I got about four hours of sleep before I headed to the airport. My plane took off at 6:30 am, so I got to the train station at 4 am to catch a sufficiently early train to the airport, only to find out that the first train of the day left at 5:30. I had to run when I arrived at the airport with only 20 minutes until final boarding call. With checked-in luggage I would never have made it, but since I like to travel lightly and only had a small backpack I made it to the gate just after priority boarding had finished. It still would have been much better if the flight had departed an hour or so later, because I had to wait 160 minutes for the bus after I landed in Sweden.

Despite all the problems I think it was worth my time and money to go to EuroPython; the community is great, I love the Python people! But I don't think I'll attend next year unless I get the entire conference paid for. When it's not in Birmingham anymore I'll be more interested in attending again. There were also a number of interesting talks that I will now summarize.

Frank Wierzbicki's talk about web frameworks and Jython
Frank's talk was right after mine, in the same room. Frank has also become a much better speaker in the past year, and I enjoyed seeing all the web frameworks that we now support with Jython. And the fact stands that for normal applications Jython performs about the same as CPython, which is nice for all of these web frameworks. I tend to forget this since I spend most of my time looking at the places where Jython performance should improve.
Mark Shannon's talk about his Python VM "HotPy"
Python is now starting to see the phenomenon that Charles Nutter has been talking about in the Ruby community: a number of new implementations and VMs are popping up and claiming to have better performance on a lot of things for different reasons. HotPy is a research Python VM that builds on a research VM toolkit by Mark Shannon from the University of Glasgow. It is not complete but has a good approach to optimization: optimizing the code yields better results than compiling it.
Christian Tismer's talk about the status of Psyco
This was really impressive to me. Psyco V2 is being released this weekend, a heroic upgrading effort by Christian alone. Everything about Psyco V2 is impressive. It yields enormous performance boosts to Python, to the point where Christian has started replacing builtin functions that are written in C in CPython with corresponding versions written in Python to improve performance. And properties get about a 100-fold speedup with Psyco. I need to look at the source for Psyco and find out which of these optimization techniques we can apply to Jython, and perhaps even in the JVM.
Thursday afternoon keynote about Bletchley Park
Sue Black of the Saving Bletchley Park project and Simon Greenish gave a keynote presentation about Bletchley Park and how they are using social media to attract more visitors. The presentation featured an actual working Enigma machine on stage.
Thursday evening keynote by Sir Charles Antony Richard Hoare
Tony Hoare talked about the difference between science and engineering. Not the most entertaining talk, I had hoped for some interesting and motivating story on what had been driving his career. It would for example have been great to hear how he came up with the idea for quicksort, or hear about what interesting projects he is working on now. I did like his conclusion that Computer science and Software engineering are still allowed to be imperfect, being a much younger science than for example physics or biology, both of which were just studies of simple observable phenomena when they were at the age that computer science is today. His vision was that at some point in the future software engineering will be the most reliable form of engineering, since software never degrades. I like this vision, but I agree that computer science has to mature and evolve before we can develop zero fault software. Finally his response during Q&A to the question "Is there research that can help marketing come up with specifications so that we engineers can build the software?" was very entertaining: "That's the engineer's job. Marketing doesn't understand programming. Neither does marketing understand the customers.".

Sunday, June 28, 2009

Performance of Synchronization primitives in Jython

As I pointed out in my previous blog post about JavaOne, there are a few points where the performance of Jython is much worse than that of the “competitors”. In this post I provide my analysis of the performance of Jython in the micro benchmark I used in my presentation. I will compare Jython to JRuby here for three reasons. The first reason is the obvious one: Python and Ruby are quite similar languages. This takes us to the second reason: the performance of JRuby and Clojure was not that different in comparison, and it's therefore better to choose the most similar language for comparison. Finally, the third reason: JRuby is a darn good, highly optimized dynamic language implementation. That makes it a good baseline for comparison.

The first diagram shows Jython and JRuby with no contention at all. We can clearly see, already in this picture, that Jython needs a bit more work in the performance department. I still feel excited about this though. The recent release of Jython, version 2.5, focused heavily on Python compatibility. The next versions will be where we start focusing on performance. The work towards Jython 3.0 is even more exciting here. But that excitement will be saved for a later post.

The second slide shows another version of pretty much the same code, but with a shared, contended resource. Here we find that Jython performs much worse than JRuby. While the JRuby version is about 10 times slower with a contended shared resource, Jython is almost 100 times slower. Why is this?

The effect of the implementation

I have already mentioned the first reason for JRuby performing better than Jython. It applies here as well. The JRuby team has spent more time on performance tuning than the Jython team, this is something we will be able to improve. In fact this plays a bigger part than one might think.

The JRuby Mutex implementation is written in about a page of plain old Java (not that important) that gets compiled to a small number of simple bytecodes (more important). Because a closure or block in JRuby is just an object with a method containing the compiled bytecode (compared to a function that contains argument parsing logic before that), the dispatch from the synchronization code to the guarded code is just a simple virtual call. For the call into the synchronization code there are two VM level calls, since this is a JRuby method. First there is the call to the dispatch logic and argument parsing, then the call to the actual code for the synchronization code. Much of the work of the first call is cached by JRuby's call site cache from the first invocation, so that subsequent invocations are much faster.

The Jython implementation on the other hand has no call site caching. So each call does full dispatch and argument parsing. The call structure is also different. A with statement in Python is just a try/except/finally block with simplified syntax, where the context manager (a lock in this case) is called for setup before the block inside the with statement and then invoked again after the with statement block completes.
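To make that call structure concrete, here is roughly what `with lock: ...` expands to; this is a simplified sketch of the PEP 343 expansion (the real expansion passes the actual exception details to __exit__ and handles the return value of __enter__):

```python
import threading

lock = threading.Lock()

# Simplified expansion of:  with lock: result = "guarded work"
lock.__enter__()          # setup call before the block
try:
    result = "guarded work"
finally:
    lock.__exit__(None, None, None)  # teardown call after the block
```

Each of those two dunder calls goes through the full Jython dispatch and argument parsing machinery, which is where the overhead discussed below comes from.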

In shorter terms: JRuby has much lower call overhead on the VM level. This makes a difference because a call is a call, and even when in-lined it imposes some overhead. It is also important because the JVM is better at making optimizations across few levels of calls than across several. Still, this only explains a small part of the difference seen in the chart.

The incredible benefit of closures

Most JVMs have built-in support for analyzing and optimizing locks. In order to do that, the JVM needs to have the entire synchronized region in one compilation unit, i.e. both the lock instruction and the unlock instruction need to be in the same compilation unit. Due to the fact that code analysis is super-linear, there is a limit to how big a compilation unit is allowed to be. Initially a compilation unit corresponds to a Java bytecode method, but it may grow as an effect of code in-lining (up to the compilation unit size limit). The key to getting good performance from synchronized regions is therefore to have both the lock and unlock instructions in the same method, or at least in a shallow call hierarchy from the same method.

Compare the two following snippets of pseudo bytecode (indented regions are the body of the invoked methods).


lock
invoke closure_body
    // do stuff...
unlock


invoke __enter__
    Do Jython dispatch
        Do Jython to Java dispatch
            Do Java reflection dispatch
                lock
// do stuff...
invoke __exit__
    Do Jython dispatch
        Do Jython to Java dispatch
            Do Java reflection dispatch
                unlock

The key insight here is that in the first code snippet the lock and unlock instructions are in the same compilation unit. In the second example they are in two different call paths. The Jython dispatch logic is three levels of calls, and the Jython to Java dispatch logic is two levels; then there is the reflection dispatch, which is a number of calls as well. Not only that, but there is quite a lot of code in those call paths as well: parsing of arguments, setting up call frame reflection objects, and more. Add all this together and there is no chance for the JVM to see both the lock and unlock instructions in the same compilation unit. Compare this to the JRuby implementation, where they are in the same compilation unit even before any in-lining.

Having un-optimized locks makes a huge difference for applications running on the JVM. This, together with the fact that JRuby is more optimized in general, accounts for most of the difference in execution time for these examples. If we could fix this, we would get a substantial improvement in Jython's performance.

For further details on how we intend to improve this situation in Jython, you are very welcome to attend my presentation on "A better Python for the JVM" at EuroPython, this Tuesday (June 30, 2009) at 15:30. I will also continue posting here on specific improvements and plans, so stay tuned on Twitter or subscribe to my feed.

Disclaimers: The understanding of a JVM in this post is mostly based on the Hotspot JVM. Other JVMs might work slightly differently, but the basics should be at least similar.
The descriptions of both Jython and JRuby are somewhat simplified, the synchronization in JRuby is for example even slightly better optimized than what I have outlined here, but the full description would make the post overly complicated. The essentials are still the same.
In my presentation at JavaOne some numbers suffered from classic micro benchmark errors, ask me if you want to know more about that.

Friday, June 26, 2009

Five years later

Me with a large beard and a musketeers outfit

Five years ago I was in a committee responsible for greeting new students in computer science and engineering at my university. We organized activities and parties, and made sure that the new students got their studies up to a pace that would help them through their education. We did all of this while putting on a theater act lasting throughout the entire greeting period of two weeks, for which I grew a large beard.

At about this time of the year there was a team building exercise where we were supposed to draw a picture of how we imagined our lives five years from then. I drew the view of a city skyline from a high floor hotel room. My description was that I imagined myself traveling a lot for work. That I worked on interesting, cutting edge software projects around the world. And that I made good money doing so, not really spending that much, since I didn't really see anything I needed to spend money on when most of my time would be spent working.

Three weeks ago, when I looked out towards the San Francisco bay from my 25th floor hotel room at the Intercontinental where I stayed during this year's JavaOne, I remembered that drawing. The view from my room was a lot like the view in the drawing. The more I thought about it, the more I realized that the vision I had five years ago was reality now.

The thing I love the most about this is of course that the part about working on cool, exciting, cutting edge projects is true. Both of the projects that get the main part of my attention fit into this category. Neo4j is an up and coming database where I have the privilege to be doing tool and integration work, as well as advanced data persistence programming. Jython gives me an opportunity to work on advanced JVM technology and cutting edge techniques for optimizing code. All together a good mix of technologies, challenges and people.

One part of my vision, I'm sad to say, has yet to come true. I am not making as much money as I expected. In fact trips and conferences are rather expensive. But apart from that I spend about what I expected on other things. This is a minor detail however, I'd much rather work on projects I enjoy than be bored with a high salary.

I also expected that I would be single, since I didn't expect anyone to put up with my traveling, working and being away a lot of the time. I am very happy that I got this prediction wrong. My girlfriend Sara is wonderful, and I am blessed to be sharing my life with her.

Thursday, June 25, 2009

Jython 2.5 - Why you should upgrade

Last weekend I spent some time with the rest of the Neo4j team at Neo Technology for our Code Camp weekend. Among the things I worked on was our Python bindings for Neo4j. They obviously work with the recently released Jython 2.5, but also fairly well with CPython 2.5 (or later), with the current caveat that you have to install JPype on your own (I am working on replacing JPype with something that is maintained and more light weight, and works with Python 3.x). I also want these Python bindings to work with the previous version of Jython (2.2), for those who for some reason cannot update. So for the first time in a while I used a version of Jython that was not the very latest. This is my summary of that experience.

When comparing Jython 2.5 with Jython 2.2 it suddenly becomes obvious how much work the Jython team (including myself) have put into making this a fully compatible implementation of the Python programming language. The fact that Jython now runs Django, Pylons, SQLAlchemy, and a ton of other things is genuinely impressive. And I am more than a little proud of the part I played in making this happen, even if it was only a small part. Jython has, with the step from 2.2 to 2.5, graduated from being a system for scripting Java applications with Python to a system on which you can build full applications.

The first thing I missed when testing Neo4j with Jython 2.2 was virtualenv. I run so many different unstable applications and libraries on different versions of Python and Jython that it's inevitable that I mess up on a daily basis. A lot of these have different dependencies, dependencies on conflicting versions of the same thing, and other horrible constraints. And of course I patch and rebuild my Jython at the same time as well, meaning that all libraries I've installed directly into the Jython dist directory get wiped (I've lost a lot of work this way). In all these situations virtualenv is a saving angel, and it doesn't work with Jython 2.2. (If you have not tried virtualenv yet, do so now! I was a late user, but it was love at first use.) This is a pain - Upgrade to Jython 2.5.

The second thing I was screaming for in Jython < 2.5 was of course easy_install, if for no other reason than to install virtualenv. How about distutils? - Nope. Jython 2.2 has no pleasant way of installing libraries for you - you are on your own. Fail. The same is true for the Python bindings for Neo4j: if you use Jython 2.2 it will still download the required Java libraries for you, and you will be able to use it, but installing it into your sys.path is up to you. This is a pain - Upgrade to Jython 2.5.

I lied before. The first thing I missed was the Python 2.5 syntax. In particular the with statement. This was in fact the reason I joined the Jython project in the first place, to get Python 2.5 syntax support for Jython (and I succeeded in prototyping it during that first Google Summer of Code in 2007). Just consider the following two pieces of code, and I think you will get my point:

with neo.transaction:
    with_node = neo.node(language="Python", feature="with statement")
    example_node = neo.node(language="Python", example="Using the with statement")

tx = neo.transaction.begin()
try:
    try_node = neo.node(language="Python", feature="try block")
    example_node = neo.node(language="Python", example="Not using the with statement")
    tx.success()
finally:
    tx.finish()

It should be obvious which one is beautiful and which one is not. The with statement is one nice piece of syntactic sugar. Not having it is a pain - Upgrade to Jython 2.5.

Ok, so there is still a lot of work to be done on Jython. Mainly on performance. The focus up until now has been compatibility, and we've done a good job at that. In the next major version of Jython you should expect a substantial improvement of performance. Not that performance is bad now. But we know that we can do better. More on that in a later post.

Thursday, June 18, 2009

JavaOne 2009 from my perspective

It has been a week since I got back to Sweden after my trip to San Francisco for what I am sad to say was my worst JavaOne so far. Don't get me wrong, JavaOne was good. There were a lot of good sessions, and as always it was nice to meet my friends in the community. But the two previous years were better. Much of it can be blamed on the economy. And the rest can be blamed on me. To a large extent, the reason I didn't get the JavaOne experience I would have liked was that I was too busy. I had a deadline waiting for me on a project at work when I got home, and had to work quite a lot during my JavaOne week. But the fact stands: JavaOne was a lot smaller this year, only about 9000 attendees, and you noticed it.

These are the highlights from the sessions I attended at CommunityOne and then JavaOne:

  • Beyond impossible: How JRuby has evolved the Java Platform by Charles Nutter. Charles is a kickass developer and has become a really good speaker as well. It's amazing what he has done with JRuby, and this talk was a summary of the highlights from that. Including things such as the fact that the JRuby team have implemented their own regular expressions library and posix layer for doing system calls from the JVM. All of which adds up to making JRuby the fastest complete implementation of Ruby.
  • Return of the Puzzlers: Schlock and Awe by Joshua Bloch and Neal Gafter. All I can say is that all the digging around in the low levels of the JVM that I have done has paid off; I had the right answer for 7 out of 7 puzzlers, much better than the first time I attended a Java Puzzlers session.
  • Toward a Renaissance VM by John Rose. John talked about Invokedynamic and Method Handles. It was all familiar stuff, but a good introduction for the odd people who do not spend most of their time implementing languages for the JVM.
  • Meet the Java Posse. I've known the guys of the Java Posse for a while, but this is the first time that I was not doing something completely different during their BoF at JavaOne. Not much to say, listen to the podcast.
  • JSR 292 Cookbook mostly by John Rose. This is what I think was the eye opening session about Invokedynamic and MethodHandles for most people. Hands on examples, showing all of the JSR 292 goodness in action. More people should have attended. Everyone I talked to after the session who was not already familiar with JSR 292 realized that it is useful for far more things than just implementing languages.
  • How to write a distributed garbage collector by Ari Zilka and Saravanan Subbiah from Terracotta. This was the last session of the last day and probably the best session at JavaOne this year. Ari and Saravanan talked about how garbage is detected and collected in the distributed environment that is Terracotta when all references to an object could be from a different node. Very interesting stuff, I need to take a deeper look at Terracotta at some point soon.

There were also some noteworthy sessions that I missed while I was working:

  • Java Platform Concurrency Gotchas by Alex Miller. A smart guy talking about interesting stuff, too bad I missed that one.
  • The Ghost in the Virtual Machine: A Reference to References by Bob Lee. I have a feeling that there are still a few things that can be done using Weak, Soft and Phantom references, that I don't know about yet. I would also have liked to hear what he had to say about Ephemerons. Are they coming to the JVM soon btw?

Then there are the two presentations I gave. A BoF on Interface Injection entitled "Language Interoperability on the JVM Made Simple" and a Technical session entitled "Exploiting Concurrency with Dynamic Languages".

Language Interoperability on the JVM Made Simple

Since this was a Birds of a Feather session I divided it into two main sections: The first half I gave an introduction to interface injection, the status of the implementation and some ideas as to what you can do with it. The second half of the session was spent on discussions about what people want from interface injection, how they want to use it and how it should work.

The short summary is:

  • I have implemented the injection mechanism.
  • I have designed a proposal API for how the injection process should work on the Java level.
  • I have wired up interface injection to invokeinterface, there are still a few more places that would need to be able to call the injection mechanism.
  • The Reflection API for injected interfaces is still unspecified.
  • A lot of people want only explicit injection of interfaces. That is, for interfaces to only be injected when an inject() method is invoked in some API. This can be emulated by setting a flag in the injector when this method is invoked, and unsetting it at the end of the method.
  • There was also some concern about the fact that methods on an injected interface get implemented automatically by methods with the same name and signature that exist in the target class. I agree that it would be better if interface implementations could be separate for each interface in the class file format. But I don't think this problem will be as big as people might fear, since the methods have to match exactly on return type as well as parameter types.

The staff in room 102/103 were really helpful; they offered to rearrange the chairs and tables to better host the discussion I wanted to encourage about interface injection. The same cannot be said about the staff in room 104, where I had my second presentation. Although very friendly, they were not very professional. First of all, my session was supposed to be recorded, but after the session I was informed that they had forgotten to record the slides and transitions. I gave them my slides on a memory stick and they said that they were going to match them to the recording to the best of their ability and then send the result to me for review; I have yet to hear from them. On top of that, some members of the staff were constantly talking behind the stage, leaving me very distracted. Highly unprofessional of them. I was so surprised by this that I didn't even tell them to shut up.

Exploiting Concurrency with Dynamic Languages

I spent too much time at JavaOne preparing for my second talk, about how languages other than the Java Programming Language on the JVM lend themselves better to expressing concurrent programs. I really should have prepared the talk much more before I left Sweden; I believe I could have given a better presentation. The staff in the room are to blame to a large extent for the fact that I didn't feel satisfied with my performance, but half of the blame is on me. Still, the few reviews that I've heard have been good: basically that the topic was good, that I had interesting ideas and made good points, but that the talk was too short, although with good Q&A. A fair review in my opinion; it will be interesting to see the review cards when they get sent out to the speakers.

What is more interesting, however, is what I was able to conclude in my talk. It should come as a surprise to no one that closures make it much easier to express concurrent programs, and to encapsulate best practices and patterns in libraries. Other higher-level constructs are helpful here as well; please have a look at my slides for reference. I also found that Jython has a lot of room for improvement, but more on that in another blog post.
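
To make the point about closures concrete, here is a minimal Python sketch (my own illustration, not code from the talk) of how a closure can encapsulate a concurrency pattern behind a single library function, so the caller passes ordinary functions and never touches threads directly:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_map(fn, items, workers=4):
    """Encapsulate a fork/join pattern behind one call.

    The closures passed to submit capture fn and each item,
    so the caller never deals with threads or synchronization.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(fn, item) for item in items]
        # Results are collected in submission order.
        return [f.result() for f in futures]

print(parallel_map(lambda x: x * x, [1, 2, 3, 4]))  # [1, 4, 9, 16]
```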

Thursday, May 14, 2009

Twitter: #dontfixreplies

Recently Twitter changed their policy for which messages to show in each user's regular message stream. Previously, users could choose whether or not they wanted to see @replies (messages that start with an @ sign followed by a user name). With the recent change this is no longer a configuration option: no user sees @replies directed at people they don't follow. The motivation is of course that these messages really are directed at someone else, someone you are not interested in.

A lot of people got upset with this, the main objection being that @replies are a good way to find interesting connections and people.

I did not notice the change until everyone (yes, everyone) started ranting about wanting the old behavior back, since I had turned off @replies a long time ago. Why did I turn them off? Because a lot of people use Twitter as yet another IM service, and those messages have a very low signal-to-noise ratio for me. Before I turned off @replies I got about three times as many tweets in my stream, and more than 90% of them were not interesting to me. Even my current stream without @replies has more noise than signal, but the ratio is such that I can find the tweets I'm interested in.

One interesting thing I noticed with the recent change was that people's twittering mentality changed (this applies only to the people I follow, a tech-heavy crowd; I have not done, nor do I intend to do, further investigation of the phenomenon). All of a sudden the signal-to-noise ratio got much better. I was away from Twitter for about a day, a little over 20 hours. When I got back I had less than 100 new tweets in my stream; normally 20 hours yield more than 120 tweets. There were also more interesting links than usual: I have 13 tabs open with things I want to read. I.e. almost 15% of the tweets contained links that looked interesting, compared to the usual 5%. A lot of the other tweets were interesting too; I estimate the noise ratio at about 30% for this period of time. Interestingly, the tweets that fall into the noise category are mostly either rants about how bad it is to not get @replies, or, from the past few hours, tweets where people started adding stuff before their @replies to force them to be visible.

Conclusion: Twitter, please keep the current behavior for @replies. Or, if it changes back, at least make the current behavior the default (I'm not unreasonable).

Disclaimer: I haven't done the actual statistical calculations for this. It would be interesting to do, and post a few charts, but there are a lot of interesting things to do, and I don't have time to do all of them. Therefore the figures in this post are just from the general feeling I get from my Twitter stream.

Wednesday, April 22, 2009

Python-style generators in Scheme

I have finally gotten back to working on interface injection in OpenJDK. As before I get a few breaks when I wait for stuff to compile. These breaks are too long to just sit around and wait, but too short to context switch to another project. So what I end up doing instead are small hacks. And since I, once again, have decided to try and post stuff here more regularly, I decided to write a post about a hack I did yesterday.

A long time ago I tried to implement Python-style generators in Scheme, and obviously I am a better programmer now, since this time I was able to just sit down and write it.

The aim is to be able to write code like this:

(define-generator (iterate lst)
  (define (iter lst)
    (if (not (null? lst))
        (begin (yield (car lst))
               (iter (cdr lst)))))
  (iter lst))

And then use such a generator in a Python-like for-loop, like so:

(for (element (iterate '(a b c)))
  (print element))
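
For comparison, here is roughly the Python code whose behaviour the Scheme version aims to reproduce:

```python
def iterate(lst):
    # A Python generator: yield suspends the function and hands
    # the value to the caller, resuming on the next iteration.
    for element in lst:
        yield element

for element in iterate(['a', 'b', 'c']):
    print(element)
```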

The code that enables this is, at its core, two syntax definitions: the generator definition syntax and the for-loop syntax:

(define-syntax explicit-generator-definition
  (syntax-rules (yield)
    ((explicit-generator-definition (yield exit) params . body)
     (lambda params
       (lambda (done loop)
         (define (exit . values)
           (call/cc (lambda (next) (apply loop (cons next values)))))
         . body)))))

(define-syntax for
  (syntax-rules (break continue)
    ; multiple valued generator version
    ((for (break breaker) (continue continuation)
          ((variables ...) generator) . body)
     (call/cc (lambda (breaker)
       (generator breaker
         (lambda (continuation variables ...)
           . body)))))
    ; single valued generator version
    ((for (break breaker) (continue continuation)
          (variable generator) . body)
     (call/cc (lambda (breaker)
       (generator breaker
         (lambda (continuation variable)
           . body)))))
    ; enable ignoring break and continue
    ((for (continue continuation) (break breaker) (target ...) . body)
     (for (break breaker) (continue continuation) (target ...) . body))
    ((for (break breaker) (targets ...) . body)
     (for (break breaker) (continue ignored) (targets ...) . body))
    ((for (continue continuation) (targets ...) . body)
     (for (break ignored) (continue continuation) (targets ...) . body))
    ((for (targets ...) . body)
     (for (break _break) (continue _continue) (targets ...) . body))))

As you can see, the loop construct has the ability to break the loop and jump to the next iteration by explicitly naming the break and continue continuations. The fact that these have to be named explicitly is a good thing, since it enables you to choose different names for the break/continue continuations of different loops when you nest them. You can of course name them break and continue, respectively, like so:

(for (break break) (continue continue) (element (iterate '(a b c)))
  (print element))
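
Incidentally, this is one place where the Scheme construct is arguably more flexible than Python itself: Python's break and continue are keywords that always target the innermost loop, so breaking out of an outer loop from inside a nested one needs a workaround, typically a flag or an exception. A small illustrative sketch (the Found exception and find_pair are hypothetical names of my own):

```python
class Found(Exception):
    """Used as a labelled break for escaping nested loops."""
    pass

def find_pair(matrix, target):
    try:
        for i, row in enumerate(matrix):
            for j, value in enumerate(row):
                if value == target:
                    # Acts like breaking out of the *outer* loop,
                    # which a bare `break` cannot do.
                    raise Found((i, j))
    except Found as e:
        return e.args[0]
    return None

print(find_pair([[1, 2], [3, 4]], 3))  # (1, 0)
```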

The generator definition also needs the yield continuation to be named explicitly. This is usually not what you want, so instead we define a convenience syntax for defining generators in the form of an "unhygienic" macro (using the define-macro form for unhygienic macros supported by most Scheme implementations, for example PLT Scheme, which I used):

(define-macro (generator arguments . body)
  `(explicit-generator-definition (yield yield) ,arguments ,@body))

(define-macro (define-generator signature . body)
  `(define ,(car signature)
     (explicit-generator-definition (yield yield) ,(cdr signature) ,@body)))

These macros were what prevented me from completing this when I first tried a few years ago. I started with them, and wanted them to be portable across all Scheme implementations, but unhygienic macros are not defined in the R5RS standard. What I should have done is what I did now: start with the interesting stuff and add just a little implementation-dependent code for the final touches.

Now we can use these macros to define a range function that behaves in about the same way as the xrange function in Python:

(define-generator (range . params)
  (define (range start stop step)
    (if (and (> start stop) (> step 0))
        (values start (lambda (current) (<= current stop)) (- step))
        (values start (lambda (current) (>= current stop)) step)))
  ; parse the arguments based on the number of them
  (call-with-values
    (lambda () (cond
      ; 0 arguments ->
      ((null? params)
       ; infinite generator from 0
       (values 0 (lambda (current) #f) 1))
      ; 1 argument: count ->
      ((null? (cdr params))
       ; generator with count elements, starting at 0
       (range 0 (car params) 1))
      ; 2 arguments: start and stop ->
      ((null? (cddr params))
       ; generator over [start, stop[
       (range (car params) (cadr params) 1))
      ; 3 arguments: start, stop and step ->
      ((null? (cdddr params))
       ; generator over [start, stop[ with a given step length
       (range (car params) (cadr params) (caddr params)))
      ; more arguments -> error (by passing too few values to the consumer)
      (#t (values))))
    (lambda (start stop? step)
      (do ((current start (+ current step)))
          ((stop? current))
        (yield current)))))
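
For reference, here is a rough Python sketch (a hypothetical my_range of my own, not the real xrange implementation) of the argument handling above. Note that, unlike xrange, this version counts down automatically when start is greater than stop, matching the Scheme code:

```python
def my_range(*params):
    # Parse the arguments based on how many there are,
    # mirroring the cond in the Scheme version.
    if len(params) == 0:
        start, stop, step = 0, None, 1   # infinite generator from 0
    elif len(params) == 1:
        start, stop, step = 0, params[0], 1
    elif len(params) == 2:
        start, stop, step = params[0], params[1], 1
    elif len(params) == 3:
        start, stop, step = params
    else:
        raise TypeError("too many arguments")
    if stop is not None and start > stop and step > 0:
        step = -step  # count down, like the Scheme version
    current = start
    while stop is None or (current < stop if step > 0 else current > stop):
        yield current
        current += step

print(list(my_range(1, 10, 3)))  # [1, 4, 7]
```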

As a final touch we add a convenience syntax for nested for-loops as well:

(define-syntax for*
  (syntax-rules (break continue)
    ; base case - one generator - expands to single for
    ((for* (((break breaker) (continue continuation) targets ...)) . body)
     (for (break breaker) (continue continuation) (targets ...) . body))
    ((for* (((continue continuation) (break breaker) targets ...)) . body)
     (for (break breaker) (continue continuation) (targets ...) . body))
    ((for* (((break breaker) targets ...)) . body)
     (for (break breaker) (targets ...) . body))
    ((for* (((continue continuation) targets ...)) . body)
     (for (continue continuation) (targets ...) . body))
    ((for* ((targets ...)) . body)
     (for (targets ...) . body))
    ; recursive case - more than one generator - expands to nested for*
    ; for* is used instead of for to avoid repeating break and continue cases
    ((for* (first more ...) . body)
     (for* (first) (for* (more ...) . body)))))
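
Nested iteration of this kind corresponds to itertools.product in Python, for example:

```python
from itertools import product

# Nested iteration flattened into a single loop, analogous to for*
pairs = [(x, y) for x, y in product('ab', [1, 2])]
print(pairs)  # [('a', 1), ('a', 2), ('b', 1), ('b', 2)]
```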

To understand how the generators work, let's look at what a generator definition and a for-loop construct expand to:

; A generator expands to a function that accepts the defined parameters
(lambda params
  ; The generator function, when called, returns a function to which
  ; the for-construct passes the break continuation and a closure
  ; representing the body of the loop
  (lambda (done loop)
    ; yield is defined as a function that takes arbitrary parameters
    (define (exit . values)
      ; when invoked it captures the current continuation, to be
      ; able to resume, then calls the loop body with the continuation
      ; and the passed-in parameters as arguments
      (call/cc (lambda (next) (apply loop (cons next values)))))
    ; The actual body of the generator is defined by the user;
    ; this executes arbitrary code and calls yield.
    . body))

; The for-construct starts by capturing the current continuation;
; this is used for breaking the loop.
; It then calls the generator with two arguments:
; the break continuation and the body of the loop.
(call/cc (lambda (breaker)
  (generator breaker
    ; The body of the loop accepts two arguments:
    ; the continue continuation and the argument that was passed to yield.
    (lambda (continuation variable)
      ; All it does is execute the body
      . body))))

Happy Hacking!