Monday, August 13, 2012

MongoDB and Play2 at ease: using Play-Salat plugin and Embed Mongo plugin

A while ago I blogged about using MongoDB with Play2 through Salat, here.

That post described how to integrate Salat easily with Play2 and gave some advice on points to take care of.


Play 2 has gained in popularity and an amazing plugin has emerged for this purpose: play2-salat.

This plugin offers a lot of configuration options to hit running instances, including replica sets! It integrates very well by applying the advice I gave in my post, but not only that: it also defines binders that let us use Casbah types (ObjectId, and so on) in our route and action definitions.

This post is not dedicated to explaining how to use it; I'd recommend browsing the project page (play2-salat), plus the wiki that points to relevant URLs.


This post is dedicated to developer teams that follow (or not...) the conventions of Continuous Delivery, especially the Single Command Environment pattern. That is, the environment must be set up with one single command... in Play2 => play run OR sbt run


Create an application that uses MongoDB as (one of) its persistence backend services, and use play2-salat to get `ORM`-like access for our objects and easy collection connections.
When running in production, of course, a MongoDB instance (or a replica set) runs somewhere and can be configured.

But in Dev?

Embed Instance

When another developer clones the related repo, knowing that it's a Play application, his first move will be to enter the directory and launch the application. > BANG <
No running instance...

So I created a Play2 plugin that uses this amazing work, which retrieves a MongoDB installer, installs it and enables us to launch/stop it... Keep in mind that MongoDB is not JVM based!

Add this plugin to the application, set up the dev configuration to start an embedded MongoDB and point Play2-Salat at it, and our developer will be satisfied... Especially if he is a Designer (the only kind of guy that adds value to any app ^^) who doesn't care about MongoDB at all!

How To

Add the plugin dependencies (used in PlayProject):

     //MY OWN REPO where the following plugin is deployed
     val skalaCloudbeesSnapshots = "Ska La SNAPSHOTS" at ""

    lazy val embedMongoPlayPlugin    = "be.nextlab" %% "play-plugins-embed-mongodb" % "0.0.1-SNAPSHOT" changing()

    lazy val salatPlayPlugin         = "se.radley" %% "play-plugins-salat" % "1.0.8"

    //DECLARE the deps
    val appDependencies = Seq(

A bit of configuration (application-dev.conf)
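The exact keys depend on your setup, but the dev configuration looks roughly like this (the mongodb.* keys are the ones play2-salat reads; the embed.* ones are illustrative, check the embed plugin's README for the real names):

```
# application-dev.conf -- a sketch, adapt names to your setup
# play2-salat settings, pointed at the embedded instance
mongodb.default.db   = "mydb-dev"
mongodb.default.host = "127.0.0.1"
mongodb.default.port = 27017

# embedded MongoDB plugin settings (illustrative names)
embed.mongodb.enabled = true
embed.mongodb.port    = 27017
```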


And the only thing that requires a bit of explanation (in conf/play.plugins):
See? Yes, the Play2-Salat plugin MUST be started AFTER the embedded Mongo plugin... of course (what an explanation, huh).
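For the record, conf/play.plugins maps a priority to a plugin class, and lower priorities start first; so the file looks something like this (class names are illustrative, take the real ones from each plugin's documentation):

```
# conf/play.plugins -- priority:class, lower starts first
400:be.nextlab.play.mongodb.EmbedMongoDBPlugin
500:se.radley.plugin.salat.SalatPlugin
```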


The one-single-file-of-33-lines plugin can be forked here.

That's All, Folks!

Friday, July 20, 2012

Gatling-Tool in SBT or Play : Sample Projects


This post is a direct follow-up of this one, where I introduced a bit of what I did in order to integrate Gatling into SBT and into Play2.
Where that post was more about the bits and bytes necessary to accomplish the task, this one will talk about how to use this mess.

Gatling for SBT

Now that we have a dedicated SBT plugin in hand, we can create a sample project that uses it (I already created one here).
In this new project, we'll need to create a file plugins.sbt which will contain the reference to the gatling-sbt-plugin.
Actually, it's the classical way to add a plugin to an SBT project (and the easiest and most semantic one).
We're now prepared to configure our build by using the pieces provided by the plugin.
At the end, we'll be able to write a first test and launch it.

Project Build

First of all, we must create a directory for our project with this basic structure:

Looking at the structure, we can see a Build.scala that will define our project build, and a project/build.properties that defines the SBT version with a single line: sbt.version=0.11.3
The last file in the project folder is plugins.sbt which... declares the plugin.
Let's put the Build.scala aside for now, and have a look at the content of the latter.
Self descriptive, isn't it? Yes, we've just told SBT to use our gatling plugin... that's all? Not yet.
Actually this line will provide us everything that has been declared in the plugin, such as the Gatling configuration keys, the command, and the basic SBT settings.
Now, we're gonna use them within our Build.scala.

Before going into specific details, note that we had to add my own repo (hosted at Cloudbees) in order to fetch the project... but you could also use the URI fetch provided by SBT...
What is really important to point out is the allSettings declaration and the import of the GatlingPlugin.
First, we reuse the default settings through Project.defaultSettings, to which we append the gatling settings using Gatling.gatlingSettings... that object comes from our plugin!
This will add all relevant keys for the "gatling-test" Configuration with default values.
At this stage we've almost finished our build definition... All that remains (and shouldn't have to... any help to figure this out is welcome!) is to add two things:
  • Declare the test framework to have access to the gatling classes and the custom Simulation :
    "be.nextlab" %% "gatling-sbt-test-framework" % "0.0.1-SNAPSHOT" % "gatling-test"
  • Declare the command :
    commands ++= Seq(gatlingTakatak)
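Putting the pieces together, the Build.scala would look roughly like this; the GatlingPlugin import path and the GatlingTest configuration name are assumptions reconstructed from the description, so treat it as a sketch:

```scala
import sbt._
import Keys._
// assumed package, see the gatling-sbt-plugin sources for the real one
import be.nextlab.gatling.sbt.plugin.GatlingPlugin._

object ApplicationBuild extends Build {

  val skalaCloudbeesSnapshots = "Ska La SNAPSHOTS" at "" // my repo, URL elided

  lazy val root = Project("gatling-project", file("."))
    .configs(GatlingTest) // register the gatling-test configuration
    .settings(Project.defaultSettings ++ gatlingSettings: _*)
    .settings(
      resolvers += skalaCloudbeesSnapshots,
      // the test framework, scoped to the gatling-test configuration
      libraryDependencies +=
        "be.nextlab" %% "gatling-sbt-test-framework" % "0.0.1-SNAPSHOT" % "gatling-test",
      // the takatak command provided by the plugin
      commands ++= Seq(gatlingTakatak)
    )
}
```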
After having wired everything up in your project (don't forget to add the GatlingTest configuration...), you're very close to writing your first test.

Gatling Conf

This section will cover the other folder, src, wherein reside the classical main and test folders.
But there is another one, gatling-test, that is unknown at this stage.
In fact, this folder will be the root of our Gatling tests, holding the configuration file (I provided a basic one in the sample app) and the future simulations that you'll write.

The first is dropped in conf/gatling.conf, while the latter go simply under simulations.

Write your first test

At this stage, I'll assume that you already know the Gatling API... Nah, the Gatling DSL is sufficiently self descriptive (otherwise check the wiki).

As you can see, it's very very easy (that's why I like Gatling-Tool).
All we had to do, since we're benchmarking google, is to implement the apply method of a simple scenario within a class (which must have an empty constructor... a limitation for now) that must extend our custom Simulation (yeah! I know... I'll choose another name for it, that's why GSimulation is needed).
Note the dummy implementations of the interceptors, which aren't needed here.
This Simulation must be located somewhere under the simulations folder.
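For the google benchmark described here, the simulation would look something like this; the Simulation base trait comes from the test framework, and the Gatling 1.x DSL calls are from memory, so the details may differ slightly:

```scala
package simulations

import com.excilys.ebi.gatling.core.Predef._
import com.excilys.ebi.gatling.http.Predef._
// the custom base trait from the test framework (assumed path)
import be.nextlab.gatling.sbt.plugin.Simulation

// must have an empty constructor (current limitation)
class GoogleSimulation extends Simulation {
  // dummy interceptors: nothing to start or stop for an external target
  def pre = {}
  def post = {}

  def apply = () =>
    scenario("Hit google")
      .exec(http("home").get("http://www.google.com"))
      .users(10)
      .ramp(2)
}
```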

Now, we can launch it and see where are stored the results.

That's enough boy...
Let it shot the server and generate the result.
Done? ok.

Now that all bullets have been shot, you can go into the folder gatling-project/target/gatling-test/result.
See? True! Every run will create a specific folder under this one, named run-${scenario-name}-time.
Jump into it and locate the index.html file, open it in your browser and see the magic happen.
NB: check this page for information about the reports.
So far, so good. But we're writing web apps and we want to stress them... and we all know that the google server won't crash (that easily).
Let's move to Play2 (or you can write your own app w/o Play2 using Spray or another Java framework maybe, then mimic the following).

Gatling for Play2

For this part, I also wrote an application that makes use of the "plugin" introduced in the previous post.
It's quite complicated, not yet polished and might be a good topic for a future post, because I tried to make an application based on Event Sourcing with some workflows and usages of pure functional structure like Lens, Writer, State, Monad and so forth.

But let's stick to the actual topic: if you fork the project and browse its build configuration a bit, you'll see that it is close to the one we just talked about. Still, some particularities are present...


First of all, since Play2 has its own conventions for source folders (app and test under the root), we'll redefine those folders for gatling as well.
In order to do that, we must adapt the following configuration keys:
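The gist with the keys isn't inlined here, but the idea is to rebase the gatling directories onto the project root; modeled on SBT's standard keys, something like:

```scala
// skip "src": gatling tests live in <root>/gatling-test, as in Play's layout
sourceDirectory in GatlingTest <<= baseDirectory / "gatling-test",
scalaSource in GatlingTest <<= baseDirectory / "gatling-test" / "simulations",
resourceDirectory in GatlingTest <<= baseDirectory / "gatling-test" / "conf"
```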

Actually, yeah, we've just skipped the "src" folder. Anyway. 
So far, so good... 
I heard you... 
But, did you know that we're already done?
Allez, let's write a test.


If you started from scratch (without forking the sample app) you can now stress test your index page that will render the default welcome page defined in Play2.
See the below Gist for that, or its follower for a more complicated one using the app.

Look what I've done Dad! 
In the Simulations, all we had to do is simply declare an available port on which the framework can start the server, and the FakeApplication it needs to have access to the routing.
Then when we had to build a request path, we just had to do the same as in our templates, that is, use the routes object under the controller package. Amazing!
Furthermore, we can also reuse our type-checked Form to build our feeder. Wow!
But, at this stage, there is still some boilerplate (parameters). Boo.


And now, it's time to stress test our server and see how it'd behave in production. (Ok, that's not 100% true, since Gatling is running on the same machine...)

For that, just do the following:

Wait. Again.

Now, go to your result directory (the path hasn't changed). Open the report in your browser and let the shine come in.

Wrap Up

At this stage, with only a few small injuries, we can use Gatling-Tool without having to install or trick stuff out in order to get information about our web app.
This using either a full SBT application or a Play2 based one.
I know that there is still a lot to do, but at least the basic features are there, and I'll be happy to see anyone forking it, scrapping it if necessary, and letting me know.

Last words

I hope you liked these posts; otherwise I'd like to apologize, because I did all the projects and the blog posts on the train, half asleep, every morning and evening, back and forth from my current mission's workplace.

Now I'm gonna sleep in this train...
Oh no, I can't...
I've to write my book now.

Gatling made easy for SBT or/and Play2


These last few weeks, I took some time to understand one level deeper how SBT works and what it can provide. Since this post is not related to this learning trip (which went along existing blogs and wikis), I'll jump directly to the idea this new understanding gave me.
A while ago I started (and paused) work on Gatling-Tool to have it integrated with Play 2.0 (see these posts here and here); this work has been refactored to better integrate with SBT.

What we'll find in this post

In this post, I'm gonna give you all the tools to stress test your Web based application, whether built upon Play 2.0 or only using SBT; by directly starting to write your tests rather than having to configure stuff or start other services... So this post is composed of two parts:
  • How I built these tools (can be skipped -- definitively)
  • How to use these tools (it's probably the useful part)

Gatling shot 3 side projects... for good

This part relates to what was necessary to integrate gatling-tool easily with SBT, and then with Play2.

Gatling SBT Test Framework

This project (on github) has a sole and simple goal: to implement the Test Interface (see) used by SBT to integrate new Test Frameworks (as was done for ScalaTest, Specs, JUnit, and so on). This project contains only 2 classes and one convenient trait.


This class is one of the helpers that avoid boilerplate in the future, because it starts the GatlingConfiguration stuff needed by Gatling-Tool to execute. This class requires two things: the path to the gatling configuration file and the path to the folder where the stress results will be stored.


This trait is another tool in hand that extends the com.excilys.ebi.gatling.core.Predef.Simulation provided by Gatling, in order to add two interceptors to it: pre and post. Those interceptors will be really useful in the future, in order to start/stop a server for instance (e.g. a FakeServer in Play2).
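In other words, the trait boils down to something like this (reconstructed from the description; the real code is in the gatling-sbt-test-framework sources):

```scala
package be.nextlab.gatling.sbt.plugin

// Gatling's Simulation, plus two hooks around the run
trait Simulation extends com.excilys.ebi.gatling.core.Predef.Simulation {
  def pre: Unit  // called before the scenario executes (e.g. start a server)
  def post: Unit // called after the run (e.g. stop the server)
}
```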


This class is the real processor of the project. Actually, it's also the implementation of the interfaces declared by the SBT test framework, which are Framework, Runner2 and some Fingerprints. Basically, these are the tasks done there:
  • Create the gatling bootstrap based on sys properties
  • Declare fingerprint to discover Stress Test based on the parent class being be.nextlab.gatling.sbt.plugin.Simulation
  • Create the stress tests instances (only classes are handled for now) by reflection
  • Call the pre interceptor
  • Execute the stress test using the gatling api
  • Call the post interceptor
  • Generate the reports using the gatling api

Gatling SBT Plugin

This project (here on github) aims to provide the basics to use the test framework easily within an SBT project.
Its intent is to have testers able to write stress tests directly, just after having imported the module in SBT.

It basically provides an SBT Configuration (gatling-test) that extends SBT's Test one, and one Command (takatak). The other goodies are settings which are common to all such stress-tested projects using gatling.
Those settings are essentially the conf file path, the result directories, the libraries (gatling ones principally) and so on. The last important thing it does is to add the gatling test framework to the list of test frameworks supported out of the box by SBT.

Gatling Play2 Plugin

This misnamed project (no longer a Play2 plugin, nor an SBT one... but it's ok for now) brings only one little thing. It comes with a base implementation of the custom Simulation declared in the Gatling Test Framework.

This base implementation is Play2 specific: it creates a fake server and starts it in the pre interceptor, stopping it in the post one.

This will ease further tests in the Play2 environment.
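Conceptually, it is just the pre/post pair wired to Play's test helpers, along these lines (a sketch; the class name and default port are made up):

```scala
package be.nextlab.gatling.play

import play.api.test.{FakeApplication, TestServer}

// Play2-specific base: run the stress test around a fake server
abstract class PlaySimulation(port: Int = 3333)
    extends be.nextlab.gatling.sbt.plugin.Simulation {

  lazy val server = TestServer(port, FakeApplication())

  def pre = server.start()  // boot the fake Play server before shooting
  def post = server.stop()  // shut it down afterwards
}
```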

Save Point

So far so good; if you've read this far you can now navigate the small projects that were necessary for what we'll discuss in the next post...

This one is already too long...

Saturday, June 16, 2012

Type-safed Composable Interceptors in Play2.0

This post is just a quick follow-up of this post, which introduced my latest utility for Play 2.0.
I recommend you have a quick look at it first.

Type-safe composition of interceptors : Premises

Briefly, we'll just see how the first "future work" item has been addressed; that is, avoiding boilerplate in interceptor composition.

A quick recap: when we had to compose such Interceptors, we had to take care ourselves of the validation results and the combined result (tuple or whatever).
The real problem lurking under the sea here is that tuples are not easily generalizable (no append method, roughly).

So I decided to use the Shapeless library (thx to @milessabin).

Shapeless has an amazing core structure that enables type-and-value chaining (somehow). The HList type is a kind of list, but each element has its own value and its own type. For instance, it has the head value and the Head type on top of the stack. Here is the kind of stuff we can do with Shapeless:

val | = false
val thisIsNotAPipe = "this" :: 15 :: false :: "a" :: | :: HNil

> thisIsNotAPipe: shapeless.::[java.lang.String,shapeless.::[Int,shapeless.::[Boolean,shapeless.::[java.lang.String,shapeless.::[Boolean,shapeless.HNil]]]]] = this :: 15 :: false :: a :: false :: HNil

>Type-safe list of stuffs<

In Action...

While trying to use HList generically, I had a problem with the implicits that are needed to prepend two lists, but StackOverflow brought me the answer here.
I'm not gonna tackle here how I did it, but I'll demonstrate what is now possible with the new composition functionality added to Interceptors.

First of all, I had to declare an Interceptor for HList and the related implicit conversion. After which, I added three methods:

  • hlist: this method on Intercept is able to convert a classical one into an HList one
  • ~::~: this one is available to compose any interceptor that is not defined using HList with an HList one. It will create a new Interceptor with the newly composed HList as result
  • ~:::~: this one enables composing two Interceptors defined with HList; the result will be the concatenation of the two HLists.
Note: concatenation of HLists preserves the type sequence; actually we can see it as concatenating two lists of values and two lists of types.

Let's see how we can deal with them:
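The gist isn't inlined here, so here is a rough, pseudocode-level sketch of what the composition gives (the interceptor values are placeholders):

```
// user : Intercept[String], id : Intercept[Int]  -- two classical interceptors
// rest : Intercept[String :: Boolean :: HNil]    -- one already HList-based

val combined = user ~::~ (id ~::~ rest)
// combined : Intercept[String :: Int :: String :: Boolean :: HNil]
// ~:::~ would concatenate two HList-based interceptors the same way
```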

Easy, no? Combine interceptors and use compile-time type-checking to validate the required kinds of items.

Without boilerplate

Friday, June 15, 2012

An Attempt of Play2.0 Action Interceptor

In this post, I'm gonna introduce a piece of code that I built with the help of two things:
the Security object provided by Play 2.0 out of the box, and somehow the Secured one provided by the TypeSafe plugin's util.

The idea here is to extend the existing functionality, which only allows providing one Option[String] as result.
What I wanted at first was to satisfy my use case, which is: a lot of actions are secured and I need a username and its id, where id is an Int.

Here we are: the current Security and Secured don't permit me to have several extracted values, or to have an Int.

That's why I created this project :

Steal help

The idea is to enable any action to be preceded by some interceptors that steal values either from the request, the cache, the database (or...) and pass them as the parameters of a closure that outputs an Action.

But we must also take care of the cases when something goes wrong during the stealing.


This trait is the core of the solution, it defines:

  • the stealing operation: a function that takes (currently) the request and outputs a validated output (Validation from ScalaZ)
  • the err callback: a function that takes the request and the failure (raised when computing the stolen value) and outputs a valid Result to send back to the client.
  • apply: a closure that takes the result (not a Validation) and outputs an Action.
At this stage, and looking at the code below (I've omitted the apply impl because it's not important here -- just wrapping and unwrapping), it's fairly simple to see how to define such an interceptor.

I provided the simplest implementation of this trait: a case class extending it and defining the two callbacks as fields. Like so:
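Reconstructed from the description above (the failure type and exact signatures are approximations of the project's real code):

```scala
import play.api.mvc._
import scalaz.Validation

// F = failure type, ST = the stolen value type
trait Interceptor[F, ST] {
  // the stealing operation: a validated value computed from the request
  def steal: Request[AnyContent] => Validation[F, ST]
  // the err callback: turn a stealing failure into a client Result
  def err: (Request[AnyContent], F) => Result
  // the closure over the stolen value, producing an Action
  def apply(f: ST => Action[AnyContent]): Action[AnyContent]
}

// simplest implementation: the two callbacks become fields
case class Intercept[F, ST](
    steal: Request[AnyContent] => Validation[F, ST],
    err: (Request[AnyContent], F) => Result
) extends Interceptor[F, ST] {
  // just wrapping and unwrapping, as said above
  def apply(f: ST => Action[AnyContent]): Action[AnyContent] =
    Action { implicit request =>
      steal(request).fold(e => err(request, e), st => f(st)(request))
    }
}
```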

So now we're armed to do stuff like that:

Here we defined two interceptors that take a String from the cache and an Int from the cache as well (just for illustration). After which, we combined them into another Intercept using a for-comprehension.

So far, so good; now, how to use such a combined interceptor within an Action? Check this out:


With the help of Monoid (from ScalaZ), and when the case permits it, I defined an implementation of Interceptor that appends the successive results one after another, reducing and simplifying the composition. Like so:

Note: we used the classical case class, thanks to an implicit def that wraps the Intercept into the Monoid implementation, using the type class bound declaration (see).

What will come next

The next step, which I've already started, is to try using Shapeless to avoid the composition boilerplate. Things are ongoing. Stay tuned for that.

One step further, I'll add another parameter to the steal callback: the optional result of the previous computations of other interceptors. That in order to combine them at both the function and result levels.

And probably, add all the boilerplate myself that creates the tuples out-of-the-box in the compose function.

That's all folks!

Monday, June 11, 2012

How Monad Transformer saved my time


These days (this week-end), I wanted to put some work into a Neo4J REST driver that I'm writing for Play 2.0 in Scala.
The only thing I wanted, actually, was to make the current embryo more functional. I mean, I was throwing exceptions (frightening... huh!).
Since this driver is meant to be fully asynchronous (man... it's http, it MUST be) and non-blocking (Play's philosophy), I was heavily using the Promise thing via the WS api of Play.
This is the kind of thing I had (briefly):

  def getNode(id:Int) : Promise[Neo4JElement]

Where Neo4JElement stands for the wrapper of every Neo4J REST response (Json). Hence, it can be either a Failure response (Json with stacktrace and message), or a Node, or... it can throw an Exception (brr...) when the service crashed (f.i.).

Hmm, not so intuitive, and it goes against the functional paradigm that commands: "you can't ever introduce side-effects, boy!". An exception that blows up in my face is one side effect (head-aches, ...).

Diego Validation to the rescue

Validation is a very simple thing: it holds either a value, or an error...
Ok, why not just Either then? Actually, you're right, but the Validation I took from the ScalaZ library contains a lot of things very helpful for the purpose of validation. But if it worries you, just replace Validation with Either in your mind from here on.
Now, here is the getNode signature:

  def getNode(id:Int) : Promise[Validation[Aoutch, Node]]

Isn't it more intuitive? For sure, you get back our relevant type in the signature : Node. Great!

So far, so good; now what the heck is Aoutch? Something that hurts... and what could hurt a runtime execution? Exceptions, yeah! Thus, Aoutch is just a shortcut for Either[NonEmptyList[String], Failure]. We can see that we represent with one single type both an unexpected exception and a failure (missing node, f.i.).


If you don't know what a Monad is, from here on think of a structure that can evolve in a for-comprehension (in Scala it must implement flatMap and map).

Promise and Validation are Monads, and they're used one within the other. But what really interests us is the leaf of the chain: Node. That introduces some boilerplate code when you try to sequence actions like this:

See... yes, we have to skip the first level (validation) to be able to work with the meaningful objects.
But still, we can extract some pattern... no?

Monad Transformer

The pattern that we can extract is a kind of two-level composition. I did this composition myself, trying to figure out what would be possible.
It was successful, but I had to introduce a new type and a new method that was like a flatMap.
So I asked on StackOverflow ^^ (and it was my first question, yeepeee). You can find it here. So I won't explain how I did it, because the question explains it. But the real question was: is there a well-known functional construction for this problem?
Thanks to @purefn, I learned that there is!

It was time to use Monad Transformer.


Briefly (and very roughly), a Monad Transformer is a construction that is able to transform an M[N[_]] into a P[_], where M, N, P are Monads.
I won't explain here how it does that, because it would be long, but here is a good link (you have to understand Haskell a bit, sorry).
With the help of such a transformer, you get back the opportunity to use a for-comprehension... with the twice-wrapped type as bound value.
Here is the transformer for our Promise[Validation[E, A]]:
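To make the mechanics concrete without the Play and ScalaZ dependencies, here is a self-contained stand-in where Future plays the role of Promise and Either the role of Validation; the real transformer has the same shape:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Future[Either[E, A]] made monadic: for-comprehensions bind on A directly
case class ValidationPromised[E, A](promised: Future[Either[E, A]]) {
  def map[B](f: A => B): ValidationPromised[E, B] =
    ValidationPromised(promised.map(_.map(f)))

  def flatMap[B](f: A => ValidationPromised[E, B]): ValidationPromised[E, B] =
    ValidationPromised(promised.flatMap {
      case Left(e)  => Future.successful(Left(e)) // short-circuit on failure
      case Right(a) => f(a).promised              // chain the next async step
    })
}

// sequencing two "calls" without touching the Either level by hand
val linked = for {
  a <- ValidationPromised[String, Int](Future.successful(Right(1)))
  b <- ValidationPromised[String, Int](Future.successful(Right(a + 1)))
} yield a + b
// Await.result(linked.promised, 1.second) yields Right(3)
```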

And how we can now link nodes:

Awesome, no? We get 3 async and non-blocking calls, totally type checked and resistant to exceptions and failures... in 5 lines.

Future work

In the SO question, @purefn told me that scalaz 7 (snapshot) defines (fully) this kind of Validation Transformer.
Why didn't I use it yet?

  • I'm trying not to use snapshots (not a good reason, but still)
  • In order to use ValidationT, I'd first have to create an instance of the Monad type class, because the flatMap signature needs it in the context.


I love functional (even if I'm still learning -- again -- the basics ^^)

Sunday, April 22, 2012

Gatling and Play2.0: continued

This blog entry is a follow-up of this entry where I introduced a spike I did on Play 2.0 stress tested using Gatling-tool.

At the time of writing the above entry, I had to quickly hack Gatling to use Akka 2.0, as Play 2.0 uses it, and I didn't want to have clashes.

But, thanks to Stéphane Landelle, Gatling is now Akka 2.0 enabled (since 1.1.2).

So it was time to give the plugin's embryo a refresh. For that I used another project, which aims at testing the Neo4J REST plugin for Play2.0 that I'm also writing. Incidentally, I've also introduced what I did in that project in this post.


Using Gatling, I first tested what I was able to stress test:

  • simple (GET) urls
  • mutating (POST) urls, using the server-side Form under the hood to compile the expected data.
  • duration tests (stress testing over a given period)
What I liked so much, even in this embryo, is the ease of creating stress tests when coupling Play2.0's functionality with Gatling's DSL.


The Gatling plugin I'm currently building is located on github here and is built with sbt.

But it's based on version 1.1.4-SNAPSHOT of Gatling's libraries (due to some fixes Stéphane did "for me" while he was at Devoxx FR, isn't he kind!!!).
At the time of writing, you'll have to build gatling and gatling-highcharts locally using maven (quite fast!).

How to

Set up

Having created a Play 2.0 app, you now have the full power of sbt in your hands (especially if you've installed the TypeSafe Stack 2.0). So, to stress test your app, you'll first have to build the plugin I mentioned above.


First of all, clone/fork this library project on github, after which you'll just have to run sbt publish-local in the related folder. That's it, you now have the plugin in your local repo.


In your Play 2.0 app, you are now able to declare the plugin as a `test` dependency, using the following:
That's it...



Personally, I use org.specs2.Specification for testing; to use it with the plugin (at the time of writing) you'll have to create the following:
This listing creates a fake server to enable urls to be tested, and some functions to deal with it.
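The gist isn't shown here; a skeleton of such a spec, reconstructed from the description (port and helper usage are assumptions), could be:

```scala
import org.specs2.Specification
import play.api.test.{FakeApplication, TestServer}
import play.api.test.Helpers._

class StressSpec extends Specification { def is =
  "Stress testing the app" ^
    "hit the index page" ! running(TestServer(3333, FakeApplication())) {
      // launch the gatling simulation against http://localhost:3333 here
      success
    } ^ end
}
```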

To create the server, the plugin defines a Util object, which also defines rough helpers to be used in stress tests.

A full spec should look like the following:

Simple Url

You saw, in the gist provided above, that Util also defines a way to simply define a gatling Simulation (basically a Function0 that returns a scenario builder: the Gatling DSL result).

Having that in hand, here is the fragment to stress test the root url:
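The gist being external, here is the spirit of it (Gatling 1.x DSL from memory; the users/ramp numbers match the text below):

```scala
// hit the root url with 10 users, ramped by 2
val indexSimulation = () =>
  scenario("GET /")
    .exec(
      http("index")
        .get("http://localhost:3333/")
        .header("Accept", "text/html")
    )
    .users(10)
    .ramp(2)
```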

As you can see, it's pretty simple, but nothing can really be checked in the body of the specs (I'm working on getting relevant information to check).
But at least you can run this test to hit the root url 10 times, ramping the number of users by 2.

Running it (sbt test), you'll get a new gatling folder in your target folder containing a results directory where all stress results are located in an html report (with great charts).

And all you had to do is to define the request headers and the url...

Mutating the server

If you have controllers that mutate the server, you have probably defined POST urls, which use the Form feature provided by Play 2.0.

Having done so, you'll be able to stress test them very easily using a Map, a JsObject or, in the best case, your Model.

Let's say we have a controller controllers.Stuffs that uses the case class models.Stuff. The controller defines a stuffForm and a createStuff action.

Your stress test can now be defined like the following:
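The gist lives on github; in substance it does something like this (the Stuff fields are invented for the illustration):

```scala
// reuse the type-checked Form: nothing hardcoded,
// neither the endpoint nor the parameter names
val stuff  = models.Stuff("a name", 42) // assumed fields
val params = controllers.Stuffs.stuffForm.fill(stuff).data.toSeq

val createSimulation = () =>
  scenario("POST a Stuff")
    .exec(
      params.foldLeft(
        http("create stuff").post(controllers.routes.Stuffs.createStuff().url)
      ) { case (req, (name, value)) => req.param(name, value) }
    )
    .users(5)
    .ramp(1)
```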

In the gist, you can see 5 points to note; they are key clues for creating reusable stress tests.

Nothing is really hardcoded, neither the path to the http endpoint nor the parameter data.

That's http stress testing using type-checked requests. Cool, isn't it? Hey man, we got back our lovin' type checking (one of the best scala features).

Heavy check, duration based

This part is more a Gatling feature highlighting.

This last example is a heavy test that loops over a configuration for a given period. This tells you how many users could use your application.

Such a test might be shaped like the following:
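Roughly, and again with the Gatling 1.x DSL from memory, a duration-based test looks like:

```scala
// loop the request chain for a fixed period instead of a fixed count
val heavySimulation = () =>
  scenario("hammer the index")
    .loop(chain.exec(http("index").get("http://localhost:3333/")))
    .during(30, SECONDS)
    .users(50)
    .ramp(10)
```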


For now, it's NOT an official plugin, neither a gatling nor a play 2.0 one, but discussions are under way for that... stay tuned on twitter or here.

Wednesday, April 18, 2012

Still Playing... but new players are in


Because I love to play, with Play 2.0 and scala (still learning).
But also because I'm currently investigating technologies that I might choose for a new product line currently being built in my company, NextLab (in Belgium).

With ?

This time, I've played more with client side libraries and frameworks (no, I won't post yet another entry on JQuery...), but I also tried how easy it is to create totally async (thus parallelized) code using server side ones.
The technologies that will be quickly introduced in this post can be found hereafter, but everything has been packaged in a github repo, with a running instance on Heroku.

At first the current spike was dedicated to the slowly growing Play 2.0 Neo4J REST plugin we are creating at NextLab. But, in order to demonstrate what can be done by coupling these two technologies, I've extended the spike's scope to something more fun.

So let's introduce the technologies:

Client Side

Twitter Bootstrap
An amazing toolbelt helping build responsive websites without having to bother with boilerplate HTML and CSS.

Even if it is neat, complete and well thought out, Bootstrap comes with another handy factor... it has been built using LESS. And by chance (I know, chance is not part of the equation) LESS is supported by Play 2.0 by default.

Just a note: LESS will let you reuse color codes, mixins, etc. that Bootstrap has already defined.
As we'll see below, we'll have to talk to the server over WS (json), upon which a REST interface has been added for the meaningful resources.

That's where Spine.js comes into the game. This lightweight MVC library brought me the small tools I needed for fetching and saving resources without having to write even a single request by hand...
d3.js
Probably my favorite (that might be my mathematician part talking), this powerful Data-Driven Documents toolkit has taken the right thing by the right end.

Its functional approach of decoupling data from the document, and linking them using layouts, helps you concentrate on each part of the data usage independently:
  • the mapping from incoming/rough data to a representative data structure
  • the mapping from represented data to the document (most of the time, one datum for one element)


The communication layer is of course HTTP, with a little help from the emerging HTML5 features; one in particular, Server-Sent Events (here is a great intro).

This stable HTML5 feature comes with the handy ability to let the server send events to connected clients without hacks (read: Comet or polling).

Open a connection, push data, and that ONLY when the server needs to.

Server Side

Play 2.0 (Scala)
Of course... But I used some "advanced" features like:
Async, a Play 2.0 feature that lets the server deal with the tasks it has to schedule.

That is, when you think the server might have to wait for actions to be executed before being able to respond to a request, Play 2.0 lets you, really simply, create Async (non-blocking) requests.

Very handy when you have to call third-party services, for instance...
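As an illustration of Async, here is the typical Play 2.0 shape of a non-blocking action calling a third-party service over WS (the Neo4J URL is just an example):

```scala
import play.api.libs.ws.WS
import play.api.mvc._

object Neo extends Controller {
  // the WS call returns a Promise[Response]; Async turns the
  // resulting Promise[Result] into a non-blocking request
  def root = Action {
    Async {
      WS.url("http://localhost:7474/db/data/").get().map { response =>
        Ok(response.json)
      }
    }
  }
}
```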
Iteratee: the only way I would consider from now on to consume data. Iteratee is a fairly difficult thing to understand (read this wiki) but it gives you the same smart decomposition as in d3.js, that is, decoupling the management of a source, its useful representation and its computed result.
Akka: a powerful, actor-based, asynchronous task parallelizer and scheduler library.

Actually I needed a request to launch async tasks (you know, like event generation and dispatching)... so what else!
Neo4J: a database for storing graphs... let's use a graph database.

Within NextLab, we started an open-source Play2.0 plugin for calling the REST api provided by a Neo4J Server (helpful on Heroku). It's still emerging, and continues to evolve a lot because features are implemented on the fly (on need), and a re-pass is forecast to add a meaningful DSL (like FaKod did).

Why did we choose to implement a distinct plugin instead of wrapping the Java library?
  1. it would be yet another library, which brings me too much (I need REST only)
  2. I want requests to be async and under control

Libraries Repo

See below, we'll use Heroku. So in order to deploy this application, which uses our plugin, I needed to publish it somewhere.

This is what Cloudbees can offer. Among other great things like git repos, CI and so on, Cloudbees provides you, for free, four Maven repositories that you can make public if you wish.

So I used sbt to publish on my "snapshot" repo on Cloudbees, letting Heroku have access to it for downloading the plugin dependency.
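The sbt side of that publication boils down to a couple of settings; this is a hypothetical excerpt (the account name, repo URL and credentials path are made up for illustration):

```scala
// Build.scala / build.sbt excerpt — sketch only, adapt the URL to your Cloudbees account
publishTo := Some(
  "cloudbees-snapshot" at "https://repository-myaccount.forge.cloudbees.com/snapshot/"
)

credentials += Credentials(Path.userHome / ".credentials" / "cloudbees")
```

With that in place, `sbt publish` pushes the plugin artifacts where Heroku's build can resolve them.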


Free, reliable, easy to use with Scala and Play 2.0... who else but Heroku?

So what

I think that this post is already too long... However, I can let you play with the resulting app here.

Check out the code (and fork it) there.

Depending on the comments, I may expand this post into others to respond to potential demands (if there are any ^^).

Good play.

Oh yes, one last note: the application is there to create Stuffs which contain dummies. Stuffs are created using a simple form that must be filled in. Stuffs can be linked one to another by clicking the graph.

Please create Stuffs and links, it will be a good test for Neo4J, the plugin and Play 2.0.

Hope you've reached this point.

SMAK. hehe

Thursday, March 22, 2012

TypeSafe Stack 2.0 missing "play debug" like feature

A quick one to help players that are using TypeSafe stack instead of the Play! 2.0 distribution package.
Because I discussed some points on the groups and saw related StackOverflow entries that this post might help avoid in the future ^^.

TypeSafe Stack

With its second version, the TypeSafe stack scored a hit by integrating Akka 2.0, Play 2.0 and... its amazing console built on top of both technologies.

With the Scala IDE 2.0 (yeah, a lot of 2.0s), this stack is ready to tackle the SpringSource Tool Suite, but I don't want to make the comparison here nor explain all of these components... it would take long, and longer.

But once you've installed the stack and you want to Play! around, they recommend you use the giter8 template from the typesafehub on github (it also contains a lot of plugins, which you might want).

SBT instead of Play launcher

Using the stack, you won't have the play tool at hand to generate applications and so on, because the way to go is to use g8 and sbt.

This is not an issue but there are some points you'll need to have in mind:

  • the secret is not generated at creation: they're not far off, because the secret is only a random string, and an issue on giter8 is on-going. So, you can create a random string on your own until it is done. I've also proposed that a new sbt command might be helpful to regenerate the secret.
  • play debug isn't available: when you need to debug your Play! 2.0 app, you need sbt to activate jdwp when running. For that, there is an SBT_OPTS variable (the MAVEN_OPTS equivalent) that comes to your help. Set it with the regular options (-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=9999).
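Concretely, that means something like this before launching sbt (port 9999 is arbitrary; pick any free port and point your IDE's remote debugger at it):

```shell
# Enable remote debugging for the sbt-launched Play app.
export SBT_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=9999"
echo "$SBT_OPTS"
# then run the app as usual:
# sbt run
```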


Thursday, March 15, 2012

Gatling-Tool Plugin for Play 2.0


This post will be a kind of write-up of what I'm trying to do now, and of the tasks I've already achieved.

Since I'll soon start products in my company that will be based on technologies like Scala, Play 2.0, Neo4J, MongoDB, Heroku and misc, and since I'm a bit of a control freak, I wanted to be sure that what I'll build will match the requirements in terms of capacity.

This is where Gatling-Tool comes into the equation: it is a very powerful (and Scala based) stress testing tool, as the name recalls.

We're about to have some words on it, but first let me tell you that a GitHub repo, ready to be forked, is available with my first step towards a gatling plugin; find it here.

Gatling Tool

Gatling Tool is a cute, smart and intuitive stress testing tool for web applications, with a neat DSL for http requests and asserts.

The DSL, written in Scala and following its good conventions, aims to let anybody write stress tests; even non-programmers with a basic understanding will be able to do basic stress tests with a good learning curve.

Its integration with browsers (Firefox, through a proxy) eases the work even more, because you'll be able to record your scenarios like macros (as BadBoy does), to be replayed again and again. This is what is called the recorder.

Scenarios can be written with a custom external DSL, but they're also available as regular Scala code (internal DSL), and that is where the coupling with Play 2.0 Scala should pay off.

Integration with Play2.0

Akka 2.0

Fact: since RC-5, Play 2.0 comes with Akka 2.0 support.
Fact: Gatling is itself based on Akka (and they're right about that), but its stable version uses a previous Akka (logical, because Akka 2.0 is pretty new).

So an integration must go through an update of the Gatling-Tool to the same Akka 2.0 version, in order to be able to use both correctly in the same project (testing phase, but still).

That's why I decided to fork the Gatling-Tool on GitHub (aaah, the great world of open source), in order to switch the Akka support from 1.x to 2.0.

Even if it is true that I did it roughly (at first), it remains that it works. The two projects needed for the following are my forks on GitHub.
So fork them, clone them and build them locally with the classical mvn clean install.

You'll have a brand new version of gatling 1.1.0-SNAPSHOT.

NOTE: I had some difficult choices to make when doing the migration; some break the runtime behavior (a bit) and I'll have to discuss them further with the Gatling team. I've already been contacted by Stéphane Landelle, who told me that he was interested in the work since it was planned for the 1.2 release.
So don't be afraid, the official release will match the needs soon. (But ping me if you want more info and help me.)

Typesafe Stack 2.0

I recommend you install this brand new stack that integrates all the stuff you'll need for Scala development, including Play 2.0 projects.
Now, simply follow this link for further steps, and then create a play-scala project.


Since Play 2.0 uses sbt for building its projects, and the custom gatling library we built is in our .m2 repo, we first have to add our local Maven repo to the repositories list this way (updating your Build.scala):
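A hypothetical project/Build.scala excerpt (the project name and dependency coordinates are assumptions; adapt them to your fork's artifacts):

```scala
import sbt._
import PlayProject._

object ApplicationBuild extends Build {
  val appDependencies = Seq(
    // the custom build we installed with `mvn clean install`
    "com.excilys.ebi.gatling" % "gatling-core" % "1.1.0-SNAPSHOT" % "test"
  )

  val main = PlayProject("gatling-play", "1.0-SNAPSHOT", appDependencies, mainLang = SCALA)
    .settings(
      // point sbt at the local ~/.m2 repo where maven published our artifacts
      resolvers += "Local Maven" at "file://" + Path.userHome.absolutePath + "/.m2/repository"
    )
}
```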

Tada, now we have a Play 2.0 project having our custom gatling as dependency.


Before going into further detail on integrating Gatling as a Play 2.0 plugin, I'm gonna talk about an uncovered subject in Play 2.0 (or at least not easy to track): the Plugin feature.

Play 2.0 comes with a pretty simple Plugin integration, through a specific file in the conf folder named play.plugins. This file is meant to contain one single line per defined plugin, shaped this way: {priority}:{plugin's class path}.

But what is a plugin, finally? It is a classical class extending play.api.Plugin... simply. This Plugin trait only defines three methods, which are:
  • onStart: this adds a hook when the application starts, helpful for initializing objects.
  • onStop: cleaning the fields.
  • enabled: helpful to disable the plugin in some specific cases.
Another point that I have to highlight: it seems that such a Plugin's constructor must have a single argument, being the application itself.
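Put together, a skeleton plugin and its registration could look like this (class name and priority are made up for illustration):

```scala
// app/plugins/MyPlugin.scala — hypothetical skeleton
import play.api.{Application, Plugin}

class MyPlugin(app: Application) extends Plugin {
  override def onStart() { /* initialize objects here */ }
  override def onStop()  { /* clean up here */ }
  override def enabled = true
}
```

and one line in conf/play.plugins:

```
10000:plugins.MyPlugin
```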

Test only

Hey wait, gatling should be available in tests only?! Right!
The first way to achieve this is also the easiest: when implementing enabled, you can use the application (remember, it is part of the constructor), which has a method isTest that should toggle the plugin.

The second way is to create a Specification that starts a FakeServer (since we'll stress the entire Play 2.0 flow) and give it a FakeApplication which is defined with the plugins you wish. 1000 words replaced by one Gist:
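The Gist isn't reproduced here, but a hedged sketch of such a Specification could look like this (the port and the plugin class path are assumptions):

```scala
import org.specs2.mutable.Specification
import play.api.test._
import play.api.test.Helpers._

class StressSpec extends Specification {
  "The application" should {
    "hold under stress" in {
      // FakeApplication lets us append plugins that conf/play.plugins doesn't list
      val app = FakeApplication(additionalPlugins = Seq("test.gatling.Gatling"))
      running(TestServer(3333, app)) {
        // the plugin's onStart has prepared gatling; run the simulations here
        success
      }
    }
  }
}
```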

Gatling integration

Now you're wondering what the heck that Gatling Plugin class is, don't you?
Gatling Plugin
The Gatling plugin class, located in my repo under the test/gatling folder, extends the play.Plugin class (which defines dummy implementations of the three methods); this way I can concentrate on the only method I need, onStart.

Actually I need some initialization in order to use gatling: it needs some folders to be defined, including the one that interests us most: results.

So, the Gatling#onStart method creates ephemeral folders under the target directory (which can be cleaned) and also the needed Gatling configuration file. And that's it.

We can now stress test our app.

Gatling is able to understand scenarios written in Scala; those Scala scripts have only one constraint: being an instance of com.excilys.ebi.gatling.core.scenario.configuration.Simulation.

This trait is actually a Function0 and thus defines only one helpful method which is apply(). The latter method is the container for building the stress test that we want to execute.
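For illustration, a minimal Simulation might read like this; the exact Gatling 1.x DSL varies between versions, so treat this purely as a sketch (scenario name, url and load numbers are mine):

```scala
import com.excilys.ebi.gatling.core.Predef._
import com.excilys.ebi.gatling.http.Predef._
import com.excilys.ebi.gatling.core.scenario.configuration.Simulation

class UsersSimulation extends Simulation {
  // apply() builds and returns the configured scenarios to execute
  def apply = {
    val httpConf = httpConfig.baseURL("http://localhost:3333")
    val scn = scenario("Browse users")
      .exec(http("all users").get("/users.json"))
      .pause(1)
    List(scn.configure.users(10).protocolConfig(httpConf))
  }
}
```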

What is very convenient now is that you can write Gatling scenarios with the Scala type system checked in your favorite IDE, and they will be compiled by sbt itself when requested (and hot swapped ^^), whereas the classical Gatling workflow is to compile them on the fly, using its internal routines.

Run 'em
Having simulations written (example here), you can now ask Gatling to run them by creating a gatling runner instance on them. I won't go into deep detail because it is not the purpose here, but here is how you can do it.

See 'em in action
That's the easiest part: entering the play environment using sbt in the console, you can launch the tests by typing test.

What'll be done is:
  1. enter the specification
  2. create the fake application
  3. load the additional plugin
  4. create the gatling folder and conf
  5. configure the gatling system
  6. create the fake server on port 3333
  7. create the simulations
  8. run them
  9. generate reports on them (located in the target folder => open them in your browser and you'll see how cute they are)


There are problems for now when executing several tests, because streams are closed (while generating further reports); that comes from a choice I had to make, which is commented on github here. This is mainly related to a feature that is no longer available in Akka 2.0 (for good reasons, I'd say).

To be continued

If you want to help me going further, don't hesitate to contact me on my mailbox, or comment this post or on twitter.

What I'd like to have in the future is:
  • clear Specification for Gatling (preventing the need to define each time the server and plugin)
  • website for enabling test one by one, or any, or...
  • redirecting to results reports
  • more
  • and more

Sunday, March 11, 2012

Play 2.0 and Salat (MongoDB DAO provider)

Play 2.0 using MongoDB document storage through Salat

In this post, I'll cover some points and libraries that ease such a use case.

I won't go into deep details about Play and MongoDB, instead I'll jump straight to the MongoDB usage.

Let's talk a bit about Salat


This free library, available on github here (and deployed on an Ivy repo, so easily usable with sbt and Play), is able to deal with case classes and MongoDB as we'd do with JPA.

That is, case classes can be used directly for storing documents, by simply declaring a DAO and with the help of some annotations (not mandatory).


Salat has the notion of a Grater, which is responsible for the de/serialization of a case class. This happens through two simple functions; let's take a Grater for the type T:
  • asDBObject(t:T) : returns a MongoDB representation of t
  • asObject(dbo:DBObject) : returns a T instance based on the provided MongoDB content

The last thing to note is how to create such a Grater. What do we have to do? The answer is: almost nothing. The package com.novus.salat provides the very handy method grater[Y <: AnyRef].

What that grater method does is parse your case class, provided as the method's type parameter, and create the related Grater instance. The important thing to note at this stage is that this Grater is cached for further needs.
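A small sketch of the round trip (the User class and its field are my own example; note that this standalone snippet uses Salat's default global Context, which this post will advise against in Play DEV mode):

```scala
import com.novus.salat._
import com.novus.salat.global._
import com.mongodb.casbah.Imports._

case class User(_id: ObjectId = new ObjectId, firstName: String)

val jane          = User(firstName = "Jane")
val dbo: DBObject = grater[User].asDBObject(jane) // case class -> MongoDB document
val back: User    = grater[User].asObject(dbo)    // document -> case class
```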

So now you're already able to deal with your case classes. What is missing is the DAO part, which will ease your job yet again.


Salat provides another handy structure named SalatDAO. This trait defines all lifecycle operations that you might need for your domain objects.

Its usage is very simple: you just have to define an object that extends this trait, giving the domain-specific class and the Id type for your structure. The last parameter it needs is a reference to the collection.

Here is an excerpt I pulled from the salat wiki:
object UserDao extends SalatDAO[User, ObjectId](collection = DB.connection("users"))

Play 2.0

Having covered the basic of Salat, it's time to use it in our Play 2.0 application.
There is a cool wiki page, which I wish I had discovered before, that explains how to use Salat with Play 1.2.x. I recommend you read it, even if I'm gonna cover some of the important steps here too.


This step is easier than for the previous version of Play, because now the only two things required are:
  • add the novus repo
  • add the deps to salat
Both in the Build.scala in the project folder of your Play 2.0 app.

**Edit (on 21st June 2012)**
Before you continue this post, which explains some basics about how to deal with Salat and Play, know that you can now choose to simply move to this module.
It introduces the tricks that I'll explain below, and adds amazing functionality for Model and DAO creation.
So starting at this point, it's no longer mandatory to read this post... unless you're curious ;-)
**end of EDIT**

Here is the main thing I have to discuss: Salat makes intensive usage of a structure named Context, which holds a reference to what has been processed along the classes and structures.

Such an instance is created by default in the package object, and Salat's quick start recommends importing it along with salat and annotations. Don't!

Doing so will fail when using Play 2.0 in DEV mode (only), because of Play's cool hot-reload-on-change feature. The specific case where it will fail is the following:
when you keep a static reference to the Grater instance of a specific case class.

Why? Because of the graters' cache (needed, for sure!), which keeps a reference to the Class instance.
But this class instance might change on bytecode refresh, all the more since Play has a specific ClassLoader for that (ReloadableClassLoader).

The result is incomprehensible errors when you change your code, errors saying that there is a mismatch between classes that you haven't even changed yet...

The solution (which is referred in the wiki page I told above)

Instead of importing com.novus.salat.global._, which only contains an implicit definition of the Context object used by Salat's core system, create a new one using the correct ClassLoader.
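A hedged sketch of such a custom Context (the object and context names are mine; the key is registering Play's application ClassLoader on a fresh Context):

```scala
import com.novus.salat.Context
import play.api.Play

object MongoContext {
  implicit val context = new Context {
    val name = "play-salat-context"
  }
  // use Play's (Reloadable)ClassLoader instead of the default one,
  // so cached graters keep resolving classes after a hot reload
  context.registerClassLoader(Play.current.classloader)
}
```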

In the above Gist, we can see that we simply created a Context that refers to the ClassLoader provided by the Play app itself.

That will keep class reloading enabled, without impacting cached instances.

That's all folks!

Now you're ready to use both Play 2.0 and Salat without messing around with conversions between DSM and MongoDB and so on.
Have a good one.

Wednesday, February 29, 2012

Neo4J with Scala Play! 2.0 on Heroku (Part 9) :: Final


This post is a continuation of this post (which started there); it is the last part of a blog series that aims at using Neo4j and Play 2.0 together on Heroku.

What have been accomplished so far:

  • install Play 2.0
  • install Neo4J
  • use Dispatch
  • create model
  • create persistence service in Neo4J
  • create views and controllers
Ok, we're almost done. Let's see how to deploy the whole app on Heroku.

Heroku, here I come

But wait, who're you?

Heroku is simply one of the best cloud players of the moment. I won't talk too much about it, because I'd have to talk a lot otherwise.

But here are some very interesting features and paradigms followed by Heroku.

Process Centric

Where almost all other cloud providers bind their services to server instances, CPU flop counts, memory usage and other similar metrics, it is a fact that these aren't easily forecast and are hard to track in development phases (even if I'd encourage you to do it, though).

Heroku comes with a much easier concept, that is, Web Dynos. A Web Dyno is simply a process that can handle requests. So, what if the requests are too numerous? Just add Dynos. Note that Dynos also exist for background processes: one Dyno per worker.

Costs are very simple too: you have one free Dyno per month, and the rest is billed at a low cost per hour.

Thanks for simplicity.

Remote CLI

We've just asked how to handle more requests in an efficient way, and answered: by adding dynos.

So far so good, but how? That's where the Heroku remote CLI comes in, able to operate remotely on a deployed application's behavior.

Thus, adding a dyno is done in the console: $> heroku dynos 1

Now, alerts on performance are quickly resolved.

Thanks for rapidity.

Continuous Deployment

The paradigm followed by Heroku to deploy apps is based on Continuous Deployment.

Following it, your app should define how it must be deployed using their Procfile.

And it will be deployed automatically when the sources are pushed to the Git repo that is created for each application.

This ensures that you are able, at any time, to retrieve the sources related to the running instance (for example).

Thanks for debugging ease.


What to say? A good sdk to create add-ons, a good architecture and service level. This makes for a pleiad of powerful add-ons, including Neo4J running instances.

Thanks to be open.

Can I Play! with Heroku?

Of course, you can!

Actually, Heroku has integrated Play! since its first version, and also added Scala support some time ago.

And finally, the Play 2.0 wiki explains how to do it...

Ok, let's Go then.

Getting started

First of all, you must have registered with Heroku. Thankfully, it's free and fast. So go on and create your account here.

Having your account, you can now install the Heroku toolbelt. This will give you access to the Heroku CLI that can manage your account, apps, and app configuration.

When you're done with the installation, you just have to log in using the console command: $> heroku login

Play! app side

What your app needs is to have a Git repo and to contain a Heroku process description file.
Since everything is already explained there, I won't go into deep detail.

Create Heroku app

Since we are using Play! and Scala, we need a JVM; that stack at Heroku is called Cedar.

So, to create your app, open a shell and do the following:
heroku create my-playing-app-with-neo4j --stack cedar

Now, you have an up and running environment to setup and deploy your application. And the application will be named my-playing-app-with-neo4j.

Neo4J add-on

Ok, but I have to use a Neo4J database, not embedded (too heavy for a cloud). Do I have to install it somewhere and host it myself? Na!

Neo4J's team is actually working on an integration into the Heroku platform, and a beta test add-on is available at the time of writing.

That means that to have a running database we can use, we just have to open a shell (in our app folder) and drop the following command: heroku addons:add neo4j

You don't believe it, huh?

Since you'll need to retrieve the database url and credentials, either you go to the Heroku site and...
Na, just keep your shell and do: heroku addons:open neo4j 

Ta da!

App update

In my previous post, for the sake of simplicity I left the Neo4J database hard coded to localhost:7474.

But now, we have to update this to use our deployed Neo4J instance and credentials.

We should (must, even) have defined an application configuration property for such a parameter, but it is not what I want to illustrate here, so let's keep it simple and hard coded.

But we have to add something to Neo4J's Dispatch url: the credentials. For that we just have to do the following:
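The snippet can be sketched roughly like this; the environment variable names are my assumption of what the Neo4J add-on exposes, so double-check them against the add-on docs:

```scala
import dispatch._

// Base request towards the Heroku-provided Neo4J instance, with basic auth.
val base = sys.env.getOrElse("NEO4J_REST_URL", "http://localhost:7474/db/data")
val neo4j = (sys.env.get("NEO4J_LOGIN"), sys.env.get("NEO4J_PASSWORD")) match {
  case (Some(l), Some(p)) => url(base).as_!(l, p) // preemptive basic auth
  case _                  => url(base)            // local dev: no credentials
}
```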

SSH key

Just a note to remind you to add your ssh key to Heroku. This is simply accomplished (after you have created 'em) using the CLI: $> heroku keys:add

Beginning to love this CLI, no?


This is the Heroku configuration file that tells the continuous deployer (if I can say so) how the application will be deployed and what its needs are.

This file is located at the root of the Play! application folder and contains only one line:
web: target/start -Dhttp.port=${PORT} ${JAVA_OPTS}

This simple line says that we need a web process for the staged application located under target/start.
Actually, this folder will contain the staged Play! application after Heroku runs sbt clean compile stage on it.

Aaaaand Deploy! (push)

Getting closer to the end!

After having added all the necessary sources to your app's local git repo (including the last update and the Procfile), we can now commit everything and push it to the git repo that Heroku holds for our application.

Actually, when the Heroku app was created, the CLI updated the local git configuration to add the related remote repo, called heroku.

So, the only thing left to do is to push: $> git push heroku master
To test if everything is ok: $> heroku ps. This will display the processes running on Heroku.

If the process is shown, let's open the application in our default browser (leave your mouse alone and...):  $> heroku open.

I hope that I didn't make too many mistakes and that you are now able to see your application running and using Neo4J.

At least, here is the one I succeeded in deploying:
I've also shared this app on Heroku's GenSen, which is meant for sharing project templates on Heroku.

Now, you should love the CLI, but also Heroku, and Neo4J and Play! 2.0 and Scala and Dispatch and arbor.js and...

Thanks for reading, if someone does ^^.

Monday, February 27, 2012

Neo4J with Scala Play! 2.0 on Heroku (Part 8) :: Scala template+Arbor.js to browse Neo4J via Play 2.0


This post is a continuation of this post; it is the seventh part of a blog series that aims at using Neo4j and Play 2.0 together on Heroku.

Viewing Neo4J Model Object in Play2.0


In this post, I'll talk about some functionalities that Play2.0 offers to create web application/site.

The main goal will be to have html views that enable us to create User, Group and link them, but not only, we'll use arbor.js to view what's being created or linked in Neo4J as a... graph of course.

Basically, it will consist of one html page containing several forms for creating model instances (or links) through AJAX calls on Json controllers.

So let's begin by explaining how to define querying and persisting controllers using Play 2.0 Form.


In this case, we'll take the basic needs of our use case, that is, retrieving the list of users stored in Neo4J, or creating a new group.

Get Users

Briefly, Play 2.0 has the notion of controllers to handle server requests; such controllers are bound to urls using a route configuration.

So what we have to do here is create a controller, let's say Users, with a handler named j_all for the list of users rendered in Json.

Using what we've discussed in previous posts, such controller and definition are rather simple, check this out:
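The snippet isn't inlined here, but based on the description it boils down to something like this (assuming a Model helper object and an implicit Format[User] in scope; both names are taken from the surrounding text, the exact signatures are my guess):

```scala
import play.api.mvc._
import play.api.libs.json.Json.toJson

object Users extends Controller {
  // GET /users.json — fetch every node bound to root with the kind `users`,
  // unmarshalled as List[User] by the implicit Formatter, re-rendered as Json
  def j_all = Action {
    Ok(toJson(Model.all[User]))
  }
}
```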

As we can see, we simply call the Model persistence utility object to retrieve all defined Users in Neo4J, which we render directly in Json thanks to their Formatter. And finally, we stream the result in the http response.
Mmmh, simple, no? Here is what we did:

  1. send a Json request to Neo4J requesting all nodes that are linked to the root using the kind users (found using the User's ClassManifest)
  2. retrieve the Json response from Neo4J and un-marshall it into a List[User] (using the User Formatter)
  3. re-render them into the expected Model Json Format (again using the Formatter)
  4. generate the String representation
  5. append it in the response body
  6. define the content type as being Json
All in one single line.

To test it, roughly, just hit this url: http://localhost:9000/users.json. This will return a Json encoded response.

Create Group

Now we want to add the possibility to create a new group remotely. For that, we'll create a controller Groups which defines a create handler.

This handler expects to receive a group name, after which it creates the group instance and persists it in Neo4J.

To recover such a request parameter (in a POST, since we are creating something and changing the server state), we use a Play 2.0 construction that offers a lot of helpers to parse the body into a map of values (which can be nested).

In the following example, the group name is extracted from the request's body (url encoded) as a nonEmptyText mapped as name. This is a helper mapping for extracting a String that cannot be empty.
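The example reads roughly like this (a Group case class with a name field is assumed, and the responses on success/failure are my own choice):

```scala
import play.api.mvc._
import play.api.data._
import play.api.data.Forms._

object Groups extends Controller {
  val groupForm = Form(
    mapping("name" -> nonEmptyText)(Group.apply)(Group.unapply)
  )

  // POST /groups
  def create = Action { implicit request =>
    groupForm.bindFromRequest.fold(
      errors => BadRequest("a non-empty 'name' is required"),
      group  => { /* persist `group` in Neo4J here */ Ok(group.name) }
    )
  }
}
```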

As we can see, the Form can be rendered directly into the Model instance by giving apply and unapply functions after the mapping definition.

Javascript Routing

Using static urls is cool... no, ok, let's try to use what some call Web 2.0; you know, Ajax.

The problem comes when you have to deal with urls within Ajax calls. How to keep track of your url changes, for instance?

Pretty hard. So let's forget about hard coded urls in your javascript and use a routes file that can be downloaded client side. This routes file contains the url mappings of all the controllers that you want exposed in javascript (if I can say so).

How it works is simple:
  1. Use Routes.javascriptRouter to define a javascript object and the controllers to be remoted
  2. For each of them, you must use the object controllers.routes.javascript.<Controller>.<method>
  3. This object is created at compile time when defining the controller in the route conf file
  4. define a handler (in the Application controller, f.i.) that returns the result of the javascriptRouter as a javascript file
  5. route this new controller to what you want (like /js/routes)
Having done that, you are now ready to use the created object in the javascript part.
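In code, the steps above could be sketched like this (the handler name and the exposed actions are assumptions matching the examples in this post):

```scala
import play.api.mvc._
import play.api.Routes

object Application extends Controller {
  // routed as e.g. GET /js/routes in conf/routes
  def javascriptRoutes = Action { implicit request =>
    Ok(
      // "playRoutes" becomes the name of the js object exposed to the client
      Routes.javascriptRouter("playRoutes")(
        controllers.routes.javascript.Users.j_one,
        controllers.routes.javascript.Users.j_all
      )
    ).as("text/javascript")
  }
}
```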

If we take the controller controllers.Users.j_one (which returns a User based on its given id), we'll have access in our javascript to a js function playRoutes.controllers.Users.j_one(id) that takes an id.

By using this js function, you'll have in return a js object that defines at least two useful properties:
  • url: the formatted url for the controller (having compiled the parameter in the url)
  • ajax(c): a jquery (by default) ajax function that takes a payload object, but already defines the url and the method.
So far so good; but to use all of this stuff, let's look at a coffeescript (thanks, Play 2.0) example:

In the previous example, I wrote the ajax call myself using jQuery... so I could have simply used the ajax property. But never mind, I sometimes love being a control freak.

C'est chic! No?


For browsing our model graph, I've used arbor.js as the rendering framework, because it's the best one for graphs... that's it.
Since my intent here isn't to explain it, I'll leave you alone with that part. But I recommend you to browse its site here.

So what I did is simply use Users as nodes, all linked to a central root node. Clicking one of them will show you its inter-relationships.

I've also added a select box that helps you show all users in a chosen group.

Given that the next post will be about deploying the whole stuff on Heroku, I don't have any instance in the wild at this time; but if you wish, you can clone (and fork) my repo on github for this post series.

But here is a preview of what has been achieved.
Fun but not so cute  -> I'm not a designer... :'(

Next post, the last, will talk about how to deploy this whole thing onto the Heroku PaaS.

Saturday, February 25, 2012

Neo4J with Scala Play! 2.0 on Heroku (Part 7) :: DSM+DAO+Neo4J+Play


This post is a continuation of this post; it is the sixth part of a blog series that aims at using Neo4j and Play 2.0 together on Heroku.

Using Neo4J in Play 2.0... and simple DAO

What I intend to show is a way to use a Domain Specific Model, persisted in a Neo4J back end service. For such a DSM, we'll have an abstract magic Model class that defines generic DAO operations.

For simplicity, we'll link each category of classes to the root/entry node. For instance, all the Users will be bound to the entry node by a reference of kind user.


I'll choose the very common use case, that is, Users and Groups. Here is its shape:
  • A User has a first name
  • A Group has a name
  • A User can be in several Groups
  • A Group can contain several Users
  • A User can know several Users
Let's keep the class definitions aside for a while, and stick to the persistence service.

Graph Service

The Graph Service is an abstraction of what is needed for a Graph Persistence Layer. It is bound to a generic type that defines the model implementation and defines traversal and persistence operations of graph's nodes.

Graph Service for Neo4J

Let's now update the service that was used in the previous post for Neo4J persistence, in order to make it able to deal with model instances.

Let's start with the saveNode operation to see what is needed in the model and elsewhere.

In the Gist above, I've highlighted some points that must be found around the Model construction. (A) and (C) compose a Json Format (as SJson proposes); (B) is more related to model abstraction.

(C) has a special need when used with Dispatch: we could have a Dispatch Handler that does both parsing/unmarshalling and direct use in a continuation.


Now we are at the right point to talk about the Model, since we've met almost all its requirements. So let's build a magic Model class that can be extended by all concrete model classes.
That's the easy part: we just define the id property, which is an id (part of the REST url in Neo4J).
Ok, this part is simple too in this abstract Model definition, because a Format implementation must be part of the concrete DSM classes. That is, a User that extends Model must define a Format[User] instance and put it in the implicit context.
So, at this stage we have Model and User like this:

Class -- Relation's kind : F-Bounded
As we saw, the saveNode method needs to associate the concrete class to a relation kind. But what I wanted is to have a save method in Model, which implies that we cannot (at first glance) give saveNode the information it needs, that is, the concrete class.

For that, we'll use an F-bounded type for Model; that way, we'll be able to tell the saveNode method what the real class is... Mmmh, ok, let me show you. But that's not sufficient: the saveNode method will need to use the available ClassManifest to find the relation it must create.

I chose a very common and easy solution, which is having a function in the Model companion that helps register classes against their relation kind.
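Stripped of the Neo4J calls, the F-bounded trick plus the kind registry can be sketched in plain Scala (the names are mine; the real code lives in the linked repo):

```scala
// The self-type ties M to the concrete subclass, so `save` can recover it.
abstract class Model[M <: Model[M]] { self: M =>
  def save(implicit m: ClassManifest[M]): M = {
    val kind = Model.kindOf(m.erasure) // relation kind for this concrete class
    // ...here the real code POSTs the node to Neo4J and links it to root with `kind`...
    self
  }
}

object Model {
  private var kinds = Map.empty[Class[_], String]
  // register a concrete class against its relation kind
  def register(c: Class[_], kind: String) { kinds += c -> kind }
  def kindOf(c: Class[_]): String = kinds(c)
}

case class User(firstName: String) extends Model[User]

Model.register(classOf[User], "users")
```

After registration, `User("Jane").save` knows it must create a `users` relation, without save ever being told the concrete class explicitly.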

Model Dispatch Handler

Now we'll discuss something I find really useful and easy in Dispatch: creating a Handler that can turn a Json response from Neo4J into a Model instance.
For that, we have already defined in a previous post a way to handle a json response as Play's JsValue.

Now, what we need is to use the implicit formatter of each concrete model class to create instances. And that would be the way to reach the goal, except that a problem comes from the Json response of Neo4J: the data is not present at the Json root, but is the value of the data property.
So it breaks our Format if we use it directly.

That's why the definition of the Handler takes an extra parameter, which is a conversion from JsValue to JsValue, that is to say, a function that navigates directly to the data definition.
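Such a Handler could be sketched with classic Dispatch like this (the signature is my guess at the shape the post describes, not the repo's exact code):

```scala
import dispatch._
import play.api.libs.json._

// Handle a Neo4J Json response as a model instance: parse the body, dive to
// the `data` property, unmarshall with the implicit Format, then continue in k.
def handle[M](req: Request, dive: JsValue => JsValue = _ \ "data")
             (k: M => Unit)(implicit f: Format[M]) =
  req >- { body => k(f.reads(dive(Json.parse(body)))) }
```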


Finally, let's gather all our work into a simple implementation of a generic saveNode function. As we can see, it's very easy to handle Neo4J responses as DSOs and use them directly in the continuation method of the Handler.


Having all the pieces in place (check out the related Git repo here), we can now really simply create a User and retrieve it updated with its id, or even get it from the database using its id.

In the next post, we'll create some Play templates for viewing such data, but also for creating it.