Using Spray to mock 3rd party APIs in your tests

If you need to test or mock a 3rd party API, a crawler, or an external web service, and you don't want to pull in some fancy DI framework (I dislike the magic in DI frameworks and use constructor injection instead), here's a helper class I put together to that end.

If you haven't had a chance to try out Spray, you totally should: it's a high-performance HTTP toolkit built on top of Akka.

Spray has a very nice HTTP API. A few things I liked:

  • HTTP method mapping in spray.http.HttpMethods - GET, POST etc.
  • HttpRequest and HttpResponse are basically wrappers around the data you would expect a request and a response to carry
  • spray.http.Uri is a wrapper class around what you would expect to find in a URI
  • it really is just an Akka Actor, with just the right amount of control exposed and the rest well hidden behind its nice APIs

So, for testing/mocking all I really need is:

  • HTTP method
  • URI
  • body(content)

Here’s an implementation of the helper class

import akka.actor._
import akka.io.IO
import java.util.concurrent.atomic.AtomicInteger
import spray.can.Http
import spray.http.{Uri, HttpMethod, HttpResponse, HttpRequest}
 
object Helpers {
 	
	lazy implicit val testSystem = ActorSystem("helpers-test-system")
	// used to track the last assigned port
	val lastPort = new AtomicInteger(20000)
 	// a single expected triplet of method+uri+body
	type Replay = (HttpMethod,Uri.Path,String)
 
	// factory method to get a replayer to play around with
	def webReplayer(replay : List[Replay]) : (ActorRef,Int) = {
		// every time the factory is used a new port is assigned,
		// so many instances can co-exist and tests can run in parallel
		// without stepping on each other's toes
		val newPort = lastPort.addAndGet(1)
		// instantiate a new Spray server. Yes, it's that simple!
		val newServer = testSystem.actorOf(Props(new Helpers.ReplayerActor(replay)))
		IO(Http) ! Http.Bind(newServer, interface = "127.0.0.1", port = newPort)
		// return a pair of the server instance and the assigned port
		// the newPort should be used by client code to construct the remote URI with
		(newServer,newPort)
	}
 
	// message object used to tell the server to shut down
	object ShutTestServer
 
	// a concrete Spray actor that listens for incoming HttpRequest(..) messages,
	// given a list of expected requests to match against and content to send back
	class ReplayerActor(var replay :List[Replay]) extends Actor {
		var ioServer : ActorRef = null
		override def receive: Receive = {
			case _ : Http.Connected =>
				sender ! Http.Register(self)
				ioServer = sender()
			// this is basically a "catch all" HttpRequest pattern
			case HttpRequest(method, uri,_,_,_) =>
				// a test to see that this is indeed expected to happen at this point
				if (!replay.isEmpty && replay.head._1 == method && uri.path == replay.head._2){
					sender ! HttpResponse(entity = replay.head._3)
					replay = replay.tail
				} else {
					sender ! HttpResponse(status = 400, entity = "was expecting a different request at this point")
				}
			case ShutTestServer => ioServer ! PoisonPill
			case _ => //ignore
		}
	}
}

And now a concrete client code example using this tool

// the GET method and Uri come from Spray's HTTP model
import spray.http.HttpMethods.GET
import spray.http.Uri

// some random content I would like to receive in this scenario (or test..)
val content =
s"""
  |<html>
  |<head>
  |<title>sometitle</title>
  |</head>
  |<body>some body</body>
  |</html>
""".stripMargin
 
// we use the factory method to get an instance of "replayer" to play around with
val (_testWebServer,port) = Helpers.webReplayer(
	List(
		(GET,Uri.Path("/somesite"),content)
	)
)
 
// and wherever in your actual code, use the port we got back above and have it "hit" the server
val whatEver = YourHTTPGetter.fetch(s"http://127.0.0.1:$port/somesite")
assert(whatEver == content)

Profit!

Making The Case For Kotlin

The Kotlin project page describes Kotlin as "a statically typed programming language that compiles to JVM byte codes and JavaScript."

Kotlin is a very interesting project, I wanted to check it out, so I decided to write this blog post exploring some design aspects behind Kotlin, and specifically how it compares to the leading JVM “hacker” language - Scala.

Who Is Behind Kotlin?

Even though there are several decent free IDEs available today (Eclipse, NetBeans, Visual Studio Express..), it may be surprising that JetBrains is able to rake in between $199 for a personal IntelliJ IDEA license and up to $699 for a commercial license (for companies), including a 1-year update subscription (I have indeed bought a personal license for my own development needs). If you are developing on top of Visual Studio you are most probably familiar with their $250+ plugin, ReSharper.

I would argue that this makes them a good candidate to design a language, with insight into good and bad language features, but let's not get ahead of ourselves and set the expectations too high.

Why Another JVM Language?

The JVM is a well-known platform: its weaknesses and scalability strategies are well documented on the web, and it serves as the runtime not only for classic Java but also for other popular languages like Scala, Groovy and Clojure. Many modules have already been written in Java, and JVM languages are normally compatible with the existing Java ecosystem.

Why Static Typing?

In the old (and still ongoing) battle between dynamic and static languages, the consensus is that dynamic languages are a faster and easier way to get things done, while static languages allow programmers to harness the power of intelligent IDEs to scale up and maintain their code. Type-inferred languages try to find a middle ground, so that you don't need to write so much boilerplate (cue Java) to support static typing, while keeping the benefits of full compile-time type enforcement.

Finally, Kotlin Vs. Scala - Fight!

Scala is today the fastest-growing JVM language and is probably competing for the same crowd as Kotlin. Kotlin put up a comparison page, so we will simply go over the omissions part and try to make sense of it.

OMITTED - Implicit conversions, parameters

What is it? Scala implicits are (IMHO) the single biggest weakness in the language. Once they are used, along with the ability to use symbols for method names, code quickly stops being human-readable and starts looking like a DSL, which requires many trips to the docs or, even worse, the source code.

SBT (the Scala Build Tool) is the Maven/Ant of the Scala world; it's very difficult to get things done with it without many trips to Google and the documentation

lazy val pricing = Project (
    "pricing",
    file ("cdap2-pricing"),
    settings = buildSettings ++ Seq (libraryDependencies ++= pricingDeps)
  ) dependsOn (common, compact, server)

Lift is a popular web framework; here's a snippet I took from one of its documentation pages (see the "#result *" #> x part)

def render = inputParam match {
  case Full(x) => 
    println("Input is: "+x)
    "#result *" #> x
      
  case _ =>  
    println("No input present! Rendering input form HTML")
    PassThru  
}

Thoughts - Indeed, for many people this is the part that makes Kotlin a better "scalable" language: it does not allow programmers and libraries to render the code unreadable without an IDE

OMITTED - Overridable Type Members

What is it? Type members in Scala let you manage local generic types by assigning a type to a class/abstract class/trait member

abstract class HasTypedMember {
  type typeMe
}

An extending class will provide a specific type for this type member

class ExtendingClass extends HasTypedMember {
  type typeMe = Int
}

Thoughts - Frankly, it's not really clear why Kotlin decided to omit type members; I could be missing something. Nevertheless, Kotlin usually takes the minimalistic and simplistic route; my guess is that if this is indeed useful, the door is open to add it later..

OMITTED - Path-dependent types

What is it? In Java, referring to a class nested under another class always means one specific type (Parent.Child); not so in Scala. Scala makes the nested classes of two instances of the same outer class distinct types, much as you would expect the members of two different instances of a class to be distinct. Confused? Here is some code to explain this better

Java:

public class Outer {
  public class Inner {}
  public void useInner(Outer.Inner inner){ /*...*/ }
}
//.. somewhere in the code
Outer o1 = new Outer();
Outer o2 = new Outer();
Outer.Inner i1 = o1.new Inner();
Outer.Inner i2 = o2.new Inner();
// mixing instances is fine: any Outer.Inner is accepted
o1.useInner(i2); // works
o2.useInner(i1); // works

Scala:

class Outer {
  class Inner {}
  def useInner(inner : Inner){...}
}
//.. somewhere in the code..
val o1 = new Outer
val o2 = new Outer
val i1 = new o1.Inner
val i2 = new o2.Inner
o1.useInner(i1) // works fine
o2.useInner(i2) // works fine
o1.useInner(i2) // does NOT compile: type mismatch!

Thoughts - It makes some sense that a type which can only exist within an instance is enforced as such.. It would be interesting to know why Kotlin decided to follow Java here (conservatism?)

OMITTED - Existential types

What is it? This is yet another trade-off Scala makes to inter-operate with Java.
Let's see an example:

// a case class with a generic type parameter
case class Z[T]( blah:T )
// create a sequence and ascribe it a type written with an existential type annotation
Seq( Z[Int]( 5 ), Z[String]("blah") ) : Seq[Z[X] forSome { type X >: String with Int }]
// res1: Seq[Z[_ >: String with Int]] = List(Z(5), Z(blah))

Thoughts - Scala supports type variance to describe generic type bounds; Scala's designer (Odersky) chose to add existential-type support mainly for three reasons: erasure, raw types (pre-generics Java) and Java's "wildcard" types.

Kotlin took the more "purist" approach (aiming at reified types): dropping erasure, raw types, wildcard types and, finally, existential types.

OMITTED - Complicated logic for initialization of traits

What is it? Traits are similar to abstract classes, except that you may mix several traits into the same class (much like interfaces).

In Scala, traits are constructed before the classes that extend them

trait Averager {
  val first : Int
  val second : Int
  val avg : Int = first / second
}
// if we were to naively construct a class like so:
class Naive extends Averager {
  val first = 20
  val second = 10
}
// create instance
> new Naive()
java.lang.ArithmeticException: / by zero
// what happened: our val "avg" was computed at construction time, before the values we set in Naive were initialized

Alternatively, the trait can be used with an early-initialized anonymous class: val res = new { val first = 20 ; val second = 10 } with Averager. Also, a val which depends on other vals (avg in our case) can be made "lazy" with the (obvious) lazy keyword: lazy val avg : Int = first / second.

As a side note, using a def for "avg" would also work, but it would mean re-evaluating whatever is defined under "avg" every time the method is called.

Thoughts - Traits in Scala can have state, which can get tricky. If you read this blog post you will see the Kotlin developers' thoughts and a nice insight into what went into designing inheritance in Kotlin (don't forget to read the comments!)

OMITTED - Custom symbolic operations

What is it? Scala has no built-in handling of operator symbols (like =, +, / etc.); they are actually method calls, since method names may consist of symbols.

Thoughts - This may sound useful at first, but it can and (arguably) will create very confusing-looking code, as we saw before in the Lift and SBT examples.

OMITTED - Built-in XML

What is it? Exactly what it sounds like: Scala has native XML support built in, so you can do stuff like this

val holdsXml = <outer><inner>some text</inner></outer>

Thoughts - Having built-in XML support as a language feature is interesting, and sometimes useful. An interview with Odersky reveals that he would consider removing the native XML support if he were writing Scala today.

Bottom Line

Kotlin is a pragmatic language that draws on many of the good features of today's leading languages, specifically Scala, the leading alternative language on the JVM platform today. We've looked into the omissions on Kotlin's part versus Scala.

I've very much enjoyed writing this blog post, learning about the evolution of the design of the Java/Scala/Kotlin languages, and the choices the respective authors made along the way.

To get a better feel for Kotlin I might pick it up for a small project, at which point I will surely blog about it some more..

Distributed Scheduled Queue With Redis

Unlike a classic FIFO queue, jobs here are scheduled for execution in the future, possibly on a specific day, week, month or year, similar to "cron" on a Unix OS. Unlike the normal Unix cron implementation, we want it to be distributed, so that several machines (workers) can query for and then execute these jobs.

Implementing

  • Planning - The standard Unix cron format is very flexible and has been time-tested for describing schedules
  • Managing/Coordinating - The queues are stored in Redis Sorted Sets, Lua will be used for atomic execution
  • Execution - Getting the pieces together

Planning

"Unix time" is classically the number of seconds since 1 January 1970 (UTC); here we use its millisecond variant.
Example (* the actual Unix time number is in UTC, while the formatted date is localized)

1350745504313 == Sat Oct 20 2012 17:05:04 GMT+0200 (IST)

Using this format makes it possible to slice the queue at a specific point in time.
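Before wiring up Redis, the slicing idea itself can be sketched in a few lines of JavaScript. This is a plain in-memory simulation, not Redis; the job IDs and times below are made up for illustration:

```javascript
// In-memory simulation of the sorted-set idea: job IDs stored as members,
// scored by their scheduled run time in epoch milliseconds. Redis'
// ZRANGEBYSCORE performs the same slice server-side.
var queue = [];

function schedule(jobID, runAtMS) {
  queue.push({ id: jobID, score: runAtMS });
  queue.sort(function (a, b) { return a.score - b.score; });
}

// everything due at or before `nowMS` -- the "slice" of the queue
function dueJobs(nowMS) {
  return queue.filter(function (e) { return e.score <= nowMS; })
              .map(function (e) { return e.id; });
}

schedule("job-1000", 1350745504313);
schedule("job-1001", 1350745600000);
dueJobs(1350745504314); // -> ["job-1000"]
```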

Managing/Coordinating

A Redis Sorted Set will hold the job IDs as members, scored by their corresponding execution times (in Unix time format).

Redis commands that will come in handy

  • Scheduling a job run: ZADD cron:queue 1350745504313 1000
  • Finding a job to run: ZRANGEBYSCORE cron:queue 0 1350745504314 LIMIT 0 1

Execution

Without going into any platform specific implementation details, this is what the process looks like

  1. When a new schedule is added, jobs are created for its upcoming execution times, which are calculated for the next two hours and added to our cron queue
  2. A special job is added to run every hour; it goes over all the schedules, calculates their execution times two hours into the future, and adds them to the scheduled queue
  3. Workers poll the database every second to find due jobs
  4. Once an entry comes up, it is removed from the queue and added to another Sorted Set queue, so we don't "forget" any jobs..

As for items 3 and 4 above, it is not possible to implement this logic in the client: when we find a job that needs to be executed, another worker might grab it first, or worse, both "grab" it and end up working on the same job. To work around this, we write a little Lua script; Lua scripts are executed serially, just like any other Redis command.

local currentTimeMS = tonumber(KEYS[1])
local workingQueue = "scheduler:working"
local scheduleQueue = "schedule"
-- for easier reading, epoch time is in milliseconds
local second = 1000

local function foundJob(jobID)
	-- atomically move the job out of both queues and into the "working" queue,
	-- scored with the time it was grabbed, so no other worker can pick it up
	redis.call('ZREM', scheduleQueue, jobID)
	redis.call('ZREM', workingQueue, jobID)
	redis.call('ZADD', workingQueue, currentTimeMS, jobID)
	-- left to implement: updating the job status where the actual job object is stored
	return jobID
end

local function findNextJob()
	local result
	-- check if there are any "stale" jobs (grabbed over a minute ago but never completed)
	result = redis.call( 'ZRANGEBYSCORE', workingQueue, 0, currentTimeMS-60*second, 'LIMIT',0,1 )
	if result[1] ~= nil then return foundJob(result[1]) end
	-- check the normal queue
	result = redis.call( 'ZRANGEBYSCORE', scheduleQueue, 0, currentTimeMS, 'LIMIT',0,1 )
	if result[1] ~= nil then return foundJob(result[1]) end
	-- if we didn't find anything return nil
	return nil
end

return findNextJob()

And there you have it! A "distributed scheduled queue" implemented on top of Redis.
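If the workers are written in Node, the per-second polling (item 3 above) might be sketched like this. Note that findNextJob is assumed to wrap an EVAL of the Lua script through whatever Redis client you use, and runJob is a hypothetical job executor; neither name comes from a real library:

```javascript
// Sketch of a worker polling loop. `findNextJob` is assumed to EVAL the Lua
// script above via your Redis client; `runJob` is a hypothetical executor.

// one polling tick: ask for a due job and run it if one came back
function pollOnce(findNextJob, runJob) {
  var jobID = findNextJob(Date.now());
  if (jobID !== null && jobID !== undefined) {
    runJob(jobID);
    return true;
  }
  return false;
}

// poll every second, as described in item 3 above
function startWorker(findNextJob, runJob) {
  return setInterval(function () { pollOnce(findNextJob, runJob); }, 1000);
}
```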

If you are going to implement this with Node, I created a little cron format parser you can find here:
JSCron

Using Contractor With Child Processes

Contractor is a factory of contracts that helps with documenting our APIs.
The child_process facility built into Node uses EventEmitter (v1) and does not support callbacks, so it is a great fit for Contractor!
Since I needed it for a distributed worker framework I am working on, I added a helper facility called "Lawyer"; it reads incoming contract messages and routes them to handlers on the provided object.

Let us take the parent and child example code from Node’s official documentation (http://nodejs.org/api/child_process.html) and see how simple it would be to add some documentation.

Parent

var cp = require('child_process');

var n = cp.fork(__dirname + '/sub.js');

n.on('message', function(m) {
  console.log('PARENT got message:', m);
});

n.send({ hello: 'world' });

Child

process.on('message', function(m) {
  console.log('CHILD got message:', m);
});

process.send({ foo: 'bar' });

Node's child_process facility provides us with a very basic means of communication: we simply call n.send({any: "thing"}) and it is automagically received by the forked child. Let's spice it up with a more RPC-like message-passing style, with contracts and lawyers

contracts.js

var Contractor = require('contractor').Contractor;
exports.ChildPublish = {
	"GreetingResponse" : Contractor.Create("GreetingResponse", Contractor.Required("childs greeting response")),
	"StatusResponse" : Contractor.Create("StatusResponse", Contractor.Required("errors count"), Contractor.Required("number of completed jobs"))
}

exports.ChildSubscribe = {
	"Greeting": Contractor.Create("Greeting", Contractor.Required("greeting word")),
	"StatusQuery" : Contractor.Create("StatusQuery")
}

parent.js

var contracts = require('./contracts');
var Contractor = require('contractor').Contractor;
var Lawyer = require('contractor').Lawyer;
var cp = require('child_process');

var n = cp.fork(__dirname + '/sub.js');

n.on('message', function(m) {
	Lawyer.Read(m, {
		"GreetingResponse" : function(childResponse){ console.log("child responded:" + childResponse) },
		"StatusResponse" : function(errorCount, numCompletedJobs){ console.log("childs status, errors:"+errorCount+" jobs done:" + numCompletedJobs) }
	});
});
n.send(contracts.ChildSubscribe.Greeting("this is your father!"));
n.send(contracts.ChildSubscribe.StatusQuery());

sub.js

var contracts = require('./contracts');
var Contractor = require('contractor').Contractor;
var Lawyer = require('contractor').Lawyer;
process.on('message', function(m) {
  Lawyer.Read(m, {
	"Greeting" : function(parentGreeting){ process.send( contracts.ChildPublish.GreetingResponse("Hi "+parentGreeting+" can I have 10$?")) },
	"StatusQuery" : function(){ process.send( contracts.ChildPublish.StatusResponse(0, 42) ) }
 });
});

If we were to run these:

#> node parent.js
child responded:Hi this is your father! can I have 10$?
childs status, errors:0 jobs done:42

If you found Contractor / Lawyer interesting and/or useful, I would love to hear about it!

Typing in JavaScript?

"Strong typing is for people with weak memories" - I'm not sure who originally said that, but you have got to love the enthusiasm people have for dynamic languages. What is there not to love? Powerful basic language constructs such as closures, prototypal inheritance, objects as first-class citizens and anonymous functions, with which you can simulate almost anything static languages have to offer!

Proof, you say? No problem! Here are a few implementations worth noting:

Compile time type checking https://developers.google.com/closure/compiler/docs/js-for-compiler#types

Classic OO inheritance http://mootools.net/docs/core/Class/Class & http://coffeescript.org/#classes

(if you have other interesting examples let me know!)

Convention over configuration

The combination of CoffeeScript's "classical" class-like system and Backbone.js's MVC-style structure gives (IMHO) the best value in terms of complexity and pure fun writing apps. The challenge I come across while building large, complex applications is the inherent "typelessness" of a truly dynamic language.

That ugly monster, a large application written in JavaScript, pushed me into struggling with these two main challenges:

  • Many-to-many PubSub communication
  • 3rd party API’s

Many-to-many PubSub communication

While PubSub (also known as the observer pattern) in JavaScript is a powerful tool for decoupling your code, having multiple callers and listeners on the same event makes it really difficult to keep track of the actual common API for these events. Let's try to reflect on this while looking at an actual implementation.

(please excuse my CoffeeScript)

Pub/Sub implementation in Backbone/Underscore.JS

pubsub = _.extend {}, Backbone.Events
pubsub.on 'myEvent', (param1, param2)->
	x1 = param1 + param2
	# do something with x1

# in some other obscure location in the code
pubsub.on 'myEvent', (param1)->
	x2 = param1
	# do something with x2

# ...
pubsub.trigger 'myEvent', 5
# ...
pubsub.trigger 'myEvent', 5,10

The "pubsub.on(eventName, callback)" API is very common amongst different JS pub/sub frameworks. Here's how a generic implementation might look:

pubsub = {
	_listeners : {}
	on : (channel, callback)->
		@_listeners[ channel ] ?= []
		@_listeners[ channel ].push callback

	trigger : (channel, params...)->
		@_listeners[ channel ]?.forEach (cb)-> cb.apply(null, params)
}

pubsub.on 'myEvent', (param1, param2)->
	x1 = param1 + param2
	# ... do something with x1..

# in some other obscure location in the code
pubsub.on 'myEvent', (param1)->
	x2 = param1
	# .... do something with x2

# ...
pubsub.trigger 'myEvent', 5
# ...
pubsub.trigger 'myEvent', 5,10

If you're using the very popular Socket.IO library for realtime client/server communication, you are using the exact same API, in which case it is even more complicated to keep track of, since the ambiguity manifests on both the client and the server.

3rd party API’s

This is how a type-related bug might look in a dynamic language (the snippet below is CoffeeScript, whose == compiles to JavaScript's strict ===)

a = "42" # returned from some API
a == 42
=> false
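For reference, in plain JavaScript the loose == would actually coerce here; it is the strict === (which CoffeeScript's == compiles to) that exposes the mismatch. One defensive option is to normalize string-typed values at the API boundary; in this sketch, asInt is a made-up helper name, not a library function:

```javascript
var a = "42";        // returned from some API
a == 42;             // true  -- JavaScript's loose equality coerces the string
a === 42;            // false -- strict equality does not

// normalize at the boundary so the rest of the code sees a real number
function asInt(value) {
  var n = parseInt(value, 10);
  if (isNaN(n)) throw new Error("expected an integer, got: " + value);
  return n;
}
asInt("42") === 42;  // true
```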

Enter “Contractor” & “Backbone-typed”

Contractor
https://github.com/romansky/Contractor

Backbone-typed
https://github.com/romansky/backbone-typed

I'm introducing a convention here that helps keep the API consistent across the application. The approach is to create wrapper functions that serve as both API documentation and a run-time validation facility for required and optional arguments.
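To make the convention concrete, here is a minimal sketch of what such a contract factory could look like. This is only an illustration of the idea, not Contractor's actual implementation (see the repository for that), and the [name, arg1, ...] message shape is an assumption:

```javascript
// Minimal sketch of a contract factory: Create returns a wrapper function
// that validates required arguments at call time and emits a message array
// of the form [name, arg1, arg2, ...]. Not Contractor's real implementation.
function Required(doc) { return { doc: doc, required: true }; }
function Optional(doc) { return { doc: doc, required: false }; }

function Create(name /*, ...argSpecs */) {
  var specs = Array.prototype.slice.call(arguments, 1);
  return function () {
    var args = Array.prototype.slice.call(arguments);
    for (var i = 0; i < specs.length; i++) {
      if (specs[i].required && args[i] === undefined) {
        console.error(name + ": missing required argument: " + specs[i].doc);
        return null;
      }
    }
    return [name].concat(args);
  };
}

var LoginEvent = Create("LoginEvent", Required("user ID"), Optional("additional info"));
LoginEvent(112233, "mobile"); // -> ["LoginEvent", 112233, "mobile"]
LoginEvent();                 // logs an error and returns null
```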

Example:

# declare the events and their APIs
Messages = {
	LoginEvent : Contractor.Create( "LoginEvent", Contractor.Required("user ID"), Contractor.Optional("additional info") )
	AppErrorEvent : Contractor.Create( "AppErrorEvent", Contractor.Required("user ID"), Contractor.Required("error description"), Contractor.Optional("exception") )
}

# register on event API
pubsub.on Messages.LoginEvent, (userID, info)-> #... do stuff here
pubsub.on Messages.AppErrorEvent, (userID, description, exception)-> #... do stuff here

# trigger the event
pubsub.trigger.apply(pubsub, Messages.LoginEvent(112233, "mobile"))
pubsub.trigger.apply(pubsub, Messages.LoginEvent(112233))

pubsub.trigger.apply(pubsub, Messages.AppErrorEvent(112233, "user did not have access to resource", e))
pubsub.trigger.apply(pubsub, Messages.AppErrorEvent(112233)) # since we did not provide the second required argument, this will log an error and return null

With Node.js, I am using the same code on the front end and the back end, which makes it really easy to manage my Socket.IO communication.

If you have been paying attention, there's still the type issue. Since I work a lot with Backbone, it made a lot of sense to write "backbone-typed", which adds optional typing to Backbone models.
Example:

Wearing = {
	"bracelet" : "bracelet"
	"watch" : "watch"
}

class User extends TypedModel
	defaults: {
		name: null
		email: null
		lotteryNumber: null
		isAwesome: null
		wearing: null
	}

	types: {
		name: Types.String
		email: Types.String
		lotteryNumber: Types.Integer
		isAwesome: Types.Boolean
		wearing : Types.Enum(Wearing)
	}

user1 = new User({name: "foo", email: "foo@bar.com", lotteryNumber: 12345, isAwesome: true, wearing: Wearing.watch})
user1.toJSON() #=> {name: "foo", email: "foo@bar.com", lotteryNumber: 12345, isAwesome: true, wearing: "watch"} - nothing special going on here..

user2 = new User({name: "foo", email: "foo@bar.com", lotteryNumber: "54321", isAwesome: "false", wearing: "thong"})
user2.toJSON() #=> {name: "foo", email: "foo@bar.com", lotteryNumber: 54321, isAwesome: false, wearing: null} - shit happens!

user2.set({wearing: Wearing.bracelet})

if user2.get("lotteryNumber") == 54321 and user2.get("wearing") == Wearing.bracelet then user2.set({isAwesome: "true"})
user2.toJSON() #=> {name: "foo", email: "foo@bar.com", lotteryNumber: 54321, isAwesome: true, wearing: "bracelet"} - awesome for sure..
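To illustrate the idea behind the types map, here is a rough JavaScript sketch of per-attribute coercion. It is not backbone-typed's actual code (see the repository for that); the Types helpers below are simplified stand-ins:

```javascript
// Sketch of per-attribute coercion in the spirit of backbone-typed.
// Each entry in `types` is a coercion function; unknown values become null.
var Types = {
  String:  function (v) { return v == null ? null : String(v); },
  Integer: function (v) { var n = parseInt(v, 10); return isNaN(n) ? null : n; },
  Boolean: function (v) { return v === true || v === "true"; },
  Enum:    function (allowed) {
    return function (v) { return allowed[v] !== undefined ? allowed[v] : null; };
  }
};

// apply the matching coercion to each attribute; pass through untyped keys
function coerce(types, attrs) {
  var out = {};
  for (var key in attrs) {
    out[key] = types[key] ? types[key](attrs[key]) : attrs[key];
  }
  return out;
}

var Wearing = { bracelet: "bracelet", watch: "watch" };
coerce(
  { lotteryNumber: Types.Integer, isAwesome: Types.Boolean, wearing: Types.Enum(Wearing) },
  { lotteryNumber: "54321", isAwesome: "false", wearing: "thong" }
);
// -> { lotteryNumber: 54321, isAwesome: false, wearing: null }
```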

If you want to know a little more about how these work, please visit the respective Git repositories, and let me know what you think!

HLD (High Level Design)

This post is a follow-up on “Bootstrapping A New Project”

Previously…

I've written about "User Stories", a tool that helps with setting expectations when working with (mostly) non-technical stakeholders.

Designing Toward

  • database (schema, tables, indexes..)
  • application layout
  • deployment
  • modules
  • internal and external APIs
  • application infra (frameworks, tools etc..)

Design Tools At Your Disposal

Audiences

  • System Architect - mostly interested in the software stack, deployment and the database design
  • Team Members - mostly interested in the lower level stuff like modules and APIs
  • Other Teams - mostly interested in API’s

Feedback on design is great for everyone involved; set up meetings with the relevant parties to do a DR (design review)

UML and Other Vegetables

UML is complex; you don't need to know it perfectly. Start with what makes sense, and slowly pick up tutorials. Remember, it is a visual tool that helps you communicate; not everyone who looks at your designs has the time (or cares) to thoroughly read a book about UML and its specifics. In Einstein's words: "Make everything as simple as possible, but not simpler."

Disclosure

Design does not last: nobody is going to update the design as the system grows and evolves. Having said that, it can still serve as a good reference.

User Stories

This post is a follow-up on “Bootstrapping A New Project”

A "User Story" is an important communication tool used between you (the developer) and whoever is making the requirements (the stakeholder).
It is a "contract" which (unlike one written by a lawyer) is clear, understood and agreed upon.
On our side (development), user stories embody a work unit that can be further broken down and estimated.

Key value points:

  • Language - it's key that user stories are written in a language everyone can understand (stakeholders, developers etc..)
  • Perspective - user stories need to be written from the perspective of the “feature user”, so they represent real world value
  • Estimation - to help you get better and more precise estimation for the project overall
  • Prioritization - what needs to be delivered soon/later

Let's imagine we just got a call from the product manager, telling us about this new survey tool:

  1. It needs to collect answers from users via a link sent to our users by e-mail
  2. It needs to be summarized and sent to … once a month on the 1st
  3. Managers need to be able to see past surveys

Hmm.. OK, that's pretty clear, but suppose this can be written more clearly. Let's try using the following template (not my invention ;)):
As a [role], I want [Requirement / Feature], so that [Justification / Reason]

The requirements now look like this:

  1. As an end user, I will be able to follow a link I got via e-mail, so that I can fill a survey and express my opinion
  2. As a manager, I will receive an e-mail with a digest of surveys, so that I can evaluate performance
  3. As a manager, I will be able to access old surveys, so that I can compare and evaluate performance over time

By reading the above I can clearly understand the required functionality and the motivation behind it; this gives me a good starting point.
Since I am the one to implement this, I will need to provide estimates, so that product can plan ahead and communicate the product road map onward.
Unfortunately, "the devil is in the details", as they say, so giving estimates based on points 1, 2 and 3 is still difficult.
Points 2 & 3 are "huge" stories, also known as "epics"; we need to break these stories down further, while still using the same language and practices:

  1. As an end user, I will receive an email with a link as a result of some communication with a company, so that I can follow it and fill a survey
  2. As an end user, I get a set of questions to answer, so that my opinion is collected
  3. As a manager, I will receive an e-mail with a digest of surveys, so that I can evaluate company performance
  4. As a manager, I will have a survey management portal, so that I can review previous results and compare them
  5. As a manager, I can select a specific survey, so that I can extract meaningful actionable information

The functionality expected of deliverables from the points above is clear, and now I feel more comfortable estimating the time it will take me to complete each.
Along with the estimation, these can now be prioritized against each other and against other ongoing projects..

Summary

The beginning of a new project is the perfect time to introduce user stories; they will help get the conversation going and further clarify the scope and requirements.
While the project scope might change, user stories are (optimally) modular, decoupled and encapsulated, which should help with managing changes and their effect on the overall project.

Setting Up IntelliJ For Android And Scala On Ubuntu

Prerequisites

Android SDK

  1. get the latest SDK

  2. after extracting, run ./tools/android

  3. from the list select the Android version you're after; at the time of writing it's Android 4.0.3 (API 15) (I also installed Android 2.3.3). Click on Install packages to start the installation

  4. setup your env echo "export ANDROID_HOME=~/Applications/android-sdk-linux/" >> ~/.bashrc

  5. refresh your current session: source ~/.bashrc

Adding SBT support to IntelliJ

  1. open File > Settings > Plugin

  2. click on Browse repositories

  3. find SBT, right click and select Download and install

Setup SBT

  1. create folder ~/bin/ (if you don't have ~/bin in your PATH, you can do this: echo "PATH=$PATH:~/bin" >> ~/.bashrc, then refresh your current session: source ~/.bashrc)

  2. download sbt-launch.jar and place it under ~/bin

  3. create launcher: touch ~/bin/sbt and then echo 'java -Xmx512M -jar `dirname $0`/sbt-launch.jar "$@"' >> ~/bin/sbt

  4. make it executable: chmod u+x ~/bin/sbt

Setup android-plugin for Scala support

  1. install giter8 curl https://raw.github.com/n8han/conscript/master/setup.sh | sh

  2. run it ~/bin/cs n8han/giter8

Setup sbt-idea

  1. create the folder (if it doesn't exist): mkdir -p ~/.sbt/plugins/

  2. create the file ~/.sbt/plugins/build.sbt and add the following lines to it:

    resolvers += "sbt-idea-repo" at "http://mpeltonen.github.com/maven/"
    addSbtPlugin("com.github.mpeltonen" % "sbt-idea" % "1.0.0")
    

Finally, now that you have all the tools you need, proceed to creating a new project

  1. run the android plugin project setup tool: ~/bin/g8 jberkel/android-app, and follow the onscreen questions (or press return for defaults);
    this will create the project folder and the files needed to build an Android project

  2. open the new folder cd <project name>

  3. setup IntelliJ project support sbt gen-idea

  4. open the project in IntelliJ

Bootstrapping A New Project

My take on: How to give your new project a good starting point

So, you have a shiny new project on your lap. Now what?
Today everyone is talking about "agile", and how it's better than the old "waterfall" methodology of doing things

Waterfall

The “Waterfall” approach is notorious in the software community, some highlights of why this is the case:

  • It assumes that the initial requirements will not change
  • There is only one version delivered to the stakeholders, at the end of the process
  • All the planning and “thinking” happen only in the beginning of the project

Agile

Everyone likes "Agile": it's the "cool" way to manage a project; it asks for less paperwork and less upfront work, and it is built from the ground up to support change, to name a few of the perceptions about it.
While I call it "Agile" here, in reality it is a world of sub-cultures, some of which are "XP" (eXtreme Programming), "Scrum" and "Kanban".
The difference between the specific cultures is not always clear, and we don't necessarily have to stick to one; we can mix and match, like I am doing here, so from this point onward I will just call it all "Agile" without specifying a methodology.
The main strengths of the Agile methodology are:

  • Short upfront work phase (requirements, design ..)
  • Iterations - the project is broken down into deliverables, which translates to real value delivered very early in the project's lifetime
  • Changes are welcome and expected, and are introduced into following iterations (bugs found in previous deliverables, design "inadequacies")

Doing things in small pieces is great. Just ask a web developer: there's nothing quite like writing a bit of code, refreshing the page and seeing the changes reflected, especially when working on a big, complex project. It gives you a feeling of progress and achievement, even when the end goal is far, far away..

Some Agile approaches go beyond writing business code, into proactive bug hunting using TDD (Test Driven Development), which turns out to be a good fit for the overall process as it secures the quality of the delivered pieces of logic.
BDD (Behavior Driven Development) is very similar to TDD, but instead of going after "test coverage" it takes a higher level, business-specification-driven approach, where the user stories embodying the business are part of the actual test suite.

My recipe for success

  • User Stories
  • Mock-ups (in the case of an app that has UI)
  • HLD (High Level -system component- Design)
  • Process Sequence diagram / Workflow diagram (possibly atop HLD)

Tooling

Tools that keep me productive:

  • Google Document (User Stories)
  • Google Drawing (mockups, diagrams, HLD)
  • Lucidchart (diagrams)

In addition, I use a lightweight web-based project management tool as a central place to keep references to important docs and user stories, and to track my progress (and that of others working on the project). A great tool for the job is Trello.com, which is free and super simple!

RELATED UPDATES:

User Stories