Learning Through Tinkering: The Need for Pet Projects

As presented @ Devoxx Belgium

Speaking at Devoxx is great, but sharing what you learned is even better. All the references I made in the presentation are in this post, and I will follow up with some extra blog posts about these topics. I am not sharing the slides, since they exist only to support my story.

So here we go! ;)


The toolbox

The toolbox is our most important asset as developers. It contains all the frameworks we know about, the tools we have seen, etc. While expanding our toolbox is good (learning new things), it still requires some proficiency to know how to select the right tool and then use it in an appropriate way.

And for me… Pet Projects are a great way to both expand and master my toolbox.

But what to build?

But what do you build? How do you start applying what you have learned? I shared three different ways I inspire myself, which you can find below.

Limit yourself

Although it sounds contradictory, limiting yourself is a great way to expand your toolbox.

My personal example of this idea was how I tried to build an Android game without any libraries. To pull this off, I had to solve two problems:

  1. The game engine
  2. Implementing Physics

The actual game engine (update/draw-loop)

For the game engine I followed a blog series by James Cho. It is an amazing step-by-step guide to building a game engine on Android. The tutorial is getting quite old, but I had no problem following it and reaching my goal.
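
The heart of such an engine is a thread that endlessly updates the game state and then draws a frame. A stripped-down, platform-free sketch of that pattern (the real tutorial uses Android's SurfaceView; the class and method names here are illustrative, and the loop is capped so the sketch terminates):

```java
// Fixed-rate update/draw loop: the skeleton behind most 2D game engines.
public class GameLoop implements Runnable {
    private static final long FRAME_MS = 1000 / 60;  // target: 60 FPS
    private volatile boolean running = true;
    private int frames = 0;

    public void run() {
        while (running && frames < 30) {  // capped here so the sketch terminates
            long start = System.currentTimeMillis();
            update();   // advance game state
            draw();     // render current state
            frames++;
            long elapsed = System.currentTimeMillis() - start;
            if (elapsed < FRAME_MS) {
                try {
                    Thread.sleep(FRAME_MS - elapsed);  // keep a steady frame rate
                } catch (InterruptedException e) {
                    running = false;
                }
            }
        }
    }

    void update() { /* move sprites, apply physics, check collisions */ }
    void draw()   { /* paint the current frame to the screen */ }
    int frames()  { return frames; }
}
```

In a real engine `running` is flipped to false when the activity pauses, and `draw()` locks a canvas from the surface holder.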

While his entire website is full of info, I used a handful of specific pages from it.

The physics

You can only model something correctly if you understand it fully. Although I had little understanding of physics, there is an amazing resource available for free.

The Nature of Code by Daniel Shiffman is an amazing book which explains both the physics and how to program them in a simple way. I am still baffled by how simple he made it look.

    Abstract class used for most of the items on screen.
    This is what it looks like when only applying chapter 1 and the beginning of chapter 2:

public abstract class Drawable {
    protected Vector2 location;
    protected Vector2 velocity;
    protected Vector2 acceleration;

    public Drawable(Vector2 location) {
        this.location = location;
        this.velocity = new Vector2(0, 0);
        this.acceleration = new Vector2(0, 0);
    }

    public void applyForce(Vector2 force) {
        // Accumulate forces into acceleration (Newton's second law, mass = 1);
        // assumes Vector2 provides an add() method
        this.acceleration = this.acceleration.add(force);
    }

    public void update() {
        // Euler integration: acceleration changes velocity, velocity changes location
        this.velocity = this.velocity.add(acceleration);
        this.location = this.location.add(velocity);

        // Revert to 0 so forces only act for a single frame
        this.acceleration = new Vector2(0, 0);
    }
}
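
The book's motion model boils down to simple Euler integration. Here is a self-contained sketch of the same idea, with a minimal stand-in Vector2 (the book itself uses Processing's PVector; everything here is illustrative):

```java
// Minimal vector type so the sketch runs on its own.
class Vector2 {
    final double x, y;
    Vector2(double x, double y) { this.x = x; this.y = y; }
    Vector2 add(Vector2 o) { return new Vector2(x + o.x, y + o.y); }
}

// A "ball" moved by accumulated forces, Nature of Code style.
class Ball {
    Vector2 location = new Vector2(0, 0);
    Vector2 velocity = new Vector2(0, 0);
    Vector2 acceleration = new Vector2(0, 0);

    void applyForce(Vector2 force) {
        acceleration = acceleration.add(force);
    }

    void update() {
        velocity = velocity.add(acceleration);   // acceleration changes velocity
        location = location.add(velocity);       // velocity changes location
        acceleration = new Vector2(0, 0);        // forces only last one frame
    }
}
```

Applying a constant gravity force of (0, 0.1) for three frames moves the ball down by 0.1, then 0.2, then 0.3, roughly 0.6 in total.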

The best part is that he published his book in HTML format with JavaScript examples on the internet. You can find it here: https://natureofcode.com/book/

My code

While I didn't really finish it properly (because I apparently love playing games but dislike creating them), you can find my code on GitHub.

Automate Anything

Trying to automate something can really help you in two ways:

  1. You learn how the process works (because how else would you automate it?)
  2. You gain a new tool: the automated result itself.

What you learn really depends on what you decide to automate. Some examples:

  • A small script of commands you run often
  • Automate a guitar
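
For the first bullet, even a few lines of shell already count. A hypothetical example (all file names are made up for illustration): date-stamp copies of your notes into an archive folder instead of doing it by hand.

```shell
# Date-stamp copies of text notes into an archive folder.
mkdir -p archive
echo "demo note" > notes.txt           # sample input so the sketch is runnable
for f in *.txt; do
  cp "$f" "archive/$(date +%F)-$f"     # prefix each copy with today's date
done
```

Once a chore like this lives in a script, you also learn exactly which steps the chore consists of.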

My code

I was a teacher for a while and needed to keep an attendance list of my students each day. So I thought: why not automate it with a bit of facial recognition?

For the actual recognition, I just use the Azure Cognitive Services Face API. It is extremely easy to use. Request a free Azure trial and get going! :-)
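
Under the hood the Face API is a plain HTTPS endpoint, so you can call it without any SDK. A hedged sketch of building such a request with Java's built-in HTTP client; the region, key, and image path are placeholders you would substitute with your own:

```java
import java.io.FileNotFoundException;
import java.net.URI;
import java.net.http.HttpRequest;
import java.nio.file.Path;

public class FaceApiSketch {

    // Builds a detect request for the Face API; sending it (and parsing the
    // JSON response) is left out to keep the sketch short.
    static HttpRequest buildDetectRequest(String region, String key, Path image)
            throws FileNotFoundException {
        return HttpRequest.newBuilder()
                .uri(URI.create("https://" + region
                        + ".api.cognitive.microsoft.com/face/v1.0/detect"))
                .header("Ocp-Apim-Subscription-Key", key)        // your trial key
                .header("Content-Type", "application/octet-stream")
                .POST(HttpRequest.BodyPublishers.ofFile(image))  // raw image bytes
                .build();
    }
}
```

The subscription key travels in the `Ocp-Apim-Subscription-Key` header, and the image goes up as raw bytes in the request body.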

As a buffer between the camera (a stream of images) and the Face API (a single request/response per frame), I employed OpenCV. The most important part of the code is the following:

public void run() {
    // Select a camera
    VideoCapture camera = new VideoCapture(0);

    // Load a classifier (a pre-trained model packaged with OpenCV)
    CascadeClassifier faceDetector =
            new CascadeClassifier("haarcascade_frontalface_default.xml");

    Mat frame = new Mat();
    // LOOP FOREVER!!!
    while (true) {
        if (camera.read(frame)) {

            // Run a detection on the frame
            MatOfRect faceDetections = new MatOfRect();
            faceDetector.detectMultiScale(frame, faceDetections);
            Rect[] detectedFaces = faceDetections.toArray();

            // Draw rectangles on the original frame
            drawRectangles(detectedFaces, frame);

            // Only send the frame to Cognitive Services if you actually detected a face
            if (detectedFaces.length > 0) {
                sendToFaceApi(frame);   // helper method, not shown here
            }

            // Render the edited frame in a simple application
            render(frame);              // helper method, not shown here
        }
    }
}

The GitHub code example is here. I still need to clean up the repo a bit.


Replicate something you love

The third thing I talked about was replicating something which already exists. There is a lot you can learn by trying to rebuild something you love.

So, I wanted to replicate something I love: Pokémon Go. While I did make a design myself using JSON, microservices in Vert.x, and many other things, I also wanted to know what the real app uses. I discovered a great subreddit where the real reverse engineers hang out, which is great since I have neither the time nor the interest to reverse engineer it myself.

Two things I discovered on Reddit shocked me a bit:

  • They do not use JSON to communicate; they use Google Protocol Buffers.
  • They use an RPC approach instead of a purely RESTful approach (one endpoint for almost everything).
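
To make that difference concrete, here is what a Protocol Buffers definition in that RPC style looks like. This is a purely hypothetical schema for illustration, not Niantic's actual one:

```protobuf
syntax = "proto3";

// Illustrative only: binary, strongly-typed messages instead of JSON,
// multiplexed through a single RPC-style endpoint.
message CatchPokemonRequest {
  fixed64 encounter_id = 1;
  double normalized_hit_position = 2;
}

message CatchPokemonResponse {
  enum Status {
    CATCH_SUCCESS = 0;
    CATCH_ESCAPE = 1;
    CATCH_FLEE = 2;
  }
  Status status = 1;
}

service GameRpc {
  // One generic endpoint handling (almost) everything, RPC-style,
  // rather than one REST resource per concept.
  rpc CatchPokemon (CatchPokemonRequest) returns (CatchPokemonResponse);
}
```

Compared to JSON over REST, this trades human readability for a compact binary wire format and a schema both client and server compile against.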

The bottom line

In the end, the tools in our toolbox are only valuable if we are also proficient with them. So do not get stuck in theoretical learning: apply what you learn.

Action Pics