A device must study to know, know to understand, understand to judge

Now that the foundations are done, my home automation system has to study before it can hopefully become smart one day. This is the part I have been working on lately.

What to study

Studying means that actions must be saved so that they can later be analyzed and, when appropriate, repeated automatically. The approach I currently intend to use for this is supervised learning. The training will be based on previous actions saved in long-term memory: the expected outputs are the actions, and the input data is the context associated with each action. The context is the state of the system when the action was performed. So far I have identified the following variables as necessary in the context:

  • The user id
  • The controller id
  • The date of the action
  • The context of the controller
  • The context of the concentrator

The user id is the identifier of one home system. The controller id identifies the device that was used to trigger the action; this makes it possible to differentiate which mobile phone or wall switch performed the action. I have big expectations for the date information, because our lives are heavily driven by time. The context of the controller is the state of all sensors present on the controller. For now I will monitor two sensors available on all smartphones: location and acceleration. The context of the concentrator is the state of all sensors present on the concentrator.
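To make the supervised-learning framing concrete, here is a minimal sketch of how each saved action becomes a (context, action) training pair. The field names and values below are illustrative assumptions, not the actual log schema:

```python
# Sketch: turning one logged action into a supervised-learning pair.
# Field names and values here are illustrative, not the real log format.

def to_training_pair(log_entry):
    """Split one log entry into (input context, expected output action)."""
    context = {
        "user_id": log_entry["user_id"],                      # one home system
        "controller_id": log_entry["controller_id"],          # phone, wall switch...
        "date": log_entry["date"],                            # when it happened
        "controller_state": log_entry["controller_state"],    # e.g. location
        "concentrator_state": log_entry["concentrator_state"],
    }
    action = log_entry["action"]  # the expected output to predict
    return context, action

entry = {
    "user_id": "home-1",
    "controller_id": "phone-42",
    "date": "2015-12-01T19:30:00",
    "controller_state": {"location": "living-room"},
    "concentrator_state": {"luminosity": "low"},
    "action": "light-on",
}
context, action = to_training_pair(entry)
```

A prediction algorithm is then trained to output `action` given `context`.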

Storing logs

In order to easily analyze this information and test different prediction algorithms, the storage is done in the cloud. This is not the target architecture, but a necessary step to prototype the learning part.

Many different solutions exist to log data in the cloud. My choice quickly went to a NoSQL solution: it makes it easy to store a lot of structured information without a strict schema definition. Two solutions seemed best suited to my needs: Couchbase and MongoDB.

Couchbase was my a priori favorite choice, and the first one I tried. It supports more features than MongoDB, in particular triggers, which I thought could be useful later. Being written in Erlang, it should be robust by design. The setup is very easy thanks to its web interface. However, the Python tutorials are « too big » in my opinion: if you have never used Flask, you have to understand both Flask and Couchbase.

I then tried MongoDB and was immediately hooked: for a test setup, the installation only consists of starting the daemon. You then follow the introduction tutorial while adapting it to your needs, and after 15 minutes, your logging software is done!
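The "15 minutes" setup really is short. A hedged sketch of it, assuming the pymongo driver and a local mongod on the default port (the database and collection names, "homeauto" and "actions", are my own placeholders):

```python
# Sketch of a minimal MongoDB logging setup. Assumes the pymongo driver
# is installed and a local mongod is running on the default port.
# "homeauto" and "actions" are placeholder names, not the real ones.

def make_log_entry(user, controller_id, action):
    """Build the document to store; the actual schema is defined later."""
    return {"user": user, "controller_id": controller_id, "action": action}

def store(entry):
    from pymongo import MongoClient  # imported lazily: needs a running mongod
    client = MongoClient("mongodb://localhost:27017/")
    client["homeauto"]["actions"].insert_one(entry)

entry = make_log_entry("alice@example.net", "wall-switch-1", "light-on")
# store(entry)  # uncomment with a MongoDB instance available
```

Since MongoDB is schemaless, `insert_one` accepts the document as-is; there is no table or schema to create beforehand.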

As of today (December 2015), the popularity of both databases is increasing, as shown in this trend, and MongoDB is the most used NoSQL database. Its ease of use combined with its popularity made me choose MongoDB as the logging database.

Log format

With the information to log identified and the database chosen, the log format now has to be defined. The information described in the first part of the article is saved with the following JSON structure:

    {
        "user": "bare jid (string)",
        "domain": "domain name (string)",
        "controller_id": "resource (string)",
        "data": {
            "context": {
                "time": {
                    "date": "date (string)",
                    "dst": "daylight saving time (boolean)"
                }
            },
            "event": {
                "actuator_id": "id of actuator (string)",
                "type": "event type (string)"
            }
        }
    }
The first three fields are extracted from the incoming XMPP logging message. The controller_id is carried in the XMPP resource, which I use to store the id of the controller device.
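Splitting a full JID along the standard localpart@domain/resource layout gives these three fields directly (the JID below is made up):

```python
# Split a full XMPP JID into the three fields stored in the log.
# The localpart@domain/resource layout is standard XMPP addressing;
# the example JID itself is made up.

def split_jid(full_jid):
    bare, _, resource = full_jid.partition("/")   # bare JID vs. resource
    domain = bare.partition("@")[2]               # part after the '@'
    return {"user": bare, "domain": domain, "controller_id": resource}

fields = split_jid("alice@example.net/wall-switch-1")
```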

The time is encoded as an ISO 8601 string. This, combined with the DST information, will allow many different kinds of information to be extracted: the date in itself is not really useful, but once translated to other domains such as the day of the week, it should be a very important input variable for correct prediction. This will be true for most context information used: raw information is translated to higher-level information that can then be used as an input variable for a prediction algorithm (unless this translation is part of the algorithm itself, as in deep neural networks).
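For example, the day of the week can be derived from the stored ISO 8601 string with the standard library alone (the date below is a made-up sample):

```python
from datetime import datetime

# Derive higher-level features (day of week, hour) from the raw
# ISO 8601 date string. The value of `raw` is a made-up example.
raw = "2015-12-07T19:30:00"
parsed = datetime.strptime(raw, "%Y-%m-%dT%H:%M:%S")
day_of_week = parsed.strftime("%A")   # "Monday" for this sample date
hour = parsed.hour                    # another candidate input variable
```

Each such derived value becomes one input variable for the prediction algorithm.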

The type of an event is a string, which allows almost any action to be defined. For now I only log binary actions such as on/off and open/close.

Logging infrastructure

The last piece of the logging infrastructure is the component that fills the database. Since XMPP is the protocol used between the controller and the concentrator, the log command is also sent via XMPP. A « logger » component is added to the XMPP server; it waits for messages with the specified structure and stores them in the database. This component is implemented in Python with SleekXMPP, the XMPP client that I also use in the concentrator. I no longer use the Twisted XMPP client because I had issues with it, and adding new features to it is not easy. SleekXMPP, on the other hand, is designed to be extended via plugins. The full component is implemented in only 140 lines of Python. I will probably publish it as open source later.
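The core of such a component boils down to one message handler: combine the sender's JID fields with the message payload into a document, then insert it. A simplified sketch, with the SleekXMPP wiring only indicated in comments (the payload keys mirror the log format above; none of this is the author's actual 140-line component):

```python
import json

# Simplified sketch of a logger component's core. In the real component,
# SleekXMPP would deliver message stanzas to a handler; here that handler
# is reduced to pure functions so the document-building logic is visible.

def build_document(from_jid, payload):
    """Combine the sender JID with the message payload into a log document."""
    bare, _, resource = from_jid.partition("/")
    return {
        "user": bare,
        "domain": bare.partition("@")[2],
        "controller_id": resource,   # the resource identifies the controller
        "data": payload,
    }

def on_message(from_jid, body, collection):
    """What the message handler would do: parse the body and store it."""
    doc = build_document(from_jid, json.loads(body))
    collection.insert_one(doc)      # collection: a pymongo collection

doc = build_document(
    "alice@example.net/phone-42",
    {"context": {"time": {"date": "2015-12-07T19:30:00", "dst": False}},
     "event": {"actuator_id": "lamp-1", "type": "on"}},
)
```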

Next steps

All actions have now been logged for two months, so I finally have a real data set to try different prediction algorithms on. I will add more information to the context, especially location and acceleration. But this means that I need to rewrite the HTML application as a native Android application: it will be much easier to access the sensors that way. The drawback of this choice is that a completely different implementation will be needed for iOS, which an HTML app would have avoided. In any case, monitoring the sensors will be more challenging on iOS, so I will focus on Android for now.