Controlling LightwaveRF panels with a Raspberry Pi

I hate waking up in winter to an alarm when everything is still dark and gloomy, and would much prefer to wake more naturally with light.  You can buy various “daylight alarms”, but they are just more clutter to have in the room, and it felt unnecessary to buy something when the room already has a perfectly good light hanging from the ceiling.  I just needed a way to control it.

There are various WiFi-enabled light bulbs around, but they all share the same basic flaw: if the wall switch is turned off, no WiFi in the world is going to turn the bulb on again.  This means you would always need a phone or remote to control the light, rather than being able to use a normal switch as well.

Eventually I came across “LightwaveRF” units, which replace the switch with a dimmer that takes a normal dimmable bulb.  The switches are about £30, but to connect them to a network you also need the LightwaveRF WiFi link, which is £50.  That pushes the total to £80, which isn’t too crazy compared to the price of some WiFi bulbs, but I wanted to do it cheaper than that, and learn something about using the GPIO pins on the Pi as well.


Fortunately the RF signal the panels use is a standard 433MHz, and you can get transmitters for this frequency for the huge cost of £1.


All I needed now was to find out exactly what signal to transmit to control the panels from the Pi.  Fortunately, all the hard work has been done by someone else: https://github.com/roberttidey/LightwaveRF.  This GitHub project provides C libraries for the Arduino and Pi to transmit and receive using the LightwaveRF protocol.  It also provides Python bindings, which is perfect.

Hardware

Obviously, first replace your existing light switch with the LightwaveRF one.  This was a bit of a hassle because it’s deeper than a normal panel, so you might need to excavate the wall a bit to get it to fit.

Then connect the transmitter’s 5V (VCC), data and ground pins to the Pi, noting which pin on the Pi you connect the data line to.  If you’re not sure which pins on the Pi are which, refer to a GPIO pinout diagram.

Pigpio

LightwaveRF has a dependency on “pigpio”, which is a C library used to control the GPIO pins on the Pi. Follow the pigpio instructions to download and install it.  If you get errors when running ‘make’ to build it, check you have the necessary build packages:

sudo apt-get install build-essential

You should be able to install any other missing packages using ‘apt’ as well.

This will install the pigpio C libraries, a daemon – ‘pigpiod’ – that runs in the background, and a python library that can be ‘import’ed into scripts.

Once installed, start the daemon by running ‘pigpiod’ (it needs root privileges, so ‘sudo pigpiod’).  If it starts OK it will just silently return.
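You can sanity-check that the daemon is reachable from Python before going any further; this short snippet, using the pigpio module installed above, should print True:

import pigpio

pi = pigpio.pi()     # Connect to the local pigpiod daemon.
print(pi.connected)  # True if the daemon answered, False otherwise.
pi.stop()            # Release the connection.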

LightwaveRF

Create a directory somewhere on your Pi, and copy the ‘lwrf.py’ file from the GitHub project into it.

Then create a test file, ‘test.py’, with the below contents in the same directory:

import sys
import pigpio
import lwrf

# A simple test script for the lwrf and pigpiod programs.

# The GPIO pin on the Pi you've connected the transmitter to.
# You probably need to change this!
gpio_pin = 7

# How many times to repeat the signal; 3 seems to be OK.
repeat = 3

# An ID that must be unique for each dimmer.
dimmer_id = 1

pi = pigpio.pi()  # Connect to the GPIO daemon.
tx = lwrf.tx(pi, gpio_pin)

# The brightness, which should be between 0 and 32.
value = int(sys.argv[1])

if value == 0:
    tx_val = 64  # According to the LightwaveRF docs, this should be 64 when turning off.
    c = 0  # The "command" setting, i.e. on/off.
else:
    tx_val = value + 128
    c = 1

a = tx_val >> 4  # First 4 bits.
b = tx_val % 16  # Last 4 bits.
data = [a, b, 0, c, 15, dimmer_id, 0, 0, 0, 0]
tx.put(data, repeat)
print("Sent " + str(value))
tx.cancel()
pi.stop()

Edit the file with the ‘gpio_pin’ you connected the transmitter to; the other values can be left as they are.

Test this runs OK with Python, supplying an example brightness:

python test.py 10
Sent 10

If you get errors, check that the pigpiod daemon is running.

Before it will actually do anything, you need to pair the transmitter with the panel.  LightwaveRF panels don’t have their own unique addresses; instead, they need to be given an ID to respond to.  Each panel can remember up to 6 IDs, and will then respond to any signal transmitted with one of those IDs.

To put the panels into “learning” mode, press and hold both panel buttons until the orange and blue lights start flashing alternately.  This “learning” mode lasts for about 15 seconds, so while the lights are still flashing, run the script above again.  Only the blue light should then flash, to indicate it has paired successfully. Refer to the LightwaveRF dimmer manual for more details.

Now running the python script again (with an argument between 0 and 32) should actually control the light!

Of course, having to boot a laptop, SSH into a Pi and run some Python is somewhat inconvenient just to turn a light on, so I’ve written a very simple website that can be used to control the light.
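That site isn’t reproduced here, but as a rough sketch of the idea, a single-file Python 3 server along the lines below would do it, shelling out to the test script from earlier (it assumes the script is saved as test.py in the same directory, and the port is arbitrary):

# A hypothetical minimal control page - not the actual site.
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class LightHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /?value=10 sets the dimmer to brightness 10.
        query = parse_qs(urlparse(self.path).query)
        value = query.get("value", ["0"])[0]
        subprocess.call(["python", "test.py", value])
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(("Sent " + value + "\n").encode())

HTTPServer(("", 8080), LightHandler).serve_forever()

Browsing to http://IP_OF_PI:8080/?value=10 from a phone would then turn the light on at brightness 10.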


Energy monitoring with a Raspberry Pi

A lot of energy companies have given away free electricity meters.  These have a clamp you put round the house supply, and a wireless link to a display.  They are generally branded by the energy company, but a lot of them are Current Cost EnviR meters.  On the back there is what appears to be an Ethernet port, but it is actually a serial connection in disguise.  You can get a USB data cable that connects the meter to the Pi and lets you read an XML string from it, containing the current power and temperature.  You’ll need to hunt around the interwebs for this; search for “Current Cost Data Cable” to find one on Amazon or eBay.

Once you have the cable and it’s connected up, you can use a few lines of python to read the data.

import serial

# Read one line of XML from the meter's serial port.
serialObj = serial.Serial("/dev/ttyUSB0", 57600, timeout=6)
xml = serialObj.readline().decode("ascii", errors="ignore")
print(xml)

Running this should output something like:

<msg><src>CC128-v1.29</src><dsb>00484</dsb><time>20:16:49</time><tmpr>17.8</tmpr><sensor>0</sensor><id>00077</id><type>1</type><ch1><watts>00270</watts></ch1></msg>

You can then use a regular expression or an XML library to extract the data and do something useful with it. For example, the script below pulls out the power reading and sends it to a Graphite instance running on the Pi:

import re
import serial
import socket
import time

# Read one line of XML from the meter.
serialObj = serial.Serial("/dev/ttyUSB0", 57600, timeout=30)
xml = serialObj.readline().decode("ascii", errors="ignore")
print("xml: " + xml)

# Extract the current power reading, e.g. <watts>00270</watts>.
m = re.search(r"<watts>(\d+)</watts>", xml)
power = m.group(1)
print("Power: " + power)

# Occasionally the energy monitor returns an incorrectly low number,
# so ignore implausibly small readings.
if int(power) > 10:
    # Send the data to graphite (carbon's plaintext protocol on port 2003).
    sock = socket.socket()
    sock.connect(("localhost", 2003))
    sock.sendall(("house.power %d %d\n" % (int(power), time.time())).encode())
    sock.close()
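Alternatively, if you’d rather use an XML library than a regex, the standard library’s ElementTree can pull the same values out of the message:

import xml.etree.ElementTree as ET

msg = ET.fromstring(xml)            # xml is the string read from the serial port.
power = msg.findtext(".//watts")    # e.g. "00270"
temperature = msg.findtext("tmpr")  # e.g. "17.8"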

See the next blog post for details on how to configure Graphite on a Raspberry Pi.

If it all works, you’ll end up with nice graphs of your household power usage.

Setting up “graphite-api” & Grafana on a Raspberry Pi

Graphite is a great graphing system with a very simple API for importing data, and a lot of support from other tools.

There are two parts to a Graphite installation:

  • “Carbon”, which is the process that handles receiving and storing data
  • “graphite-web”, which provides a front-end and HTTP API

Graphite-web, however, is pretty complex to install, especially if you have minimal Python knowledge, with a number of dependencies (e.g. Django, MySQL) and associated configuration.  It’s also not the most elegant application to use.

As a result, a number of other front-ends have been developed, one of which is the excellent Grafana.  Using an alternative front-end means you only really need the HTTP API from Graphite, not the whole web application (with Django etc.), but the main Graphite project doesn’t support installing just this element.  There is, however, a project on GitHub that aims to provide exactly that: graphite-api.

This blog post will cover how to install carbon, graphite-api, and finally Grafana v1.

Installing Carbon

Carbon can be installed using apt:

apt-get install graphite-carbon

Once installed you should be able to start it with the standard “service carbon-cache start”.  However, this will silently fail, because for some inexplicable reason the package is configured to be disabled by default, and the init script only reports this in “verbose” mode, which, again, it isn’t by default.  So a default install will just silently fail to do anything!

To fix this, edit /etc/default/graphite-carbon and change the line below to true:

CARBON_CACHE_ENABLED=true

Then “service carbon-cache start” should start the service.

Check carbon is running with a short Python script along these lines, which pushes a single datapoint called “test.metric” to carbon’s plaintext port (2003):
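import socket
import time

# Push a single datapoint, "test.metric", to carbon's plaintext listener.
sock = socket.socket()
sock.connect(("localhost", 2003))
sock.sendall(("test.metric 42 %d\n" % time.time()).encode())
sock.close()
print("Sent test.metric")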

If this fails with a “socket.error”, check whether carbon is running with “ps -ef | grep carbon”, and check for errors in /var/log/carbon/console.log.

Installing graphite-api

Follow the “Python” instructions at http://graphite-api.readthedocs.org/en/latest/installation.html#python-package

Install the required dependencies:

apt-get install python python-pip build-essential python-dev libcairo2-dev libffi-dev

And then graphite-api itself:

pip install graphite-api

This will download and compile graphite-api.  If you get cryptic errors about “gcc”, check you have installed “build-essential” and all the required “*-dev” libraries.  Depending on your system, you may also need other dependencies, but “apt” should take care of those for you.

Configure graphite-api

Once installed, you need to create the configuration file.  Graphite-api will run without a config file, but its default file locations are different from the ones graphite-carbon uses, so we need to specify them manually.

Create “/etc/graphite-api.yml” with the following contents:

search_index: /var/lib/graphite/index
finders:
  - graphite_api.finders.whisper.WhisperFinder
functions:
  - graphite_api.functions.SeriesFunctions
  - graphite_api.functions.PieFunctions
whisper:
  directories:
    - /var/lib/graphite/whisper
carbon:
  hosts:
    - 127.0.0.1:7002
  timeout: 1
  retry_delay: 15
  carbon_prefix: carbon
  replication_factor: 1

If you want to change the data locations, ensure you edit “/etc/carbon/carbon.conf” as well to match.
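For example, the whisper directory above has to match the LOCAL_DATA_DIR setting in carbon.conf, which with the Debian package’s defaults looks like this:

# /etc/carbon/carbon.conf (excerpt)
[cache]
LOCAL_DATA_DIR = /var/lib/graphite/whisper/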

Deployment

Graphite-api doesn’t install a daemon like carbon does; it needs to be run inside a web server.  There are several options documented on the website.  The simplest (although not the most performant) is probably to use Apache and mod_wsgi:

apt-get install libapache2-mod-wsgi

Then just follow the documented instructions.

Create /var/www/wsgi-scripts/graphite-api.wsgi

# /var/www/wsgi-scripts/graphite-api.wsgi

from graphite_api.app import app as application

And /etc/apache2/sites-available/graphite.conf

# /etc/apache2/sites-available/graphite.conf
LoadModule wsgi_module modules/mod_wsgi.so
WSGISocketPrefix /var/run/wsgi
Listen 8013
<VirtualHost *:8013>

    WSGIDaemonProcess graphite-api processes=5 threads=5 display-name='%{GROUP}' inactivity-timeout=120
    WSGIProcessGroup graphite-api
    WSGIApplicationGroup %{GLOBAL}
    WSGIImportScript /var/www/wsgi-scripts/graphite-api.wsgi process-group=graphite-api application-group=%{GLOBAL}

    WSGIScriptAlias / /var/www/wsgi-scripts/graphite-api.wsgi

    <Directory /var/www/wsgi-scripts/>
        Order deny,allow
        Allow from all
    </Directory>
</VirtualHost>

Then symlink this into /etc/apache2/sites-enabled/

cd /etc/apache2/sites-enabled
ln -s ../sites-available/graphite.conf .

Finally, restart Apache:

service apache2 restart

This should start graphite-api on port 8013.

You can check this by browsing to http://<IP_OF_PI>:8013/render?target=test.metric

This should return a fairly dull graph showing the datapoint entered using the basic test Python script above. If you get an image back that says “No Data”, check you ran the test script successfully, and that the data paths in /etc/carbon/carbon.conf and /etc/graphite-api.yml match.
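You can also ask the render endpoint for raw JSON rather than an image, which is easier to inspect from a shell (format=json is part of the standard Graphite render API):

curl "http://<IP_OF_PI>:8013/render?target=test.metric&format=json"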

Any errors will be logged to the standard Apache error log at /var/log/apache2/error.log.

Installing Grafana

The final step is to install the Grafana front-end. The original Grafana is a pure HTML5 application that connects directly to the Graphite API and doesn’t require anything other than a web server to host the pages.  Grafana 2 has now been released, which as well as connecting to Graphite also provides its own backend, written in Go.

There aren’t prebuilt packages of Grafana 2 available for the Raspberry Pi, and building it from source would be quite a bit of time and hassle (if it’s even possible), so I’d recommend sticking with Grafana 1. The main limitation of Grafana 1 is that you can’t save dashboards directly from the GUI; to save a dashboard, you need to copy its JSON from the GUI and save it manually as a file on the Pi under “/var/www/grafana/app/dashboards/”.

Installation

  • Download the latest 1.x release from http://grafana.org/download/
  • Unzip this into /var/www/grafana
  • Copy “config.sample.js” to “config.js” and edit the datasources section to point at your Graphite instance above; this is likely to be http://IP_OF_PI:8013 (see the sketch after this list)
  • Open a browser, point it at http://IP_OF_PI/grafana, and you should get the Grafana UI.
    • If you don’t, check your browser’s JavaScript console for errors, and look for typos in the config.js file
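For reference, the edited datasources section of config.js ends up looking roughly like this; treat it as a sketch, since the surrounding boilerplate varies between 1.x releases:

// Excerpt from config.js - only the datasources section is shown.
datasources: {
  graphite: {
    type: 'graphite',
    url: "http://IP_OF_PI:8013",
    default: true
  }
},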

Explaining how to use Grafana is out of scope for this blog post, but have fun graphing all your r-pi stats!

Monitoring

Let’s face it, monitoring is hard.  We want to know every detail of our systems, from the state of the RAID, the latency of the storage, or the size of the Java heap, through to the number of failed login attempts or the speed of the database transactions.  Possibly even hotspots in your code, although pre-production profiling should really have caught those.  This is a huge range of metrics to try and capture in one place.  We also want to be alerted to problems without being spammed by false positives.  Combine that with every organisation having its own configuration and architecture, and a myriad of different products available, both open source and commercial, and it’s hard to know where to start.

This article is going to document some approaches to bringing a level of insight into a web farm based on Tomcat, running on Linux.  We’re not talking web-scale here; the companies deploying thousands of these things have the staffing to build their own custom solutions.  But a medium-sized organisation running a web farm with 20 or so web servers probably doesn’t have that level of resource.

So what do we want from a monitoring solution?

The most basic question is obviously “is my website up?”  Great, if that’s all you want then pingdom.com is your answer.  But knowing your website is up (or down) isn’t that helpful; if it went down, you’d probably know soon enough anyway.  So the next question is “how fast are my page responses?”, and more specifically “are the average response times in the last minute OK/bad/terrible?”  At least this might give you a chance to catch problems before the site blows up in your face.  From a Java application point of view, you might want to know if the number of garbage collections per minute has suddenly increased, or whether you’re about to run out of perm-gen space.

But crucially, you also need to know what the baseline norms are.  There’s little point in an alert saying the number of page faults per second is 150 if you have no idea whether that’s high or low.  I’m a massive fan of graphs: without needing to know anything about a system, if your website crashes at 9am and you can see from your monitoring application that a particular metric skyrocketed at the same time, you know where to look.  Simply logging onto a box and running [top|iostat|vmstat|etc] isn’t much help without knowing what the numbers are supposed to be.  And who has time to do all that when there is a problem?

So far this has all been about system monitoring, but as an application developer you will also have application metrics you want to capture: say, the number of completed purchases a minute, or the number of uploads of cute cats a second.  So how do we expose those in an easy way?
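One lightweight approach, in the same spirit as the Graphite examples earlier in this blog, is to push a counter straight to carbon whenever the event happens.  A sketch (the metric name and the localhost:2003 endpoint are illustrative):

import socket
import time

def record_metric(name, value, host="localhost", port=2003):
    # Fire a single datapoint at carbon's plaintext port and move on.
    sock = socket.socket()
    sock.connect((host, port))
    sock.sendall(("%s %s %d\n" % (name, value, time.time())).encode())
    sock.close()

# e.g. after a successful checkout:
record_metric("shop.purchases.completed", 1)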

Finally there is log monitoring.  Everything that moves writes to a log file, often in an unstructured, inconsistent way.  There’s no point having these logs scattered across file systems and servers.  We need a quick, searchable interface for reactive problems, and some way of raising automatic alerts from the logs.