Thursday, 3 December 2020

Modifying the Ubiquiti CRM Point to run Cloud Key firmware

This is an extract from the information that I posted on the Ubiquiti forums in this topic. I'd like to be very clear that if you choose to run any of the commands on this page, you are doing this at your own risk. Don't proceed unless you're okay with potentially bricking your device without any way to recover.


I recently got a CRM Point (being well aware that it is not supported anymore by Ubiquiti) and started playing around with it. For the uninitiated: the CRM Point is a small ARM-based computer that can be powered over PoE. It uses the MediaTek MT7623 SoC and has 2 GiB of RAM. The idea behind the CRM Point was to install it on customers' networks, allowing a sysadmin to remotely manage compatible (Ubiquiti airMAX) network equipment. 

I noticed that the firmware isn't updated that regularly anymore, which is sad: the device has a limited amount of flash memory, and every time I run apt-get update && apt-get upgrade -y the read-write partition fills up. 

On top of that, most packages are so old that installing (for example) an up-to-date version of Java takes more than half of the available storage. Of course I had purchased this device because I intended to tinker with it. So I did exactly that. 

First exploration

I installed CRM.mtk7623.v0.6.0.a670e69M.170615.0748.bin on it (to my knowledge the latest firmware that was released for this hardware) and was happy to see that a clean install leaves the rw-partition 100% free. This got me thinking: if I can install the updated packages in the read-only partition, the read-write partition stays almost empty! Given that the read-only partition is situated at /dev/mmcblk0p6, I started trying some stuff out: 
root@control-point:/data# dd if=/dev/mmcblk0p6 of=/data/dd.img
2097152+0 records in
2097152+0 records out

1073741824 bytes (1.1 GB) copied, 101.841 s, 10.5 MB/s

root@control-point:/data# dd if=/data/dd.img of=/dev/mmcblk0p6
2097152+0 records in
2097152+0 records out

1073741824 bytes (1.1 GB) copied, 242.271 s, 4.4 MB/s
So writing to the read-only partition directly does not seem to be blocked in any way, and after a reboot everything still works just fine! Interestingly enough, the partition is 1.1GB while the read-only filesystem as reported by df is only 207MB. This means there is some room for additional tools and packages! 
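As a quick back-of-the-envelope check, the headroom works out to roughly this (treating df's 207MB as binary megabytes, which is an assumption; df's units depend on its flags):

```shell
# Rough headroom estimate: 1.1GB partition minus the ~207MB squashfs.
echo $(( (1073741824 - 207 * 1024 * 1024) / 1024 / 1024 ))   # 817 MiB to spare
```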

First attempt at a 'custom rom'

I extracted the squashfs from /dev/mmcblk0p6 onto /data/, ran mksquashfs on the squashfs-root folder in /data, and then dd'ed the resulting squashfs back to /dev/mmcblk0p6. While the filesystem is mostly fine, somehow my root/ubnt user got borked and I can't log in using ssh :(. I can however log in to the AirControl web UI. My theory is that this was caused by the layer _under_ the rw-overlay changing, which invalidated the stuff on top. Trashing the entire rw-layer fixes that. 

How to start modifying your read-only squashfs:
  • ssh to your crm point
  • cd to /data directory
  • apt-get update && apt-get install squashfs-tools
  • unsquashfs /dev/mmcblk0p6
  • Follow the step on this page about the policy.d file
  • Mount /dev, /dev/pts, /proc sections according to the copy-pasta here
  • chroot squashfs-root
  • mkdir /tmp
At this point you can run apt-get commands as if you are actually "running this system": adding packages, fixing /etc/apt/sources.list to also contain jessie-backports, updating existing packages. Just make sure that all of this still fits in the 1.1GB partition that it will eventually need to squeeze into. In my case I also removed the aircontrol and postgres packages. Then start your cleanup:
rm -rf /tmp
apt-get clean
# Next line makes sure that on factory reset a new SSH key is generated
rm /etc/ssh/ /etc/ssh/ /etc/ssh/ /etc/ssh/
rm -rf /usr/sbin/policy-rc.d
history -c
exit #(this is where you leave your chroot)
umount /proc /dev/pts /dev #In my case some of them didn't want to unmount. If this happens, just reboot the CRM point.
mksquashfs /data/squashfs-root root.sqfs
dd if=/data/root.sqfs of=/dev/mmcblk0p6
From this point on any command you run will likely fail because the rofs was overwritten while mounted, which understandably confuses the system. So just pull the power, plug it in again, and reset the device with a paperclip or something. It should boot up pretty fast and you are now running your custom squashfs image!

Next steps

So given that I was able to modify my squashfs I started to wonder... What if I could extract the kernel and rootfs from the cloud key firmware update and copy it to the flash of my CRM Point using DD? Would the kernel crash? Would the bootloader do some checksum check on the kernel partition? I didn't know, but I wanted to try. Especially since a lot of people on the forum over the years have said "you can't", "you'll brick it" and things like that. And I am pretty stubborn. 

 So first I tried to cross-flash the Cloud Key firmware using ubnt-systool fwupdate, but that failed. The tool that checks which product this is probably uses data from one of the partitions that I didn't touch. But that's actually perfect: I wouldn't want an accidental update to brick my hacked Cloud Key, so I guess this worked out just fine. 

 1. Determine flash layout

So as most people probably already know, embedded devices running Linux usually have multiple partitions. When analysing firmware images, dd is very useful for making backups of partitions, so that if you screw up but can still get the device to boot into some kind of recovery, you can restore them. So let's take a look at the partitions:
root@control-point:~# parted /dev/mmcblk0
GNU Parted 3.2
Using /dev/mmcblk0
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: MMC 004GE0 (sd/mmc)
Disk /dev/mmcblk0: 3937MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name      Flags
 1      262kB   786kB   524kB                uboot
 2      786kB   1049kB  262kB                config
 3      1049kB  1311kB  262kB                factory
 4      1311kB  34.9MB  33.6MB               kernel
 5      34.9MB  68.4MB  33.6MB               recovery
 6      68.4MB  1142MB  1074MB               rootfs
 7      1142MB  2753MB  1611MB  ext4         appdata
 8      2753MB  3937MB  1185MB  ext4         userdata
As we can see there are quite a few partitions. Let's back them all up first. 

 2. Back up all partitions

This makes a copy of every individual partition onto your SD card (which should be mounted at /data):
root@control-point:~# dd if=/dev/mmcblk0p1 of=/data/mmcblk0p1.img
1024+0 records in
1024+0 records out
524288 bytes (524 kB) copied, 0.0360363 s, 14.5 MB/s
root@control-point:~# dd if=/dev/mmcblk0p2 of=/data/mmcblk0p2.img
512+0 records in
512+0 records out
# repeat for every partition number (so till mmcblk0p8)
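Rather than typing eight dd commands, the backups can be wrapped in a loop. Here is a sketch that exercises the same pattern on scratch files (the fakedev/ directory stands in for the real /dev/mmcblk0p1 through p8, so the loop can be tried without the device):

```shell
# Scratch demo of the backup loop: fake "partitions" stand in for
# /dev/mmcblk0p1..p8.
mkdir -p fakedev data
for n in 1 2 3 4 5 6 7 8; do
  dd if=/dev/zero of="fakedev/mmcblk0p${n}" bs=1k count="$n" 2>/dev/null
  dd if="fakedev/mmcblk0p${n}" of="data/mmcblk0p${n}.img" 2>/dev/null
done
# Checksums let you verify the images later, before restoring anything.
md5sum data/mmcblk0p*.img > data/backups.md5
wc -l < data/backups.md5   # 8
```

On the CRM Point itself you would point the loop at /dev/mmcblk0p${n} and write the images to /data.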
Next I copied these files from the SD card to my NAS, but of course just putting the SD card in a card reader is also perfectly fine.

root@control-point:/data# scp mmc* root@
The authenticity of host ' (' can't be established.
ECDSA key fingerprint is c4:0f:f2:89:57:df:bb:85:ee:ba:fb:41:e6:d3:90:9e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '' (ECDSA) to the list of known hosts.
root@'s password:
mmcblk0p1.img                               100%  512KB 512.0KB/s   00:00
mmcblk0p2.img                               100%  256KB 256.0KB/s   00:00
mmcblk0p3.img                               100%  256KB 256.0KB/s   00:00
mmcblk0p4.img                               100%   32MB   8.0MB/s   00:04
mmcblk0p5.img                               100%   32MB  10.7MB/s   00:03

 3. Examining the DD-images and the firmware files

For this I used the tool binwalk. It walks through a firmware image looking for signatures that identify known 'parts' of the image. So first I ran it on the latest firmware updates for the UCK and the CRM Point:

$ binwalk UCK.mtk7623.v1.1.13.818cc5f.200430.0950.bin

0             0x0             Ubiquiti firmware header, header size: 264 bytes, ~CRC32: 0x8ED3EC98, version: "UCK.mtk7623.v1.1.13.818cc5f.200430.0950"
260           0x104           Ubiquiti partition header, header size: 56 bytes, name: "PARTkernel", base address: 0x00000000, data size: 0 bytes
324           0x144           uImage header, header size: 64 bytes, header CRC: 0x3960E04E, created: 2020-04-30 10:00:43, image size: 7079048 bytes, Data Address: 0x80008000, Entry Point: 0x80008000, data CRC: 0x7D24D70D, OS: Linux, CPU: ARM, image type: OS Kernel Image, compression type: none, image name: "Linux-3.10.20-ubnt-mtk"
14642         0x3932          xz compressed data
14863         0x3A0F          xz compressed data
7079436       0x6C060C        Ubiquiti partition header, header size: 56 bytes, name: "PARTrootfs", base address: 0x00000002, data size: 0 bytes
7079500       0x6C064C        Squashfs filesystem, little endian, version 4.0, compression:gzip, size: 349822683 bytes, 29395 inodes, blocksize: 262144 bytes, created: 2020-04-30 10:00:22

$ binwalk CRM.mtk7623.v0.6.0.a670e69M.170615.0748.bin

0             0x0             Ubiquiti firmware header, header size: 264 bytes, ~CRC32: 0x604E5185, version: "CRM.mtk7623.v0.6.0.a670e69M.170615.0748"
260           0x104           Ubiquiti partition header, header size: 56 bytes, name: "PARTkernel", base address: 0x00000000, data size: 0 bytes
324           0x144           uImage header, header size: 64 bytes, header CRC: 0xE24B3D5A, created: 2017-06-15 14:57:10, image size: 6586640 bytes, Data Address: 0x80008000, Entry Point: 0x80008000, data CRC: 0x2526FE9E, OS: Linux, CPU: ARM, image type: OS Kernel Image, compression type: none, image name: "Linux-3.10.20-ubnt-mtk"
14642         0x3932          xz compressed data
14863         0x3A0F          xz compressed data
6587028       0x648294        Ubiquiti partition header, header size: 56 bytes, name: "PARTrootfs", base address: 0x00000002, data size: 0 bytes
6587092       0x6482D4        Squashfs filesystem, little endian, version 4.0, compression:gzip, size: 216177339 bytes, 17413 inodes, blocksize: 262144 bytes, created: 2017-06-15 14:56:50
So clearly the first 'segments' are Ubiquiti-specific. Then at offset 324 we see the declaration of a 7079048-byte image. Given that the read-only filesystem is a lot larger than 7 megabytes, this is probably the kernel. The last entry in the table is the actual squashfs filesystem. And this is where it gets interesting! 
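A quick sanity check on binwalk's numbers: the uImage offset plus its 64-byte header plus the reported kernel image size should land exactly on the next partition header, and it does.

```shell
# 324 (uImage offset) + 64 (uImage header) + 7079048 (kernel image size)
echo $((324 + 64 + 7079048))   # 7079436, the PARTrootfs offset in the UCK file
```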

 4. Testing a theory 

So let's assume that the images in the firmware update files are copied 1:1 to the partitions (so: kernel part goes to mmcblk0p4, rofs goes to mmcblk0p6) and no other security/encryption happens in that process. We can test this theory by comparing the kernel partition image we created on the CRM Point to the kernel image inside the firmware update file:

dd if=CRM.mtk7623.v0.6.0.a670e69M.170615.0748.bin skip=324 count=6586704 iflag=count_bytes bs=1 of=extracted_crm_kernel

So we're copying data starting at offset 324 (where the uImage header starts) and we take image size + header size (6586640 + 64 = 6586704) as the count argument. Comparing the extracted_crm_kernel file to mmcblk0p4.img confirmed my suspicion. The only difference between the two files is that the partition we extracted from the CRM is filled with zeros where the kernel data ends; for obvious reasons the firmware update does not contain all those zeros. We're going to take a wild guess and assume that this is also the case for the rootfs partition:
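The skip/count byte arithmetic can be tried risk-free on a scratch file first. A sketch with a made-up 324-byte header and a made-up 1000-byte "kernel":

```shell
# Build a fake firmware: 324 header bytes followed by 1000 "kernel" bytes.
printf 'H%.0s' $(seq 324)  > fw.bin
printf 'K%.0s' $(seq 1000) >> fw.bin
# Same dd pattern as above: skip the header, copy exactly the image size.
dd if=fw.bin of=kernel.img skip=324 count=1000 iflag=count_bytes bs=1 2>/dev/null
wc -c < kernel.img   # 1000
```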

dd if=UCK.mtk7623.v1.1.13.818cc5f.200430.0950.bin skip=324 count=7079112 iflag=count_bytes bs=1 of=extracted_uck_kernel
dd if=UCK.mtk7623.v1.1.13.818cc5f.200430.0950.bin skip=1769875 bs=4 of=extracted_uckrootfs.squashfs
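The second command uses a small trick worth spelling out: the squashfs starts at byte 7079500, and dd's skip counts in blocks, so bs=1 would be painfully slow. 7079500 happens to be divisible by 4, so bs=4 with skip=1769875 lands exactly on the squashfs superblock:

```shell
echo $((1769875 * 4))   # 7079500, the squashfs offset binwalk reported
```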

5. Crafting our update file

Earlier in this post I showed that using dd to overwrite a partition works just fine, but everything that reads from the partition (especially the overlay one) is screwed up because the underlying data is completely different. Unfortunately, swapping the kernel but not the filesystem may also result in a brick. The kernel and rootfs partitions have a recovery partition in between, so I wanted to make a single file that includes the kernel, recovery and rootfs partitions and can be written with dd in one go. I took a quick peek in mmcblk0p5.img and realised the recovery partition is filled with zeros. This explains why people have no way to recover once they brick this thing :D.

$ cp mmcblk0p5.img extracted_uck_kernel_padded.img # take 32MiB img filled with zeroes
$ dd if=extracted_uck_kernel conv=notrunc of=extracted_uck_kernel_padded.img
13826+1 records in
13826+1 records out
7079112 bytes (7.1 MB, 6.8 MiB) copied, 0.177849 s, 39.8 MB/s
$ dd if=mmcblk0p5.img >>extracted_uck_kernel_padded.img # insert 32MiB of zeroes
65536+0 records in
65536+0 records out
33554432 bytes (34 MB, 32 MiB) copied, 1.16823 s, 28.7 MB/s
$ dd if=extracted_uckrootfs.squashfs >>extracted_uck_kernel_padded.img # then append the squashfs
683248+1 records in
683248+1 records out
349823248 bytes (350 MB, 334 MiB) copied, 11.6135 s, 30.1 MB/s
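The conv=notrunc padding trick above can also be tried on scratch files first. A sketch with a 1 MiB zero pad (standing in for the 32 MiB mmcblk0p5.img) and a fake kernel:

```shell
# pad.img stands in for the zero-filled recovery partition image.
dd if=/dev/zero of=pad.img bs=1M count=1 2>/dev/null
printf 'K%.0s' $(seq 1000) > kern.img        # fake kernel
cp pad.img combined.img
# conv=notrunc writes the kernel at the start WITHOUT shrinking the file,
# so combined.img keeps its full padded length.
dd if=kern.img of=combined.img conv=notrunc 2>/dev/null
cat pad.img >> combined.img                   # append the "recovery" zeroes
wc -c < combined.img   # 2097152, i.e. two full 1 MiB blocks
```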
So next up: copy extracted_uck_kernel_padded.img onto the SD card so we can flash it!

 6. Flashing

This is the part where everything either succeeds, or fails. This is your last chance to choose safety. If you enter these commands and brick your device, you are on your own. Neither Ubiquiti nor I can provide support to you if this fails. First off, let's determine where we have to write this file to.

root@control-point:/# parted /dev/mmcblk0 'unit s print'
Model: MMC 004GE0 (sd/mmc)
Disk /dev/mmcblk0: 7690240s
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start     End       Size      File system  Name      Flags
 1      512s      1535s     1024s                  uboot
 2      1536s     2047s     512s                   config
 3      2048s     2559s     512s                   factory
 4      2560s     68095s    65536s                 kernel
 5      68096s    133631s   65536s                 recovery
 6      133632s   2230783s  2097152s               rootfs
 7      2230784s  5376511s  3145728s  ext4         appdata
 8      5376512s  7690206s  2313695s  ext4         userdata
Ok, cool. So because we have an image file that spans kernel (32MiB), recovery (32MiB) and rootfs (the rest), we can just tell dd to start writing at sector 2560 until we reach the end of the file. This works because the new rootfs is larger than the previous one. Otherwise it would have been wiser to append some more zeroes at the end of our image, just to make sure we don't have weird garbage trailing our actual partition data.
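The offset is easy to double-check against the byte-unit table from step 1: sector 2560 at 512 bytes per sector is exactly where the kernel partition starts.

```shell
echo $((2560 * 512))   # 1310720 bytes, the ~1311kB kernel start in the first table
```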

root@control-point:/# dd if=/data/extracted_uck_kernel_padded.img of=/dev/mmcblk0 seek=2560
814320+1 records in
814320+1 records out
416932112 bytes (417 MB) copied, 60.8669 s, 6.8 MB/s

So from this point on your stick will be a little confused: SSH logins might fail, and that kind of good stuff. For me a normal paperclip reset wasn't enough; I had to do a reset from the web interface as well (or SSH... not 100% sure anymore). But after the reset this thing behaves like a UniFi Cloud Key! Enjoy!

Thursday, 6 December 2018

Customise Hue Bulb Startup Behaviour after a power cut

For years people have been wondering why and when Philips would allow users to customise startup behaviour after a power cut. No more! Philips has recently updated the firmware of the bridge and bulbs to allow for a custom power-on behaviour. Unfortunately for now it seems to allow only color temperature (whites) and brightness. But it’s better than nothing!

Open the bridge debug page

  • Open a browser (preferably on a PC or Mac)
  • Go to to see the IP address of your Hue bridge (it is called internalipaddress). In my case it is
  • Open your favourite browser and enter replacing my IP address with the IP you found in the previous step.

    You should see the following page:

Get an API key

If you don’t have an API key yet you can do the following:
  • Replace the default value (/api/1234) with /api
  • Paste the following content in the ‘message body’ field: {"devicetype": "debug_credential" }
  • Press the ‘POST’-button on the web page
  • Run to your Hue Bridge and press the round button on it
  • Press the ‘POST’-button on the web page again
  • Copy the value for “username” to some text editor so you won’t lose it. This value represents your API key.

Identify the lights to modify

In the URL field of the debug page you can append your username, for example /api/0123456789012345678901234567 if your api key is 0123456789012345678901234567.

Next you can get a list of lights by appending /lights to the URL. Then you can press the GET button.

Every light that allows custom startup behaviour will have a startup key in the config property (see image below).

Write down all the ID’s of the lights that you want to modify and that allow custom startup behaviour.

Example showing that the light with ID 1 allows custom startup configuration. You can also see 2, which shows the ID of the second light in the response.

Modify the lights

Append the ID of the light you want to customise to the URL. As a sanity check, press the GET-button and verify that you get the status of the light in the response (no errors).

Paste the following in the Message Body field:
  "config": {
    "startup": {
      "mode": "custom",
      "customsettings": {
        "bri": 254,
        "ct": 500
  • Adjust bri (brightness, 1-254) and ct (colour temperature, ranging from 153 (6500 Kelvin, bright white) to 500 (2000 Kelvin, warm white)).
  • Press the PUT button
  • Repeat for the other lights
Now to verify that it works, turn off the physical power switch to your light, wait for approx 5 seconds, and flip the switch again. Enjoy!

Sunday, 16 July 2017

HP Microserver Gen8 + i3 2120: does it ECC?

So the interwebs makes a lot of claims about whether the Sandy Bridge generation i3's support ECC memory. For example here, here and here. Intel ARK does not show any information that mentions ECC support for the i3-2120.

Now as some of you have read, I've purchased an HP Microserver Gen8 with a Celeron G1610T inside, which does support ECC. I purchased a second-hand Intel Core i3-2120, but as soon as it arrived my colleague said "well that's lame, it doesn't support ECC". I researched the internet and came to a sad conclusion: there seemed to be no official support for ECC. Why risk ruining a "production" server with a non-ECC-capable CPU? So the CPU disappeared into my desk.

A few months later

I re-did some of my research and realised that HP did offer the Microserver with i3 processors from the Sandy Bridge generation. This means the hardware should be capable, and knowing that most CPUs from a certain family probably come from the same wafers, it isn't too far-fetched to assume that most (if not all) of the chips from the same series support ECC.

Long story short

I'm running the i3-2120 in the Microserver Gen8 now and it works perfectly. ECC is shown as enabled in both the iLO and unRaid. No issues whatsoever. An i3 CPU of this generation can be found second-hand for approximately 25 to 45 euros. And don't worry about the TDP unless you're hammering the system 24/7; my temperatures haven't gone up at all :). So I can recommend this upgrade to anyone wanting some more oomph from their Microserver Gen8 with a G1610T in it (which is a great deal on its own!). 

Sunday, 20 November 2016

LEDE project: prevent having to re-flash the latest nightly build every time you need a new package

I really admire the LEDE project and the reasons why they decided it was necessary to fork openwrt and continue in a way they saw fit. Unfortunately, as they only started recently (around May 2016), there are no official releases yet, which means there are only nightly builds.

This 'guide' works for both openwrt and LEDE nightly builds. The reason: every night the packages and dependencies get rebuilt, and packages that rely on a specific kernel/kernel-module version cannot be installed at a later moment (once a new nightly build has replaced the previous one online). To solve this we can make a copy of all packages for a given nightly build. In a nutshell:

  1. Download the firmware file from the LEDE/openwrt site
  2. Run the commands that download all packages from the LEDE/openwrt site
  3. Install/upgrade to recent build
  4. Run a local webserver
  5. Modify the package repository URL's to match your local webserver

So lets get started!

Download the firmware file

This is easy and probably something you've done before. If not: check out the architecture of your router and try to find whether openwrt/LEDE already supports your router. If they don't you're out of luck for now. For this guide I'll be using the TP-Link Archer C7 which is contained in the ar71xx builds of both LEDE and openwrt. 

Run these commands that download all packages from the LEDE/openwrt site

wget -m --no-parent -e robots=off
wget -m --no-parent -e robots=off
wget -m --no-parent -e robots=off
wget -m --no-parent -e robots=off
wget -m --no-parent -e robots=off
wget -m --no-parent -e robots=off

The commands above should download all the packages that LEDE references within its "software distribution feeds". For openwrt, the references should be replaced with the corresponding openwrt URLs. If you already have a version of openwrt/LEDE installed, you can simply wget every one of the URLs in the list instead of the URLs I provided above. Check out the 'System > Software' => Configuration tab for a list of feeds used by the current firmware.

After downloading the feeds you might want to move the contents of the folder to a new folder. I'm using /Users/flix/Documents/lede-snapshots here, as the /Users folder can be shared with Docker containers by default if you're using macOS. Not sure how permissive Linux is in this regard.

Install/upgrade to the downloaded build

After downloading the packages you can install the firmware. To be sure that your 'offline' package feeds actually will work I suggest not (re)installing any of your packages just yet. Just the firmware.

Run a local webserver

Of course, the goal of downloading the packages is to serve them at a later moment in time. Let's get it started:

docker run --name nginx -v /Users/flix/Documents/lede-snapshots:/usr/share/nginx/html:ro -d -p 8080:80 nginx

The above command will pull an nginx container from docker hub and tell it to serve your local folder (containing the packages) on port 8080 from the machine you're running on. Of course you can copy the files to your home server, NAS, raspberry pi or any other device that can run a web server permanently, but for my use case (occasional desire to try out a package) I find spinning up a docker container the easiest.

Modify the package feed URLs

So now we can just swap the relevant part of each feed URL with the IP address of our webserver. That's it! You can now install packages from your local package mirror. Good luck!
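On the router itself the swap can be scripted with sed on the opkg feed list (on LEDE/openwrt this lives in /etc/opkg/distfeeds.conf). A sketch on a scratch copy, with a made-up feed name and hostname:

```shell
# Scratch copy of a feeds file; the feed name and host are hypothetical.
cat > distfeeds.conf <<'EOF'
src/gz snapshots_base http://downloads.example.org/snapshots/packages/base
EOF
# Rewrite scheme+host to the local mirror, keeping the feed path intact.
sed -i 's|http://[^/]*|http://|' distfeeds.conf
cat distfeeds.conf
```

After editing the real file on the router, run opkg update to fetch the package lists from your mirror.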

Monday, 7 November 2016

The HP Microserver Gen8 for EUR 200. Is it worth it?

So recently our Drobo at QwikSense (my other company) died. Which wasn't really a problem, except that it was storing our Time Machine backups for my computer and my colleagues' MacBooks. So I was looking for a reliable storage solution with room to spare for an extra disk and preferably with ECC memory.

In the NAS world you're out of luck if you're looking for something at around €200. Most of the time this gets you an ARM device with HDD slots. On top of that it locks you into the vendor's firmware (and support), and most of the time building something yourself with the provided kernel/firmware sources is not enough to create a robust solution with a 'simple' user interface that just does what it should.

Of course we could build something ourselves: whack a Mini-ITX board in a matching case and voilĂ , hard disks attached to your network. But I don't have the time to do this, and the number of Mini-ITX boards sporting an ECC-capable CPU/RAM combo at a reasonable price is close to zero.

Our saviour

Enter the HP Gen8 Microserver. The model with a Celeron G1610T and 4GB ECC(!) RAM can currently be purchased in the Netherlands for €203 at the cheapest reliable internet store at the time of writing! Now the strange (and awesome) thing about this tiny server is that its CPU, memory and motherboard all support ECC! Can you believe it? And as an extra, HP's iLO 4 management solution is also present: you can turn this thing on remotely if it's switched off, and forcefully reboot it if the kernel crashes. It has features that "real" servers have, but in a tiny, silent package that looks AWESOME.

Note that the image above was ripped from Also note that this is not the exact same model that you can get for such a low price. My model did not come with an optical drive, but who needs that anyway?

Current setup

We still had some drives from the Drobo, so we put those in and added a 128GB SSD from Plextor to act as a caching drive. This confirmed my suspicion that the Drobo was in fact dead as a dodo; the drives were all okay!

Because some of the drives that were used in the Drobo were WD Greens (and everybody is always crying about how WD Greens drop out of RAID arrays) I decided to WDIDLE3 them. The easiest way for me was to get UBCD, put it on a USB drive and boot the Microserver from that. Now I must stress that you should NOT connect ANY other drives than the ONE drive you want to flash (except for USB disks of course, they don't count).

When booting from the USB drive, open "Disk Management" or something similar (sorry, I forgot what it actually was) and then select WDIDLE3 at the bottom of the list of tools. You might get a list with a lot of choices and a countdown. Let the countdown pass and wait for a DOS-like prompt. At the prompt enter the following command:

WDIDLE3.exe /s300 (for 5 minutes).

That's it. Power down the system and repeat for any WD Green drives you have lying around. Unraid time!


Since I saw LinusTechTips' 7 Gamers 1 CPU build I was very interested in unRaid as a storage solution. While I'm still running my Netgear ReadyNAS 4 Ultra at home (running ReadyNAS OS6 while Netgear only provided updates on OS4) I have wanted to own a machine like this. Docker on the ReadyNAS wouldn't work (I believe the kernel was too old; not sure if that is still the case since they migrated to Jessie in the latest firmware update) but on unRaid the sky is the limit, as VMs, Docker and plugins can all be used out of the box. There is a very active community providing solutions to run Plex Media Server, Resilio Sync, CouchPotato, SabNZBD and many more media-management tools. As we use Resilio Sync at work, I decided to install that. 

Note that the basic license for unRaid allows a maximum of 6 internal drives (== SATA) while the MicroServer only has 5 SATA ports (the fifth port is for the optical drive). The USB drive does not count towards the number of drives. Unless you're planning on adding a SAS card and an external storage enclosure you'll be fine with a basic license. And if you're not, you probably weren't planning on keeping the budget low anyway.

Photo from a review on

Power consumption

While I've read different things about the Gen8 Micro's power consumption, I've not been able to measure it myself yet. Hopefully I'll be able to report on that soon.

Upgrades (and potential for win)

The Gen8 Microserver has a regular desktop form-factor Intel CPU in socket 1155. While the stock CPU has a TDP of just 35 watts, many people have retrofitted their Gen8 with beefier CPUs. This is one of the cheapest ways to create a powerful, compact and mostly silent reliable server for a small company or for media streaming. Especially if you have a few drives lying around, this project (including CPU upgrade, SSD cache disk and unRaid license, but excluding HDDs) would set you back around €420. Sounds like a sweet deal to me! But even without the CPU upgrade it's a great deal. Let me know what you think!

Sunday, 21 August 2016

Energy measurement with Domoticz and Grafana. Yes, it's easy!

A while ago I started tracking the power and gas consumption at home. To do this I read the meters every Sunday afternoon, which gave me some insight into the power consumption. Unfortunately this also means a measurement resolution of a week, so you can't really change your habits and see the effect on power/gas consumption.

Sunday, 14 August 2016

11 Months Later: About productivity and mental health as a software dev

Recently I was asked whether I could share the tips and tricks that I still implement from my post from about 11 months ago. At first, I wasn't even sure that I would be able to name things I've learned and managed to keep doing.

Then I started thinking: is it even important that I can name them? Do I have to be consciously aware of the improvements I've made? But maybe the more important thing to realise is: even if I gain a single improvement from reading a book, something tiny that I did not do before, it is a gain and it is worth it! On top of that, everyone's situation is different. This is why the stuff that helped me might not necessarily work for you.

So in my opinion books like these should be used more like frameworks, toolboxes, but not as a manual or bible. Similar to how this post describes the concept of Scrum implementation:
The Scrum framework leaves different options and tactics to play the game, ways that are at any time adopted to the context and circumstances.
Even if you implement only a small subset of the possibilities that Scrum provides, you're still benefiting from it and you can choose to combine it with other tools within (or even outside) the Scrum framework to streamline (optimise, improve, whatever) your work(flow) even more. This is something that is explained based on examples and anecdotes in the book Scrum & XP from the trenches, which you can download for free!

In conclusion

While I don't want to sum up the things I do better because they might not work for you, I'll list the things I currently do and things that I wish I would be better at implementing. These are from Blueprint of a productive programmer as the other book (Becoming a better programmer) is basically a huge list of great advice that you should probably read every 4 weeks until you really think you're implementing > 10% of it :).

Things I have learned and am able to implement (at least partially):

  • Chapter 2 from Blueprint of a productive programmer: 
    • Minimise Distractions, Stay off Facebook (I actually uninstalled FB and WhatsApp from my phone)
    • Avoid meetings (unless it's REALLY clear what needs to be discussed for both parties and other means of communication will NOT suffice)
    • Commit to repository often (especially if you have trouble focussing because of stuff like A.D.D.)
Stuff that I really want to do better (but have forgotten to actually keep doing)
  • Chapter 5 from Blueprint of a productive programmer:
    • Eat the right food
    • Take regular breaks
    • Prevent or treat RSI
At the same time I found the book as a whole to give me great insight into common pitfalls in your daily routines as a programmer. If you're a programmer (and even if you're a pretty good programmer) I'm sure you will gain something from reading it :). Again, here are the books:

Becoming a better programmer by Pete Goodliffe:

O'Reilly StoreAmazon (Kindle)

The Blueprint for a Productive Programmer by Moshfegh Hamedani:

Amazon ebook