Cloud Detection - Part Five

This post follows on at quite a distance from my last post on the subject; until recently I had not realised quite how long this project had been languishing on the bench. Like many of the electronics projects I undertake, this one needs to withstand being outside and exposed to the weather, and packaging it to handle harsher environments is always the most difficult part.

For the same reason as above, I have also changed one of the key measurement components, the humidity sensor.

I am also dropping the Arduino Uno in favour of the Arduino Yún. I wrote a brief article on the Yún some time ago.

Mammatus Clouds Spotted

I spotted some unusual clouds over the house the other day!

Panorama shot of mammatus over the house

I have always enjoyed identifying clouds, and I take pictures of any unusual formations when I can. This type, Mammatus, is pretty rare in these parts; I can count on one hand the number of times I have seen Mammatus clouds.

Arduino and Multiple MLX90614 Sensors Take Two

Over two years ago I wrote this article about giving an MLX90614 a new slave address, different from the factory default (0x5A), in order to have more than one individually addressable device on the bus.

As before, I am still using two of these MLX90614 sensors in my cloud detection project, and I have changed the slave addresses on these devices many times with very few problems.

Why am I writing a new post about this? Well, I have had quite a few questions about my original article, and I decided to try to make a few things a little clearer, with some updated code examples thrown into the mix.

What are we going to do in this article? We will focus on the following:

  1. Take a quick peek at the MLX90614 again
  2. Look at some updated code for changing slave addresses
  3. See the range of supported addresses
  4. Look at actually running the code
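As a small taste of what the address change involves: the SMBus write that stores a new slave address in the MLX90614's EEPROM must carry a valid PEC byte, a CRC-8 over the whole frame. The sketch below illustrates that checksum in Python rather than Arduino code, so the arithmetic can be followed on its own; the EEPROM command byte (0x2E) and the example new address (0x5B) are my assumptions from typical MLX90614 usage, so check the datasheet before writing to a real device.

```python
def crc8_pec(data):
    """SMBus PEC: CRC-8 with polynomial x^8 + x^2 + x + 1 (0x07), init 0x00."""
    crc = 0x00
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ 0x07) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

OLD_ADDR = 0x5A           # factory default slave address
EEPROM_ADDR_CMD = 0x2E    # assumed SMBus command for the slave-address EEPROM cell
NEW_ADDR = 0x5B           # hypothetical new address for illustration

# Bytes covered by the PEC for an SMBus Write Word to the address cell:
# [slave address shifted left with write bit, command, data LSB, data MSB].
# (In practice the cell is erased by writing 0x0000 first, then rewritten.)
frame = [OLD_ADDR << 1, EEPROM_ADDR_CMD, NEW_ADDR, 0x00]
pec = crc8_pec(frame)

# Candidate new addresses are the 7-bit I2C range; 0x00 is the
# general-call address and cannot be used as a device address.
valid = list(range(0x01, 0x80))
```

A handy property for checking an implementation like this: running the CRC over the frame with its own PEC appended must give zero.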

Getting Bookmark Data from del.icio.us

Why am I doing this? Well, I've been bookmarking links using del.icio.us since March 2009 and have, very slowly, built up a steady collection of over 900 bookmarks, most of which are probably still quite useful.

The ups and downs of social bookmarking service del.icio.us are very well documented here, so I won't go into all the details, but suffice to say that reading the aforementioned article gave me all the impetus I needed to go ahead and homestead my list of bookmarks. Simple, I thought.

Then came the snag...

We're sorry, but due to heavy load on our database we are no longer able to offer an export function. Our engineers are working on this and we will restore it as soon as possible.

Great... what now? Time to write a script, of course. I initially created this article using a Jupyter Notebook, which may, I hope, explain the recipe feel of the writing.

The Plan

Well, initially the plan was just to grab the bookmarks by scraping my del.icio.us pages from beginning to end and turn that data straight into some form of bookmark HTML. Thinking about it more, I decided to opt for a JSON data format; that way I could easily convert the data to any format I like at my leisure.
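The scrape-to-JSON step of that plan can be sketched with nothing but the standard library. The markup below is a hypothetical stand-in for a page of del.icio.us bookmarks (the real pages would need their actual element classes inspected first), but the overall shape looks like this:

```python
import json
from html.parser import HTMLParser

class BookmarkParser(HTMLParser):
    """Collect href/title pairs from anchor tags in a page of bookmarks."""

    def __init__(self):
        super().__init__()
        self.bookmarks = []
        self._href = None   # href of the anchor currently being read, if any
        self._text = []     # accumulated text inside that anchor

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.bookmarks.append({"url": self._href,
                                   "title": "".join(self._text).strip()})
            self._href = None

# Stand-in markup; a real run would feed each fetched del.icio.us page here.
sample = '<div><a href="https://example.com/">An example bookmark</a></div>'
parser = BookmarkParser()
parser.feed(sample)
as_json = json.dumps(parser.bookmarks, indent=2)
```

With the bookmarks held as a list of dictionaries, emitting the browser-importable bookmark HTML later is just another serialisation pass over the same data.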

Show Source Plugin Update

I received some good news this week: my Show Source plugin, the one inspired by Sphinx, has had its pull request accepted into the getpelican/pelican-plugins repo. This means that the plugin becomes a part of the standard Pelican plugin canon.

My next task will be to submit a small pull request to the developers of the theme I use on this site (pelican-bootstrap3), to accommodate a couple of small template changes that support Show Source automatically.

The officially accepted version of Show Source is available now right here.

Enjoy!

Creating AWS Data Pipelines with Boto3 and JSON

I have been doing a little work with AWS Data Pipeline recently, for ETL tasks at work. AWS Data Pipeline handles data-driven workflows called pipelines. The pipelines take care of scheduling, data dependencies, data sources and destinations in a nicely managed workflow. My tasks take batch datasets from SQL databases, process and load those datasets into S3 buckets, and then import them into a Redshift reporting database.

Because production database structures are frequently updated, those changes need to be reflected in the reporting backend service. For a couple of years I have struggled on with Amazon's web-based Data Pipeline Architect to manage those changes to the pipelines. This has been an onerous task, as the Architect does not really lend itself to managing a large set of pipelines. Here begins a little tale of delving into the AWS Data Pipeline API to find another way.
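One wrinkle worth flagging up front: the JSON definition that the Architect works with is not the shape the API's put_pipeline_definition call accepts; each object's fields have to be reshaped into key/stringValue or key/refValue entries. A minimal sketch of that conversion, using invented pipeline objects purely for illustration:

```python
def definition_to_api_objects(definition):
    """Convert a Data Pipeline definition in the architect-style JSON layout
    into the pipelineObjects structure the API expects: id and name stay at
    the top level, everything else becomes a list of key/value fields, with
    {"ref": ...} values mapped to refValue entries.
    """
    api_objects = []
    for obj in definition["objects"]:
        fields = []
        for key, value in obj.items():
            if key in ("id", "name"):
                continue
            # A list-valued field becomes several entries with the same key.
            values = value if isinstance(value, list) else [value]
            for v in values:
                if isinstance(v, dict) and "ref" in v:
                    fields.append({"key": key, "refValue": v["ref"]})
                else:
                    fields.append({"key": key, "stringValue": str(v)})
        api_objects.append({"id": obj["id"], "name": obj["name"],
                            "fields": fields})
    return api_objects

# Invented definition for illustration only.
definition = {
    "objects": [
        {"id": "Default", "name": "Default",
         "scheduleType": "ondemand",
         "role": "DataPipelineDefaultRole"},
        {"id": "CopyActivity", "name": "CopyActivity",
         "type": "ShellCommandActivity",
         "runsOn": {"ref": "Ec2Instance"},
         "command": "echo hello"},
    ]
}

pipeline_objects = definition_to_api_objects(definition)
# These objects could then be pushed to a pipeline with
# boto3.client("datapipeline").put_pipeline_definition(...).
```

Keeping the definitions as plain JSON files under version control, and pushing them through a converter like this, is what makes managing a large set of pipelines bearable compared with clicking through the Architect.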

The Show Source Pelican Plugin

I have always been a fan of the Sphinx Python documentation generator and it has, I think, a nice feature where you can check out the raw source of a generated piece of documentation - a Show Source link. I decided that developing a Pelican plugin to imitate that feature would be a great way of getting a little deeper into the Pelican code itself.

This second post of the series explains the use of the Show Source plugin that I developed from what I learned in the previous post. The following article has been, in part, reproduced in the ReadMe.rst file included with the plugin.

Basic Notes on Pelican Plugin Architecture

I have always been a fan of the Sphinx Python documentation generator and it has, I think, a nice feature where you can check out the raw source of a generated piece of documentation - a Show Source link. I decided that developing a Pelican plugin to imitate that feature would be a great way of getting a little deeper into the Pelican code itself.

This first post explores the Pelican plugin architecture and the basics of building a plugin.
