Portable Git

A few months back I posted an article about setting up a portable Python environment. Just as important as having a nice environment to work in is having all your work properly backed up for when things go wrong!

I use Git to keep track of all the changes I make to programming projects, journal articles and conference papers. I am also using it to back up and keep tabs on my final dissertation. This is really useful, not just for when things go wrong, but also for going back to previous versions of, say, a paper to recover text you later deleted.

Setting up a portable version of Git is super easy. Just download the portable version for your operating system at this website, put it on your USB stick and use it whenever you want. The trickier part is actually creating a Git repository and keeping everything synced.

GUI environments exist for maintaining Git repositories on your computer, but I actually think it is easier to do everything from the command line in this case. If you are working on these projects by yourself, you can keep things relatively simple and only need to know a few basic commands. First run the ‘git-cmd.exe’ file (if you are running Windows) and then use the ‘cd’ command to get to the directory you want to back up.

git init

Run this command in the folder you want to keep track of; it can contain code, pictures, documents or whatever else you want backed up. You now have an empty Git repository on your computer. To add all the files in that folder, run:

git add .

Now all the files are ‘staged’, or ready for committing. Next, type:

git commit -m "message telling me what this commit is"

This command ‘commits’ the ‘staged’ files in the current folder to the current ‘branch’. In other words, whatever was in the folder when you typed ‘git add .’ is now backed up as a saved point in time that you can retrieve and go back to later.

Typically you will want to save your files on a remote server (in the ‘Cloud’) so they can be retrieved from wherever you are. If you are at a university you should have access to a private GitLab account. GitHub can also be used for free if you are willing to share your work with everyone; otherwise you have to pay a small fee to keep it private. Bitbucket is another hosting service that offers free accounts.

Once you have created an online repository for the project you are backing up, you need to tell the Git program running on your computer where the online repository is. You should have been given a URL when you created your online project, something like ‘https://user@gitlab.com/username/Project.git’. Once you have this, you can link it up with the repository on your computer by typing:

git remote add origin https://user@gitlab.com/username/Project.git

where you should replace the above URL with your real one.

Once you have this all set up, all you need to do is type:

git push -u origin master

That will send all your files over to your online repository and keep them safe. In future, whenever you want to save the current state of your folder just type these commands:

git add .
git commit -m "something about the current save"
git push

That is about all you need to know for 90% of the time. You can do the same thing on another computer and use the following command to retrieve the last saved state:

git pull

That’s it!
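To see how an earlier version can be recovered (the scenario mentioned at the start, where text you deleted is still in the history), here is a minimal throwaway session; the folder and file names are illustrative:

```shell
# A minimal sketch: save two versions of a file, then recover the first.
mkdir demo && cd demo
git init
git config user.email "you@example.com"   # identity needed for committing
git config user.name "Your Name"
echo "important paragraph" > paper.txt
git add . && git commit -m "first draft"
echo "paragraph deleted" > paper.txt
git add . && git commit -m "second draft"
# restore the file as it was one commit ago
git checkout HEAD~1 -- paper.txt
cat paper.txt     # the deleted text is back
```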

One other thing that can happen: you change one or two unimportant things on the other computer, then try to pull the last save, and Git tells you that the two versions of the repository no longer match. If that happens, you can force the folder on your computer to match the online repository using these commands:

git fetch --all
git reset --hard origin/master

That will wipe whatever changes you made on your local computer and replace them with the online version, so be careful!
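If you would rather not wipe the local edits, a gentler alternative is ‘git stash’, which shelves them so you can pull safely and bring them back later. A throwaway demo with illustrative names:

```shell
# Demo of stashing local edits instead of hard-resetting.
mkdir stash-demo && cd stash-demo
git init
git config user.email "you@example.com"
git config user.name "Your Name"
echo "saved line" > notes.txt
git add . && git commit -m "initial save"
echo "unsaved tinkering" >> notes.txt
git stash          # shelve the local edit; the folder is clean again
                   # (this is where you would run 'git pull')
git stash pop      # bring the shelved edit back if you still want it
```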

You can start making things more complicated with branches representing different version histories of the same repository, but if you are just using Git to keep basic track of your work these commands are all you will need.

======== CHEAT SHEET =========
---- Start a new repository ----
git init
git remote add origin <url>

---- Save current state ----
git add .
git commit -m "message about save"

---- Upload repository to online server ----
git push -u origin master   # First time
git push   # Any time afterwards

---- Get repository from online server ----
git pull

---- Overwrite local repository with online version ----
git fetch --all
git reset --hard origin/master


I hope this helps some people get their heads around using Git. It is really useful even for small projects and I don’t think it needs to be super complicated to use.





Does energy matter?

Many of my posts look at how the energy efficiency of wastewater treatment can be improved; in fact, one of my first posts looked at exactly this. At large municipal wastewater treatment works, which constantly treat the waste produced by whole towns, every little piece of energy saved helps cut costs. If we really could reduce the energy usage of wastewater treatment down to zero, what sort of effect would that have on a country like Germany?

Energy usage by industry in Germany, 2014

So the energy usage of wastewater treatment is part of “Other” in the above chart. How much of “Other” is it?

Energy usage for wastewater treatment in Germany

You see that little red strip up there? That is the total energy usage of all wastewater treatment plants in Germany compared to other industries. Although it is only a small blip on the chart, it adds up to one of the larger energy consumers for municipalities. This is all good, but what if we start looking at industry and some of the bigger pieces of the pie?

One interesting idea is to treat the highly concentrated wastewater from certain industries on-site. This makes the job of the municipal wastewater treatment works easier and should help reduce the wastewater disposal fees required of the factory. Some places where this makes sense include paper production, milk processing, meat processing and abattoirs, and many food and beverage production facilities. These processes produce waste streams with very high organic loading, which can often be treated using anaerobic digestion.

A big advantage of anaerobic digestion that often gets touted is its ability to produce biogas as a potential energy source. This should make it super attractive to industries that need a system for treating their waste, right?

Well… Maybe. The big factor that comes into play is cost. Let’s say we are living in a perfect world where the treatment plant is running and most of its cost has been offset by the reduced waste disposal fees the factory now pays. What about all that biogas being produced? How much of an effect would the energy recovered from biogas have on a large factory? Let’s look at energy costs for industrial use in Germany for an example week; the blue line below is the continuous average price of energy in €/MWh.

Source: https://www.energy-charts.de/power_de.htm

What the hell is happening at the end there?! The actual price of energy is negative; people are paying me to use more energy now, or what? Yes, actually. Even if you ignore the negative prices (a phenomenon often blamed on the priority given to renewable energy), you can see the change in energy prices even within a single day. If you are running a large factory and someone says you can save so much money by building a biogas plant and processing your waste into renewable energy, and someone else says you can save even more money by just running one of your production lines a bit later in the evening, which option do you choose?
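To get a feel for the trade-off, here is a back-of-envelope calculation; the plant size and prices below are purely illustrative assumptions, not data from the chart:

```python
# Back-of-envelope comparison: biogas revenue vs. shifting production
# to cheap hours. All numbers are illustrative assumptions.

chp_output_mw = 1.0          # assumed electrical output of a biogas CHP unit
hours_per_week = 7 * 24
flat_price = 35.0            # assumed average price in EUR/MWh

energy_mwh = chp_output_mw * hours_per_week
biogas_value = energy_mwh * flat_price
print(f"Biogas electricity: {energy_mwh:.0f} MWh/week, "
      f"worth about {biogas_value:.0f} EUR at {flat_price} EUR/MWh")

# Shifting a 1 MW production line from a 45 EUR/MWh hour into a
# -10 EUR/MWh hour saves the difference on every shifted MWh.
saving_per_mwh = 45.0 - (-10.0)
print(f"Saving per shifted MWh: {saving_per_mwh:.0f} EUR")
```

A few shifted hours per day can therefore rival the biogas revenue without building anything, which is exactly the dilemma described above.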

So at the moment energy does matter but money matters more.

Portable Python

The work I am doing at the moment involves building lots of models and running lots of simulations. Some of the models are running in Java but most of the analysis and especially the work I am doing on machine learning is taking place in Python.

However, I do not have administrator rights on the computer I use at work, which can make installing the latest Python packages and updates difficult. So I looked around a bit at options for having my Python workspace run completely from USB. This gives me the added advantage of being able to run my projects on different computers (useful when presenting your work to colleagues!).

Portable Python was the first stop when looking for a portable version of Python, and although it is no longer being developed, its website lists alternative options. From this list I chose WinPython, as it is fully portable and already includes all the packages I needed.

WinPython comes with the Qt graphical user interface libraries and the Spyder interactive scientific development environment. I used these for a while, but at home I use PyCharm and I didn’t like switching between environments. So I browsed around looking for anyone else trying to get a portable Python setup with PyCharm, and information was pretty slim. Now that I have my setup running I thought I should share the details; it is pretty simple and works like a charm :)

So here is how you get a portable Python with PyCharm in Windows:

  1. First download the latest versions of WinPython and PyCharm (I downloaded Community Edition).
  2. Once you have both these downloaded, simply install WinPython to your USB stick (for me this is drive G:) or hard drive (this can take a while). It does not require any special permissions unless you want integration with Windows Explorer.
  3. Now, the trickier part is portable PyCharm. To do this you will need to have 7-Zip installed on the computer where you are creating your portable workspace (assuming you are on Windows). 7-Zip can be downloaded for free from here.
  4. Open 7-Zip and go to your downloads directory. Instead of extracting a zip file, you will extract the exe file you downloaded from the PyCharm website. Just click on the downloaded file and extract it to your USB.
  5. The only thing left to do now is open the portable version of PyCharm on your USB stick and tell it to use the portable version of WinPython as the interpreter. The executable can be found in the “PyCharm\bin” folder. The portable python interpreter can be selected as shown below.
  6. However, you may notice that PyCharm is still storing its settings on the computer you are using. So that all information is saved on the USB and you have a truly portable setup, you need to change one more file. The file that needs to be edited is found under “PyCharm\bin\idea.properties”. Open this file with a text editor and change it as follows:
    # Use ${idea.home.path} macro to specify location relative to IDE installation home.
    # Use ${xxx} where xxx is any Java property (including defined in previous lines of this file) to refer to its value.
    # Note for Windows users: please make sure you're using forward slashes (e.g. c:/idea/system).
    # Uncomment this option if you want to customize path to IDE config folder. Make sure you're using forward slashes.
    # idea.config.path=${user.home}/.PyCharmCE/config
    # Uncomment this option if you want to customize path to IDE system folder. Make sure you're using forward slashes.
    # idea.system.path=${user.home}/.PyCharmCE/system
    # Uncomment this option if you want to customize path to user installed plugins folder. Make sure you're using forward slashes.
    # idea.plugins.path=${idea.config.path}/plugins
    # Uncomment this option if you want to customize path to IDE logs folder. Make sure you're using forward slashes.
    # idea.log.path=${idea.system.path}/log

    Here we are uncommenting the first four options and setting the paths to the PyCharm installation home folder, in our case on the USB.
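For example, after uncommenting, the four path options might look like this; using the ${idea.home.path} macro (described in the file's own comments) keeps every path relative to the PyCharm folder on the USB:

```properties
idea.config.path=${idea.home.path}/.PyCharmCE/config
idea.system.path=${idea.home.path}/.PyCharmCE/system
idea.plugins.path=${idea.config.path}/plugins
idea.log.path=${idea.system.path}/log
```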

That’s it! You should now be able to run a completely portable Python workspace with PyCharm as your workspace environment. I hope this helps some more people get up and running with Python and PyCharm! In a future post I will explain how to get a portable Git setup on the same USB so that you have complete version control of your projects and everything is properly backed up.

UPDATE: Article on running a portable Git setup is now posted here!

Industry 4.0 & Water 4.0

‘Industrie 4.0’ is a term that originated in Germany around 2011. It describes the next generation of industrial production based on cyber-physical systems. The National Science Foundation defines a cyber-physical system as:

…the tight conjoining of and coordination between computational and physical resources.  We envision that the cyber-physical systems of tomorrow will far exceed those of today in terms of adaptability, autonomy, efficiency, functionality, reliability, safety, and usability.

NSF 10-515 

The closest thing to ‘Industrie 4.0’ in English has been suggested to be ‘The Internet of Things (IoT)’, which I feel isn’t correct as Industry 4.0 is really, as the name suggests, focused on industry.

So why 4.0? The idea is that we have already had three industrial revolutions. The first was the introduction of mechanical production systems powered by water and steam, such as the automatic loom. The second came when electrically powered assembly lines appeared, massively increasing production output. The third was the implementation of electronics and information technology in the production industry through devices such as the programmable logic controller (PLC). As to why they say 4.0 instead of just 4 or 4th, I’m guessing it sounds more modern and computery. At least they didn’t call it ‘iRevolution 4.0’…



So now Germany wants to bring this same idea to the water industry under the inventive name ‘Wasser 4.0’. There is a problem here, as a book called ‘Water 4.0’ was recently published in America. Professor Sedlak already describes his four revolutions in the water industry in this book: Water 1.0 is the distribution of water in ancient Rome using pipes and canals; Water 2.0 is the treatment of drinking water using filtration and chlorination; Water 3.0 is the development of wastewater treatment plants and sewage networks. This leads to his concept of Water 4.0, covering technologies to deal with water shortages.

I think this is different from what the Germans wish to convey when they speak about ‘Water 4.0’. Their Water 4.0 is simply Industry 4.0 applied to the water industry: the digitalisation and networking of automation and monitoring systems and the introduction of smart technologies in water and wastewater treatment. In this usage there aren’t any Water 1.0s or 2.0s, as Water 4.0 is just a copy of Industry 4.0 but for water.

However, I think there could be an image for Water 4.0 that describes the revolutions in the water industry over the past century in a simplified way. In this concept I would say the first water industry revolution was the usage of chemicals and sedimentation in the treatment of water and wastewater. The second revolution was the discovery of the activated sludge process for wastewater treatment in the UK at the beginning of the 20th century. The third revolution was the implementation of membranes for desalination and wastewater treatment and recycling. The fourth revolution then matches up with that of ‘Industry 4.0’ with the implementation of advanced cyber-physical systems.



In the end, it will probably be another 100 years before we can really look back and say “That was when the 4th revolution occurred in the water and wastewater industry”. At the moment it is still difficult to say what these 4.0 revolutions in industry and in water are even going to mean. Are we going to see a sudden big increase in production and capability? Will everything be automated and everyone out of a job? Is there going to be a big adjustment where we enter a new golden (or dark) age, or is it going to be just another little blip in history where there was lots of talk but not much really changed… Only time will tell.




Cost effective and simple control and automation

This post is a translation of an article originally appearing in SPS-MAGAZIN 8 2016. The original article can be found here.

The Institute of Fluid Mechanics at the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) has developed an automation system that meets the requirements for testing of pilot-scale plants while still giving the flexibility needed for research. The following report describes the system, which is based on Arduino and Raspberry Pi.

Authors: Liam Pettigrew, Rolf Zech, Prof. Dr.-Ing. habil. Antonio Delgado

The Institute of Fluid Mechanics at the FAU in Erlangen was looking for a cost-effective yet easy-to-use solution for automation of laboratory and pilot scale processes. It was decided to develop a system that was based on open-source hardware and software. Open-source hardware systems for automation have been used by hobbyists and developers for many years and cannot be compared with larger industrial control systems. Open-source hardware usually offers only a limited number of digital and analog inputs and outputs, which puts them in the compact controller category. An alternative system that implements existing open-source hardware but removes their limitations was developed and built by engineers and technicians at the Institute of Fluid Mechanics. This system is modular, flexible, extensible, inexpensive and easy to program. The Arduino platform was implemented as the controller for the system.

Design of the lab-scale system. An optional extension is shown for future expandability. Multiple systems can be connected over the Ethernet interface. Connection over serial USB is possible directly with a PC rather than using a Raspberry Pi. The process devices shown are for example only. (Figure: Friedrich-Alexander-Universität Erlangen-Nürnberg)

Industry related research

At universities and in laboratories, students and researchers often need to control and automate processes for research projects under tight deadlines. The students and researchers are specialists in their fields, but have little or no experience with professional PLC hardware and software. It is therefore very difficult for them to familiarize themselves with the complicated instrumentation and software in short periods of time. It is also questionable whether a professional automation system is required for smaller research projects at all. However, industrial partners often wish to see research results that are application-orientated and compatible with industry standards, where an appropriate control system with a high degree of accuracy manages everything.

Quick, easy and cheap

Process automation in research with industry partners must be able to be implemented quickly, so that scientists can concentrate on the process being studied in proper detail. An easy-to-use programming language that is already well known from other applications is a prerequisite. The implementation of the automation system can be further simplified by the free availability of libraries and examples for quickly completing different tasks. Data acquisition and system control should use methods that are compatible with standard software the user does not need to newly learn. A big challenge for smaller research projects is the tight budget and timeline given for the implementation of process control when the research itself is not focused on automation. Therefore, a PLC-based automation solution must be inexpensive, license-free and easily understandable.

Free flow of information

Although commercial solutions for compact control already exist (e.g. Siemens Logo, Rockwell MicroLogix or Eaton easy), they are proprietary, protected systems. Flexible solutions, such as those often needed in research but not typical in industry, require ways to bypass and vary the system. By using open-source hardware and software, problems that occur during research can be solved more easily, with access to system information and the large communities on the internet where ideas and solutions are openly and freely exchanged.

Housing for the Arduino module. The compact design allows for easy access to the USB port, I2C connectors and the 24V power supply. (Photo: Friedrich-Alexander-Universität Erlangen-Nürnberg)

Arduino and Raspberry Pi

The Arduino is a low cost microcontroller board. It was originally developed in 2005 as a teaching tool for students at the Interaction Design Institute Ivrea. The Arduino has since become one of the most popular ‘do-it-yourself’ components in the world. Over 700,000 official boards had been registered when last recorded in 2013. The Raspberry Pi is a low cost single board computer in credit card format, which was developed in 2009 in the UK by the Raspberry Pi Foundation to promote the study of computer science in schools. About 5 million Raspberry Pis had been sold in the three years since their inception, making it the best-selling British computer. The strength of these two modules is the good combination of open-source electronics and software, which includes freely available source code and an easy-to-use and free development environment. Other components produced by companies such as Texas Instruments and Infineon are similarly constructed and can be implemented just as easily.

Modules for control

The system developed by the Institute of Fluid Mechanics is composed of individual modules interconnected via an I2C bus. The Arduino operates as the controller module, while the other modules act as interface circuits for digital or analog inputs and outputs; the modules also act as measurement amplifiers for different sensor technologies. Each module has an address through which it can be controlled: 8 digital output modules containing 8 switching channels each (certain modules contain 16 channels) can switch up to 128 channels. Each module has an I2C bus controller with switch amplifiers, relays and contactors, where small loads of up to 100 mA per channel can be controlled directly. High-side switching stages allow for conventional ‘relay to ground’ wiring. Analog modules can process up to 64 channels using the available addresses, and another I2C bus controller type allows the inputs and outputs to be increased by a further 64 or 128 channels. A multiplexer and bus driver allow the bus system and structure to be further expanded and developed. As can be seen in the pictures, the modules are contained in DIN rail housings with plug-in terminals; the housing width for each module is 22.5mm. The supply voltage can be anywhere between 12 and 30 Volts (typically 24 VDC), meeting electrical control cabinet requirements. Programming the controller and connecting it to a PC is possible via the USB port on the front of the module. A separate module allows a Raspberry Pi to be installed as a server within the cabinet; the server runs custom Java software that sends commands to and collects all data from the distributed Arduino controller modules.
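To illustrate how such per-module addressing can work, here is a small sketch that maps a global output channel to an I2C module address and a bit on that module. The base address 0x20 and the 16-channels-per-module layout are assumptions for the example, not the institute's actual wiring:

```python
# Illustrative mapping of a global output channel to an I2C module
# address and a bit on that module. BASE_ADDR and the channel count
# per module are assumptions, not the actual hardware layout.
BASE_ADDR = 0x20
CHANNELS_PER_MODULE = 16

def locate_channel(global_channel: int) -> tuple[int, int]:
    """Return (i2c_address, bit) for a global channel number."""
    module, bit = divmod(global_channel, CHANNELS_PER_MODULE)
    return BASE_ADDR + module, bit

print(locate_channel(0))    # first module, bit 0
print(locate_channel(127))  # eighth module (address 0x27), bit 15
```

With this scheme, eight 16-channel modules cover the 128 switchable channels mentioned above.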

Finished modules in a control cabinet. The 24V power supply and I2C bus connect the modules on the upper side. Below are the I/O connections for the individual modules. (Photo: Friedrich-Alexander-Universität Erlangen-Nürnberg)

Open PLC without borders

Due to the wide range of addresses available for modules, enough channels are available for larger systems. The modular design allows for flexibility in construction, i.e. the PLC could consist of only analog or digital channels if required. The I2C bus was introduced by the company Philips in the 80s for television systems and is today used in everything from chip card readers and household appliances to flashing lights in the automotive industry. Data transfer rates and bus capacitance (capacitive load) can be limiting factors when using I2C; however, this can be compensated for with suitable components and circuits. The newest generation of Arduino microcontrollers are 32-bit, with much larger memory and higher clock speeds, making them more suitable for complex projects.



Machine learning in the water industry

The past few years have seen a huge surge in interest in artificial intelligence (AI). There are a number of factors that have contributed to this. The large increase in data available and computing power that can crunch it have been two big factors. Researchers in machine learning techniques have also been able to combine existing techniques and algorithms into new methods that can utilize these emerging resources.

It is difficult to avoid the articles on Facebook’s and Google’s usage of artificial intelligence. Magazines and newspapers are flooded with buzzwords and important people talking about the dangers of AI and how Skynet is going to take over the world just as Mr Cameron predicted.

But once we move past the hype and look at what is actually happening, we see that the methods being used are the same as, or very similar to, methods that statistics has used to make predictions from data for over 50 years. These techniques are anything but scary; they are very important tools, providing valuable insights into the large and unwieldy amounts of data we are currently capable of generating.

Obvious existing examples in the water and wastewater industry include prediction of water supply and demand in cities, investigations into potential outbreaks through water supply systems, and environmental impacts of wastewater treatment and disposal. More recently researchers have been focusing on the potential to estimate the effluent quality of wastewater treatment plants using the large amount of data generated by them to train prediction models. The data generated is only going to increase at these plants as new and advanced sensors become more prevalent.

Another interesting aspect is the possibility of learning new water and wastewater treatment strategies from these machine learning algorithms. The classic example is TD-Gammon, where a self-learning algorithm was eventually able to beat even the best players, and went on to change the way people played the game by introducing concepts and strategies for winning that had not been thought of before. This method combined reinforcement learning with a neural network, where the network learns an action strategy given certain state variables and potential rewards.
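The underlying idea can be sketched with plain tabular Q-learning on a toy problem (TD-Gammon itself used a neural network over board states; the five-state ‘corridor’ below is just an illustration of the learning loop):

```python
import random

# Tabular Q-learning on a toy 'corridor': states 0..4, actions move
# left (-1) or right (+1), reward 1 only for reaching state 4.
N_STATES, ACTIONS = 5, (-1, +1)
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):                    # episodes of self-play
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the current value estimates
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy in every state is 'move right'
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)])
          for s in range(N_STATES - 1)}
print(policy)
```

The agent discovers the winning strategy purely from rewards; TD-Gammon did the same over Backgammon positions, just with a neural network standing in for the table.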


In this case the environment is the game of Backgammon, but Google’s DeepMind has used the same basic concept to recently beat top players at the more difficult (for AI, at least) game of Go. Humans are already learning from the tactics the machine uses in board games; could we also learn better ways to treat and distribute clean water from a machine?

The advantage of using old arcade and board games for machine learning is that the environments are easily definable and have strict boundaries and rules. The machine can learn by making hundreds of thousands of errors in simulations before having to take on a real human. Water and wastewater treatment is anything but easily definable! We also can’t let a machine expel millions of liters of untreated wastewater into our rivers and streams for the next hundred years until it learns how to treat the wastewater properly.

The solution is to produce an accurate enough simulation of the treatment plant that a machine can train itself on. However, this is in itself a very difficult problem due to the multitudes of microorganisms that can come into play and the constantly changing composition of wastewaters. Benchmark simulation models exist for testing of control strategies, but even these standardised models require a fair amount of parameter calibration and variable initialisation to obtain a decent representation of the plant to be tested. Perhaps new information becoming available regarding the microorganisms present in these treatment facilities can be used to produce more accurate models with less of the specialised ‘research lab only’ measurements and approximation required for the current generation of models. Maybe the next generation of models will be simulating the entire ecology of the plant right down to the cellular level? Imagine what a machine could learn from that!


ICA to the WWTP

The ARC Advisory Group recognizes the water and wastewater industry as one of the greatest opportunities for automation and control businesses over the next 20 years. Developed countries will require significant investment to improve already aging and outdated systems. Emerging economies and developing countries are also expected to continue investing in new infrastructure to meet the needs of a growing population and increased industrial activity.

Large commercial production industries are able to invest significant amounts in advanced, complete automation and control solutions with properly trained individuals. However, there is a divide between what the wastewater industry, particularly in the developing world, is willing to invest in both education and hardware, and what the current automation industry can provide for that price. Smaller treatment facilities often choose the cheapest solution that can still run basic control strategies with the required human intervention. Advanced techniques for simulation, modeling and analysis are often not considered outside of research institutions due to the higher cost of hardware and software and the required training.

Instrumentation, control and automation (ICA) in wastewater treatment plants (WWTPs) is becoming increasingly complex as the plants and process analysis techniques become more advanced. Online nutrient sensors and other advanced sensor technologies are providing operators with large amounts of data that can enable many improvements within the system and help operators manage the advanced, and often sensitive, processes. ICA has already been demonstrated to increase biological nutrient removal capacity by up to 30%, while furthering the understanding of the mechanisms involved for future improvement.

Despite the availability of cheap computing power, advanced and cheap sensor technologies, universal communication systems and greater usage of process systems techniques in other industries, ICA is still considered a costly addition to the initial design of a wastewater treatment plant, with many of the advanced control systems and sensors still considered too expensive. Training operators to use a specific commercial system is also expensive for smaller treatment facilities, and the trained individuals are then only able to use that specific software. Even then, appropriate data management tools are often not available, restricting efficient use of sensors and analyzers for process control.

Availability of an economic, open and universal control and monitoring system would be especially useful in small, decentralized plants often found in remote rural areas. The open nature of the system would allow for easy access to knowledge, so that problems could be quickly fixed on-site without requiring expert assistance.

In the past few years there has been increasing interest in producing open-source automation systems for smaller tasks. Research has already been conducted into using the Raspberry Pi to directly control some basic tasks on an example water treatment facility. This research showed the possibility of using such a device to directly control sensors and controllers used in an industrial setting where stringent requirements must be met.

Accuracy tests conducted in a laboratory on the Arduino UNO have shown that synchrony across channels is accurate and scaling up the number of channels does not affect accuracy.

The Arduino has already been adapted for access to industrial systems. The Controllino is a device compatible with the Arduino hardware standard and Arduino software that conforms to the EN61010-1, EN61010-2-201 and EN61131-2 standards and allows for 35 mm top-hat (DIN) rail mounting. A Kickstarter crowdfunding campaign to finance the initial production and marketing of the device attracted 191 supporters and successfully raised over US$65,000, showing that there is public interest in such a device.

Could existing devices such as the Raspberry Pi Single Board Computer and the range of Arduino microcontrollers be combined into an open-source automation system with the stability, safety and security required for a full-scale wastewater treatment plant?

If this idea seems too far-fetched, how about opening up such a system to the developing world, where more and smaller water and wastewater treatment plants are required? Of course, most of the work in developing countries focuses on reliable and clean water with a minimum of technical equipment, which can break and needs maintenance by knowledgeable technicians. But what if the technical equipment could be programmed, fixed, operated and controlled by anyone with access to the internet and the multitude of forums and tutorials on programming and wiring such equipment? What if all that is needed is a cheap mobile phone to control and operate the plant? Any problem can be answered by the hundreds of thousands of enthusiasts online (the arduino.cc forum alone has nearly 400 000 users asking and answering questions), always willing to offer help and advice on the technical hurdles met by others willing to learn.

There must be somewhere in between

Maybe I am living in a dream world where developing countries and rural communities can have access to the more advanced systems for treating water and wastewater: systems that require proper instrumentation and automation and that are, in their current state, simply too complex and expensive for something as unprofitable as our waste and its effect on the environment.


Treatment or Recovery

There has been a trend in recent years towards renaming some standard terms in the wastewater industry. Wastewater Treatment Plants (WWTP) are now increasingly being called Wastewater Recovery Plants (WWRP), and wastewater sludge should be referred to as biosolids, for example. Some wish to change the name of wastewater itself!

So why is this happening?

Well, the most interesting components in typical municipal wastewater (we will keep calling it that for now) are the organics, measured as chemical oxygen demand (COD), i.e. how much oxygen is needed to remove them, along with ammonium and phosphate. We want to prevent these components from getting into our rivers and streams. But if, instead of just looking at how to get rid of them, we look at how we might extract and use them, we start to see how the term recovery can be used instead of just treatment.

So why are these components worth recovering?

The COD can be converted straight away into usable energy as biogas, and ammonium and phosphate are important ingredients for fertilizers.

A term that was exciting a few years ago, but seems to have dropped off most people's radar around the same time they forgot about the whole ‘peak oil’ thing, is ‘peak phosphorus’.

Looking at the above graph, it is interesting to see that although there is a downward trend in searches for ‘peak phosphorus’, the large peaks in interest in the first few years after 2004 were followed by stretches of no one searching at all, while more recently there seems to be a more sustained, albeit lower, interest.

I hope that this means there are still people keeping an eye on the ‘disappearing nutrient’. In the wastewater industry there are definitely people looking at all sorts of ways of recovering the phosphorus we are flushing down our toilets. Things like ion exchangers, membranes, electrochemistry and algae are all being investigated as possible methods for recovery.

I personally like the idea of capturing the phosphorus and nitrogen in algae. This algae can then be directly used as a fertilizer on crops. But then we don’t call it fertilizer anymore, we call it Biofertilizer…



Wastewater treatment sans oxygen

Anaerobic digestion is the term used to describe the treatment of wastewater in an oxygen-free environment. Normally, municipal wastewater is treated with microorganisms mixed through the liquid with lots of air (i.e. aerobic treatment). The oxygen provides lots of energy for the organisms to grow and quickly consume nutrients in the waste, which prevents the nutrients from entering rivers and streams where they can upset the ecological balance of the environment. For the same amount of nutrients consumed, the total mass of microorganisms produced will be roughly ten times greater when they are supplied with oxygen than when they are not.
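To put a rough number on that ten-fold difference, here is a minimal sketch using typical textbook biomass yield coefficients. The 0.4 and 0.04 g/g values and the 1 000 kg load are illustrative round numbers, not measurements from any particular plant:

```python
# Illustrative sludge-production comparison, aerobic vs anaerobic treatment.
# Yield coefficients are assumed round values: grams of biomass produced
# per gram of COD removed.
Y_AEROBIC = 0.4     # typical order of magnitude for activated sludge
Y_ANAEROBIC = 0.04  # typical order of magnitude for anaerobic digestion

cod_removed_kg = 1000  # hypothetical COD load treated

aerobic_sludge = Y_AEROBIC * cod_removed_kg      # biomass produced with oxygen
anaerobic_sludge = Y_ANAEROBIC * cod_removed_kg  # biomass produced without

print(f"Aerobic:   {aerobic_sludge:.0f} kg biomass")    # 400 kg
print(f"Anaerobic: {anaerobic_sludge:.0f} kg biomass")  # 40 kg
print(f"Ratio:     {aerobic_sludge / anaerobic_sludge:.0f}x")
```

The exact yields vary with the organisms and operating conditions, but the order-of-magnitude gap is the point: far less of the waste ends up as new sludge when no oxygen is supplied.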

It is this huge mass of microorganisms, combined with left-over solids, that is becoming an increasing problem in the developing world. Developing countries are treating more and more of their wastewater (which is good) using the same aerobic treatment systems that have been typical in the developed world for the last 100 years. These systems need lots of energy to pump oxygen into the waste and produce a huge amount of highly concentrated sludge that is either dumped into a landfill, which just moves the problem around without really solving it, or incinerated.

Lots of research is now being dedicated to analysing the energy efficiency of treating the sludge using anaerobic digestion. But why are we making these systems more and more complicated? Let's look at the path the wastewater can take from when it first arrives at the treatment plant to when the different parts all exit the plant.

Steps in a typical wastewater treatment process

We are pumping energy into the wastewater (aeration) and then trying to remove it all again straight away in our sludge treatment step. Why do we add all that energy to our wastewater in the first place? Why not just remove the energy that is already contained in it and use it for something else?

Two problems. First, anaerobic digestion is slow. All that energy being pumped into the wastewater is used to quickly remove the nutrients. If we aren't pumping energy into our system in the form of oxygen, the microorganisms must rely on the energy already available in the wastewater, so they grow and reproduce much more slowly. Second, the microorganisms currently used in anaerobic digestion do not remove nitrogen from the wastewater. This is bad, because the nitrogen can be used by things like algae in our rivers and streams, causing all sorts of problems.

However, these two problems are being addressed. The slowness of anaerobic digestion can be improved by clever system design and control. By pumping the wastewater around the system very quickly and increasing mixing with the microorganisms, great efficiency improvements can be achieved. The catch is that more energy is then needed to pump the wastewater around quickly and mix everything really well, so we are just putting more and more energy into the system again! Clever system design is therefore required to reduce both the time and the energy needed for treatment.

Regarding the second problem, at the end of the 1990s researchers at Delft University in the Netherlands discovered bacteria that can convert nitrogen in wastewater (in the form of ammonium and nitrite) into nitrogen gas without needing oxygen (the anammox process). This can mean huge energy savings and reduced greenhouse gas emissions. It could also mean a completely anaerobic wastewater treatment system that either requires much less energy than current wastewater treatment plants or, in the best case, actually produces energy from our waste rather than consuming more energy to treat it.


Energy in waste

A by-product of municipal wastewater treatment plants is waste sludge. This sludge holds potential energy that we might be able to use. Focusing on Europe in this post, we can look at the graph below to see how much wastewater is treated by different countries.


We can then compare those numbers with the amount of biogas each country also produces.


Germany dominates here. However, most of this biogas is produced by co-digesting energy crops, such as corn grown on viable farmland. Would it be possible to produce this much energy from the waste we produce anyway?

Let’s keep using Germany as an example. In 2013, Germany had a population of 80.6 million people. After treating all of their waste, 1.8 million tonnes of sewage sludge remained. If we read up on what the United Nations has to say on excreta and wastewater sludges, we see that, being optimistic, we can expect over 5 kWh of energy per kg from this sludge just by burning it. That means by burning all the crap Germany is producing we would create 9 000 000 MWh, or enough to power around 300 000 of their homes.

However, Germany’s sludge production is actually not as high as it might be, because many municipal wastewater treatment facilities treat the sludge in biogas plants before incineration, which reduces the amount of sludge by up to 50%. So let’s now look at how much energy they can pull out of the sludge in the form of methane before incinerating it.

An example biogas plant produces around 4 000 MWh of energy per year from the wastewater sludge generated by a city of approximately 100 000 people. Scaled to the whole population, that gives another 3 224 000 MWh per year of energy from biogas. That is another 100 000 homes!

Ok, so that only gets us to around 1 million tonnes of oil equivalent, which is well under what Germany is currently producing using energy crops. Also, all that energy usually gets fed back into the plant to treat the initial influent (part of the energy is used for heating the biogas plant, but a much greater amount is required for powering the initial treatment plant’s aeration systems). So actually we have no homes being powered by crap…
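The back-of-envelope arithmetic above can be reproduced in a few lines. The 5 kWh/kg heating value, the 4 000 MWh per 100 000-person biogas plant, and the conversion factor of 11.63 MWh per tonne of oil equivalent are the rough assumptions used in this post, not precise figures:

```python
# Back-of-envelope check of the Germany sludge-energy estimates.
population = 80.6e6     # Germany, 2013
sludge_tonnes = 1.8e6   # sewage sludge remaining per year
kwh_per_kg = 5          # optimistic heating value of the dried sludge

# Energy from incinerating all the sludge (kWh converted to MWh)
incineration_mwh = sludge_tonnes * 1000 * kwh_per_kg / 1000
print(f"Incineration: {incineration_mwh:,.0f} MWh")  # 9,000,000 MWh

# Energy from biogas: ~4 000 MWh/year per plant serving ~100 000 people
biogas_mwh = population / 100e3 * 4000
print(f"Biogas:       {biogas_mwh:,.0f} MWh")        # 3,224,000 MWh

# Total in tonnes of oil equivalent (1 toe = 11.63 MWh)
total_toe = (incineration_mwh + biogas_mwh) / 11.63
print(f"Total:        {total_toe / 1e6:.2f} million toe")  # ~1.05 Mtoe
```

Which confirms the ballpark: roughly 1 million tonnes of oil equivalent per year, before any of it is fed back into running the plants themselves.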

But new research is investigating how we can get rid of these aeration systems and treat all of the wastewater using robust and energy efficient variations of the biogas plant.

This might be a bit optimistic! But having wastewater treatment plants that are net energy producers rather than energy consumers could become a real possibility in the near future.