Sunday 5 January 2014

Breathing Detection using Kinect and OpenCV - Part 2 - Peak detection

A few days ago I published a post about how I am using a Microsoft Kinect depth camera and the OpenCV image processing library to identify a test subject from a background, and analyse the series of images from the camera to detect small movements.

The next stage is to calculate the brightness of the test subject in each frame and turn that into a time series, so we can see how it changes over time and analyse it to detect specific events.

We can use the OpenCV 'mean' function to work out the average brightness of the test image easily, then add the result onto the end of an array and trim the first value off the start to keep the length the same.
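
A minimal sketch of that step is below, assuming the masked subject image from part 1 is available as a single-channel NumPy array - the names and buffer length are illustrative, not the project's actual code:

    import cv2

    SERIES_LENGTH = 300        # e.g. roughly 10 seconds of frames at 30 fps
    brightness_series = []

    def update_series(subject_img):
        # cv2.mean returns a per-channel tuple; for a single-channel image
        # only the first element is meaningful.
        brightness = cv2.mean(subject_img)[0]
        brightness_series.append(brightness)
        # Trim the oldest value so the series keeps a fixed length.
        if len(brightness_series) > SERIES_LENGTH:
            brightness_series.pop(0)
        return brightness_series
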
The resulting image and time series are shown below:

The image here shows that we can extract the subject from the background quite accurately (this is Benjamin's body and legs as he lies on the floor).  The shading shows the movement relative to the average position.

The resulting time series is shown here - the measured data is the spiky blue line, and the red line is the smoothed version (I know there is a half-second offset between the two...).

The red dots are peaks detected using a very simple peak-searching algorithm.
The chart clearly shows a 'fidget' being detected as a large peak, and a breathing event at about 8 seconds has been detected too.
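
For illustration, here is a minimal sketch of the sort of smoothing and peak search I mean - the window size and threshold are made-up values, not the ones used in the project:

    import numpy as np

    def smooth(series, window=15):
        # Trailing moving average: each point is the mean of the previous
        # `window` samples, which is what puts the smoothed curve slightly
        # behind the raw one (the offset mentioned above).
        series = np.asarray(series, dtype=float)
        out = np.empty_like(series)
        for i in range(len(series)):
            out[i] = series[max(0, i - window + 1):i + 1].mean()
        return out

    def find_peaks(series, threshold=0.5):
        # A sample counts as a peak if it is a local maximum and stands out
        # from the overall mean by more than `threshold`.
        mean = np.mean(series)
        return [i for i in range(1, len(series) - 1)
                if series[i] > series[i - 1]
                and series[i] >= series[i + 1]
                and series[i] - mean > threshold]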

So, the detection system is looking promising.  I had better breathing detection when I was testing it on myself, so I think I will have to change the position of the camera a bit to improve sensitivity.

I have now set up a simple Python-based web server so that other applications can connect to this one and request the data.
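
As an illustration of the idea, something like this can be done with the standard library alone - the server in the project itself may be structured differently, and the data fields shown here are placeholders:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Placeholder for whatever the analysis loop last produced.
    latest_data = {'status': 'unknown', 'brightness_series': []}

    class DataHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Return the latest readings as JSON to any client that asks.
            body = json.dumps(latest_data).encode('utf-8')
            self.send_response(200)
            self.send_header('Content-Type', 'application/json')
            self.send_header('Content-Length', str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == '__main__':
        HTTPServer(('', 8080), DataHandler).serve_forever()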

We are getting there.  The outstanding issues are:

  • Memory leak - after the application has run for 30 minutes the computer gets very slow and eventually crashes.  I suspect a memory leak somewhere; this will have to be fixed!
  • Optimum camera position - I think I can get better breathing-detection sensitivity by altering the camera position; I will have to experiment a bit.
  • Add some code to identify whether we are looking at Benjamin or just noise - at the moment I analyse the largest bright object in the image and assume that is Benjamin.  I should probably have a minimum size limit so it gives up if it cannot see Benjamin (see the sketch after this list).
  • Summarise what we are seeing automatically - "normal breathing", "can't see Benjamin", "abnormal breathing", "fidgeting" etc.
  • Modify the monitors we use to keep an eye on Benjamin so that they talk to the new web server, display the status messages, and raise an alarm if necessary.
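
As a rough sketch of that minimum-size check, assuming the thresholded subject mask is available as a binary image and the subject is taken to be the largest contour (the names and area threshold are illustrative only):

    import cv2

    MIN_AREA = 5000  # pixels - below this, assume the subject is not visible

    def find_subject(mask):
        # findContours' return signature varies between OpenCV versions;
        # the contour list is always the second-to-last element.
        contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)[-2]
        if not contours:
            return None
        largest = max(contours, key=cv2.contourArea)
        if cv2.contourArea(largest) < MIN_AREA:
            return None   # too small - report "can't see Benjamin"
        return largest
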
The code is available here.