S5D9 Anomaly Monitoring – Quick Start Guide

LEVEL 1: BEGINNER

This tutorial shows you how to extend the S5D9 Fast Prototyping Kit Quick Start project with anomaly monitoring workflows.  It continues from the Diagnostics Intelligence tutorial found here.

When an anomaly is detected, an alert event is generated and reported in the daily reports.

Important Note: This only works on Windows computers!

What you need to get started:

  • Renesas S5D9 IoT Fast Prototyping Kit (Order Here)
  • Renesas IoT Sandbox Getting Started Guide (Optional)
  • diagnostics-intelligence-s5d9.srec file – Download (Last Updated 7/26/2017)

Prerequisite:

  • S5D9 Quick Start Guide (here)

 

Step 1: Complete the S5D9 Diagnostics Intelligence Quick Start Guide

This tutorial will continue from the Quick Start Guide found here. By default, the board transmits data once every 15 minutes. Follow Appendix A to increase the data transmission rate to once per minute.  Power cycle your board when complete.

Please have your board plugged in and connected to your Renesas IoT Sandbox project with active connection before continuing.

 

Step 2: Create ‘normal_range’ stream

First, we will create a new events stream called “normal_range” which will be used to save the normal sensor ranges.

Click on Config on the left panel, then Data Streams.  Then click on Create New Stream.

On the next screen, type normal_range in the name field and click on Save Data Stream.

Checkpoint

You should see the newly created stream on your list of data streams on the following screen.

 

Step 3: Create Workflow “Update Normal Ranges”

Next, you will create a workflow to monitor the temperature and vibration data that is received once per minute (after the Appendix A change in Step 1). This workflow uses the sensor data to update the normal ranges over a long period of time.

Go to Workflow Studio and click on Create Workflow.  

Name the workflow “Update Normal Ranges”

Click on the Tags and Triggers icon on the right canvas and click on “raw”.

Drag the following tags to the canvas as shown below:

Drag the Base Python module to the canvas.

Double click on the Base Python box.  Expand the Inputs/Outputs label to reveal additional buttons.  Click “Add Input” until you see a total of 6 inputs on the row.   Then click Save at the bottom of the Base Python window.

 

 

Next, add output event modules to the canvas. Click on “Outputs” on the right and drag “Processed Stream – Single” to the canvas.

Connect the blocks as shown:

Click on one Output box and edit the data streams as shown below.  Be sure to click “Save”.  

Double click on Base Python and add the following python code.  Click “Save and Activate”.

'''
Description:
This program calculates the normal ranges by analyzing the last 60 days of events

Step 1: Query the last 60 days of values for each sensor
Step 2: Filter outlier buckets from each list
Step 3: Compare the new normal range to the last one and create an update event if it changed

Configuration:
Thresholds (in percent) used by the outlier filtering:
OUTLIER_FILTER_BIN_COUNT_PCT = 10
OUTLIER_FILTER_TOTAL_PCT = 10

This is the search space used to determine the normal range, in days
HISTORY_DATA_RANGE = 60

MINIMUM_DATA_POINTS: minimum number of data points before outlier filtering is applied

Last Updated: May 1, 2017

Author: Medium One
'''

OUTLIER_FILTER_BIN_COUNT_PCT = 10
OUTLIER_FILTER_TOTAL_PCT = 10
HISTORY_DATA_RANGE = 60
MINIMUM_DATA_POINTS = 1

import Analytics
import DateRange
import datetime
import Filter
from datetime import timedelta

# This function sorts a dictionary so it can be compared
def ordered(obj):
    if isinstance(obj, dict):
        return sorted((k, ordered(v)) for k, v in obj.items())
    if isinstance(obj, list):
        return sorted(ordered(x) for x in obj)
    else:
        return obj

# This function returns key stats on a list of bins:
# [event count, number of buckets, min value, max value]
def get_list_stats(buckets):
    total_count = 0
    total_buckets = len(buckets)
    absolute_min = None
    absolute_max = None
    for item in buckets:
        total_count += item['count']
        if absolute_min is None or item['value'] < absolute_min:
            absolute_min = item['value']
        if absolute_max is None or item['value'] > absolute_max:
            absolute_max = item['value']

    if isinstance(absolute_min, float):
        absolute_min = round(absolute_min, 2)
    if isinstance(absolute_max, float):
        absolute_max = round(absolute_max, 2)
    return [total_count, total_buckets, absolute_min, absolute_max]

# This function reduces a list of bins by removing outlier buckets
def filter_outlier(buckets):
    buckets.reverse()
    bin_count_tally = 0
    percentage_tally = 0
    total_buckets = len(buckets)
    if total_buckets == 0:
        return
    for item in buckets[:]:

        # exit if bin_count_tally reached the threshold outlier limit (in percent)
        if bin_count_tally * 100.0 / total_buckets >= OUTLIER_FILTER_BIN_COUNT_PCT:
            log("exiting loop, bucket limit reached")
            break

        # exit if percentage_tally reached the outlier threshold limit
        if percentage_tally + item['percent'] >= OUTLIER_FILTER_TOTAL_PCT:
            log("exiting loop, percentage_tally reached")
            break

        # remove list item and update tally
        buckets.remove(item)
        bin_count_tally += 1
        percentage_tally += item['percent']
       
# get last normal event
try:
    last_normal_range_event = Analytics.events('normal_range', 
                             Filter.string_tag('processed.normal_range'),
                             None, 1, ['event_rcv', 'DESC'])
except Exception:
    last_normal_range_event = {}
    
# check if one exists, otherwise initialize dict
if len(last_normal_range_event) > 0:
    last_normal_range_event = last_normal_range_event[0]['event_data']['normal_range']
else:
    last_normal_range_event = {}

# set query window 
daterange = DateRange.date_range(datetime.datetime.utcnow() - timedelta(days=HISTORY_DATA_RANGE), datetime.datetime.utcnow() )

# query bins 
humidity_list = Analytics.bin_by_value("raw.humidity.avg", daterange)
pressure_list = Analytics.bin_by_value("raw.pressure.avg", daterange)
temperature_list = Analytics.bin_by_value("raw.temp3.avg", daterange)
x_accel_list = Analytics.bin_by_value("raw.x_accel.avg", daterange)
y_accel_list = Analytics.bin_by_value("raw.y_accel.avg", daterange)
z_accel_list = Analytics.bin_by_value("raw.z_accel.avg", daterange)

# find key stats for each sensor type
initial_humidity_list_stats = get_list_stats(humidity_list)
initial_pressure_list_stats = get_list_stats(pressure_list)
initial_temperature_list_stats = get_list_stats(temperature_list)
initial_x_accel_list_stats = get_list_stats(x_accel_list)
initial_y_accel_list_stats = get_list_stats(y_accel_list)
initial_z_accel_list_stats = get_list_stats(z_accel_list)

log("initial_humidity_list_stats "+str(initial_humidity_list_stats))
log("initial_pressure_list_stats "+str(initial_pressure_list_stats))
log("initial_temperature_list_stats "+str(initial_temperature_list_stats))
log("initial_x_accel_list_stats "+str(initial_x_accel_list_stats))
log("initial_y_accel_list_stats "+str(initial_y_accel_list_stats))
log("initial_z_accel_list_stats "+str(initial_z_accel_list_stats))

# filter outliers from stats list
filter_outlier(humidity_list)
filter_outlier(pressure_list)
filter_outlier(temperature_list)
filter_outlier(x_accel_list)
filter_outlier(y_accel_list)
filter_outlier(z_accel_list)

# save new filtered stats
filtered_humidity_list_stats = get_list_stats(humidity_list)
filtered_pressure_list_stats = get_list_stats(pressure_list)
filtered_temperature_list_stats = get_list_stats(temperature_list)
filtered_x_accel_list_stats = get_list_stats(x_accel_list)
filtered_y_accel_list_stats = get_list_stats(y_accel_list)
filtered_z_accel_list_stats = get_list_stats(z_accel_list)

log("filtered_humidity_list_stats "+str(filtered_humidity_list_stats))
log("filtered_pressure_list_stats "+str(filtered_pressure_list_stats))
log("filtered_temperature_list_stats "+str(filtered_temperature_list_stats))
log("filtered_x_accel_list_stats "+str(filtered_x_accel_list_stats))
log("filtered_y_accel_list_stats "+str(filtered_y_accel_list_stats))
log("filtered_z_accel_list_stats "+str(filtered_z_accel_list_stats))

normal_range = {}

# build normal range dict / json
# If the initial count is < MINIMUM_DATA_POINTS, use the unfiltered stats; otherwise use the filtered stats
if initial_humidity_list_stats[2] is not None and initial_humidity_list_stats[0] < MINIMUM_DATA_POINTS:
    # set min and max normal range
    normal_range['humidity'] = [initial_humidity_list_stats[2],initial_humidity_list_stats[3]]
elif initial_humidity_list_stats[2] is not None:
    normal_range['humidity'] = [filtered_humidity_list_stats[2],filtered_humidity_list_stats[3]]

if initial_pressure_list_stats[2] is not None and initial_pressure_list_stats[0] < MINIMUM_DATA_POINTS:
    normal_range['pressure'] = [initial_pressure_list_stats[2],initial_pressure_list_stats[3]]
elif initial_pressure_list_stats[2] is not None:
    normal_range['pressure'] = [filtered_pressure_list_stats[2],filtered_pressure_list_stats[3]]

if initial_x_accel_list_stats[2] is not None and initial_x_accel_list_stats[0] < MINIMUM_DATA_POINTS:
    normal_range['x_accel'] = [initial_x_accel_list_stats[2],initial_x_accel_list_stats[3]]
elif initial_x_accel_list_stats[2] is not None:
    normal_range['x_accel'] = [filtered_x_accel_list_stats[2],filtered_x_accel_list_stats[3]]

if initial_y_accel_list_stats[2] is not None and initial_y_accel_list_stats[0] < MINIMUM_DATA_POINTS:
    normal_range['y_accel'] = [initial_y_accel_list_stats[2],initial_y_accel_list_stats[3]]
elif initial_y_accel_list_stats[2] is not None:
    normal_range['y_accel'] = [filtered_y_accel_list_stats[2],filtered_y_accel_list_stats[3]]

if initial_z_accel_list_stats[2] is not None and initial_z_accel_list_stats[0] < MINIMUM_DATA_POINTS:
    normal_range['z_accel'] = [initial_z_accel_list_stats[2],initial_z_accel_list_stats[3]]
elif initial_z_accel_list_stats[2] is not None:
    normal_range['z_accel'] = [filtered_z_accel_list_stats[2],filtered_z_accel_list_stats[3]]

if initial_temperature_list_stats[2] is not None and initial_temperature_list_stats[0] < MINIMUM_DATA_POINTS:
    normal_range['temperature'] = [initial_temperature_list_stats[2],initial_temperature_list_stats[3]]
elif initial_temperature_list_stats[2] is not None:
    normal_range['temperature'] = [filtered_temperature_list_stats[2],filtered_temperature_list_stats[3]]

# only update the normal range if different
if ordered(last_normal_range_event) != ordered(normal_range):
    IONode.set_output('out1', {'normal_range': normal_range})
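To see how the outlier filtering behaves, here is a self-contained sketch of the same logic that runs outside the sandbox (the bucket-count limit is expressed in percent, and the sample bin values are made up for illustration):

```python
# Standalone sketch of the outlier filter, runnable outside the sandbox.
# Each bin mirrors the shape returned by Analytics.bin_by_value:
# a bucket value, an event count, and that bucket's share of all events.
OUTLIER_FILTER_BIN_COUNT_PCT = 10
OUTLIER_FILTER_TOTAL_PCT = 10

def filter_outlier(buckets):
    buckets.reverse()  # walk from the highest-value buckets down
    bin_count_tally = 0
    percentage_tally = 0
    total_buckets = len(buckets)
    for item in buckets[:]:
        # stop once 10% of the buckets have been dropped
        if bin_count_tally * 100.0 / total_buckets >= OUTLIER_FILTER_BIN_COUNT_PCT:
            break
        # stop once the dropped buckets would cover 10% of all events
        if percentage_tally + item['percent'] >= OUTLIER_FILTER_TOTAL_PCT:
            break
        buckets.remove(item)
        bin_count_tally += 1
        percentage_tally += item['percent']

# Hypothetical temperature bins: one sparse high bucket is the outlier.
bins = [
    {'value': 22.0, 'count': 50, 'percent': 45.0},
    {'value': 23.0, 'count': 40, 'percent': 36.0},
    {'value': 24.0, 'count': 16, 'percent': 14.5},
    {'value': 31.0, 'count': 5,  'percent': 4.5},
]
filter_outlier(bins)
print([b['value'] for b in bins])  # the sparse 31.0 bucket is removed
```

Only the thinly populated 31.0 bucket is dropped; the dense buckets survive and define the normal range.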

Checkpoint

Go to Data Viewer -> Data Streams -> normal_range

Next, you should see at least one event listed.  If not, you may need to wait up to 1 minute for the next event from the board to trigger this workflow.

This event is the normal range generated by the workflow we just created.  You can see an example of the JSON format of this event.  

You can also add columns by clicking on “configure” on the top right.  Then select these tags and click “Save”.

In the example above: the normal range for temperature is 22.8 to 29.36 degrees C.
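Based on the workflow code, a normal_range event carries a min/max pair per sensor. A hypothetical event body is sketched below; the key names match what the “Update Normal Ranges” workflow emits, but the numbers (other than the temperature range shown above) are illustrative only:

```python
# Hypothetical normal_range event body; keys mirror the workflow's output,
# numeric values are illustrative.
normal_range_event = {
    "normal_range": {
        "temperature": [22.8, 29.36],   # degrees C, [min, max]
        "humidity": [30.1, 45.7],
        "pressure": [99.2, 101.5],
        "x_accel": [-0.12, 0.15],
        "y_accel": [-0.10, 0.11],
        "z_accel": [0.92, 1.05],
    }
}
print(sorted(normal_range_event["normal_range"]))
```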

Congrats, you’ve just created your first workflow!

 

Step 4: Create Temperature Monitoring Workflow

Following a similar process to the previous step, we will create a workflow called “Temperature Monitoring” as shown here:

Double click on Base Python and add the following code:

 

'''
Description:  This workflow monitors the temperature value and checks whether it is outside
the normal range.  If so, an alert event is generated.

Date: May 1, 2017

Author: Medium One
'''

import Analytics
import Filter
import Store
import json

outputMsgList = []
temperature = IONode.get_input('in1')['event_data']['value']

#~~~~~~~~~~~~~~~~~~~~~~~
#
# Get Last Normal Range
#
#~~~~~~~~~~~~~~~~~~~~~~~~

# Get the normal range needed for one of the rules
normal_range = Analytics.events('normal_range', 
                             Filter.string_tag('normal_range.normal_range.temperature'),
                             None, 1, ['event_rcv', 'DESC'])

#~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Process each rule
#
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# check if normal range exists
if len(normal_range) > 0:
                
    # check that the normal range contains a temperature entry
    if 'temperature' in normal_range[0]['event_data']['normal_range']:
        min_threshold = normal_range[0]['event_data']['normal_range']['temperature'][0] 
        max_threshold = normal_range[0]['event_data']['normal_range']['temperature'][1]

        last_state = Store.get("temperature_outside_normal")
        if last_state is None:
            last_state = "false"
            
        if temperature < min_threshold or temperature > max_threshold: 
            log("Outside range")
            if last_state != "true":
                IONode.set_output('out1', {"alert": "Temperature outside normal range"})
                Store.set_data("temperature_outside_normal","true",-1)
                log("Alert transmitted")
        else:
            log("Inside range")
            if last_state != "false":
                Store.set_data("temperature_outside_normal","false",-1)
            log("Alert not transmitted")
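The Store-based state check above implements a simple edge trigger: an alert fires only when the temperature crosses from inside to outside the normal range, not on every out-of-range reading. Here is a self-contained sketch of that pattern, with a plain dict standing in for the sandbox's Store module and made-up thresholds:

```python
# Edge-trigger sketch: alert only on the inside -> outside transition.
# A plain dict stands in for the sandbox's Store module.
store = {}

def check_temperature(temperature, min_threshold=20.0, max_threshold=30.0):
    last_state = store.get("temperature_outside_normal", "false")
    outside = temperature < min_threshold or temperature > max_threshold
    alert = False
    if outside:
        if last_state != "true":
            alert = True  # first out-of-range reading since we were in range
            store["temperature_outside_normal"] = "true"
    else:
        if last_state != "false":
            store["temperature_outside_normal"] = "false"
    return alert

readings = [25.0, 35.0, 36.0, 25.0, 40.0]
print([check_temperature(t) for t in readings])
# only the two crossings alert: [False, True, False, False, True]
```

The second consecutive out-of-range reading (36.0) produces no alert, which is exactly why the workflow stores the last state instead of alerting on every event.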

 

Checkpoint

Let’s open the debugger to confirm this workflow successfully executed.

Click on the debugger tool on the right panel to enable debugger mode.  Note: debugger mode uses 1 extra workflow credit so be sure to disable it when it is no longer required.

Refresh the logs after 1 minute and you should see a new log appear.  If so, you’ve successfully executed this workflow!  If you see an error log, repeat this step, as there is likely a problem with your Python code (e.g., bad syntax from copy and paste).  If there are no logs, the board may not be connected or transmitting data.

Double click on the most recent log to display the log output from the Python code.

 

Step 5: Create “Vibration Monitoring” Workflow

Similar to the previous step, create a workflow to monitor the vibration as shown.

Add the following python code to Base Python, click Save and Activate.  This workflow has similar functionality to the prior one.

 

import Analytics
import Filter
import Store
import json

outputMsgList = []
x_accel = IONode.get_input('in1')['event_data']['value']
y_accel = IONode.get_input('in2')['event_data']['value']
z_accel = IONode.get_input('in3')['event_data']['value']

#~~~~~~~~~~~~~~~~~~~~~~~
#
# Get Last Normal Range
#
#~~~~~~~~~~~~~~~~~~~~~~~~

# Get the normal range needed for one of the rules
normal_range = Analytics.events('normal_range', 
                             Filter.string_tag('normal_range.normal_range.x_accel'),
                             None, 1, ['event_rcv', 'DESC'])

#~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Process each rule
#
#~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# check if normal range exists
if len(normal_range) > 0:
                
    # check that the normal range contains all three axes
    if all(axis in normal_range[0]['event_data']['normal_range'] for axis in ('x_accel', 'y_accel', 'z_accel')):
        x_accel_min_threshold = normal_range[0]['event_data']['normal_range']['x_accel'][0] 
        x_accel_max_threshold = normal_range[0]['event_data']['normal_range']['x_accel'][1]

        y_accel_min_threshold = normal_range[0]['event_data']['normal_range']['y_accel'][0] 
        y_accel_max_threshold = normal_range[0]['event_data']['normal_range']['y_accel'][1]

        z_accel_min_threshold = normal_range[0]['event_data']['normal_range']['z_accel'][0] 
        z_accel_max_threshold = normal_range[0]['event_data']['normal_range']['z_accel'][1]

        last_state = Store.get("vibration_outside_normal")
        if last_state is None:
            last_state = "false"
            
        if x_accel < x_accel_min_threshold or x_accel > x_accel_max_threshold \
            or y_accel < y_accel_min_threshold or y_accel > y_accel_max_threshold \
            or z_accel < z_accel_min_threshold or z_accel > z_accel_max_threshold :
            log("Outside range")
            if last_state != "true":
                IONode.set_output('out1', {"alert": "Vibration outside normal range"})
                Store.set_data("vibration_outside_normal","true",-1)
                log("Alert transmitted")
        else:
            log("Inside range")
            if last_state != "false":
                Store.set_data("vibration_outside_normal","false",-1)
            log("Alert not transmitted")
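The three-axis condition above reads: alert when any axis falls outside its own range. A compact, hypothetical helper expressing the same test (the range and sample dicts mirror the shapes used in the workflow):

```python
AXES = ('x_accel', 'y_accel', 'z_accel')

def vibration_outside(ranges, sample):
    # ranges: {'x_accel': [lo, hi], ...} as stored in the normal_range event
    # sample: {'x_accel': value, ...} from the current raw event
    return any(not (ranges[a][0] <= sample[a] <= ranges[a][1]) for a in AXES)

# Illustrative ranges: board at rest sees ~1 g on the z axis only.
ranges = {'x_accel': [-0.2, 0.2], 'y_accel': [-0.2, 0.2], 'z_accel': [0.8, 1.2]}
print(vibration_outside(ranges, {'x_accel': 0.0, 'y_accel': 0.1, 'z_accel': 1.0}))  # False
print(vibration_outside(ranges, {'x_accel': 0.5, 'y_accel': 0.1, 'z_accel': 1.0}))  # True
```

A single axis out of range is enough to trip the check, matching the chained `or` conditions in the workflow.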

 

Checkpoint

Repeat the debugger check from Step 4.  You should see a successful log.  Try placing the board on a vibrating surface or shaking the board by hand for at least 1 minute.  This will force an anomaly message to be created.

 

Step 6: Add “Alerts” table to the Dashboard

Click on Dashboard

 

Scroll to the bottom of the page and add a Single User Table widget.

A table is added to your Dashboard.  Select your device and click the gear icon.

Select the processed.alert tag.  This is the event generated by our previous two workflows. If you do not see it right away, you may need to do a hard refresh on the page.

If your workflows created anomaly alerts, they will be shown here.  If you don’t see any alerts, try increasing the temperature of the board or its vibration.

Save your Dashboard view on the top of the page.

 

Step 7: Update Daily Email Reports

In this last step, we will update our Daily Email reports to include alert notifications.  Open the Daily Report workflow and replace the python with this code (here).

Checkpoint

We will manually trigger this workflow to generate an email.  Open the debugger panel, select “raw” stream and the device in the drop down menus as shown.  

In the payload window, enter {"sample_report":0}

Enable Debug Logging.

Click “Send”.  This will send the payload event to the target stream and user.  Refresh the logs to confirm that the workflow executed without errors.

Check your email.  You should have received the Daily report email with Alerts listed on the bottom of the email.

What Next?

Congrats, you’ve completed this tutorial!  Next, try adding monitoring for pressure and humidity following similar steps.