
Serverless IoT With Particle And Amazon Web Services


The internet of things, or IoT for short, has seen tremendous activity and interest this past year. From enterprises to healthcare providers and even automobile companies, the idea of a connected world is beginning to appeal more and more to everyday consumers and businesses alike. This tectonic shift in the way the things around us operate has opened the door to a plethora of new and exciting products, and it has placed cloud service platforms such as Amazon Web Services (AWS) at the forefront of scalable support for these new technologies.

Sometime last year I picked up a Particle Photon: a tiny, low-powered Wi-Fi chip intended for rapid prototyping of IoT projects. The small board, about the size of an eraser, uses a cloud-based development toolset to update the firmware of the device with your custom C++ code. The device's SDK gives a developer access to a number of pins that allow for extensive capabilities and enhancements.

In this post, I outline the steps necessary to create a simple data-capturing device, in this case a sleep sensor, that sends the information it collects to AWS for processing and storage. This technique can be used for a number of entry-level IoT projects, and all of the tools included in this write-up are off the shelf and available to everyday consumers.

Particle Photon + AWS

AWS and Particle In Action

One of the more impressive features of the Particle platform is its built-in webhook integration. The Wi-Fi-connected circuit boards can send data to, and receive data from, webhooks set up through the Particle service. This capability opens the door to a number of interesting solutions, including out-of-the-box integrations with IFTTT.

This webhook integration also allows for interacting with custom API endpoints developed specifically for processing the data generated by the Particle. I set out to create a sleep sensor, leveraging an accelerometer in conjunction with the Particle and AWS to gauge the "quality" of my sleep throughout the night, sending minute-by-minute data about my movements to the cloud for processing. The following sections explain how this works and which services must be leveraged to make it possible.

The AWS Configuration

For this project, I used three AWS services: a DynamoDB table to store the captured information, a Lambda function to process the data from the Particle and write it to the table, and an API Gateway endpoint to publicly expose the Lambda function and perform some authentication before allowing the data to be written. This section explains how to configure these services.

DynamoDB Configuration

DynamoDB is a NoSQL storage service that is great for IoT projects. It's extremely inexpensive, and performance is fantastic. For this project I created a single table named SleepData with a Primary Partition Key of deviceID and a Primary Sort Key of published_at, set to the Number type. This will allow us to scan or query our DynamoDB table for specific Photon devices at specific time periods when it comes time to read the data.
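To show how this key schema pays off at read time, here is a minimal sketch of querying the table for a single Photon's readings within a time window. The device ID and the epoch-millisecond bounds are hypothetical placeholders, not values from the original project.

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('SleepData')

# query one device's records between two epoch-millisecond timestamps
# (the device ID and time bounds below are placeholders)
response = table.query(
    KeyConditionExpression=(
        Key('deviceID').eq('[yourdeviceid]') &
        Key('published_at').between(1451624400000, 1451653200000)
    )
)

for item in response['Items']:
    print(item['published_at'], item['data'])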

After creating the DynamoDB table, I began configuring the Lambda function. The function is intended to read a JSON payload delivered via the API Gateway, sanitize and normalize the data, and store it in the DynamoDB table. I also included some logic in the function to track when it should be recording this information and when it shouldn't, so I'm only storing information while I'm actually asleep.

from __future__ import print_function

import logging
import boto3
from datetime import datetime
from boto3.dynamodb.conditions import Key, Attr

# enable basic logging to CloudWatch Logs
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# set up the DynamoDB table
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('SleepData')

# set up conversion to epoch
epoch = datetime.utcfromtimestamp(0)
now = datetime.now()


def lambda_handler(event, context):
    # determine if the user is asleep
    sleepState = sleepCheck()

    # if the user has triggered the buttons, perform some logic
    if event['data'] == 'True':
        # if the user is awake, toggle to asleep; if not, vice versa
        if sleepState == 'awake':
            table.put_item(
                Item={
                    'event_name': event['name'],
                    'published_at': int(unix_time_millis(datetime.strptime(
                        event['published_at'], '%Y-%m-%dT%H:%M:%S.%fZ'))),
                    'data': 'true',
                    'state': 'asleep',
                    'deviceID': event['source']
                }
            )
        else:
            table.put_item(
                Item={
                    'event_name': event['name'],
                    'published_at': int(unix_time_millis(datetime.strptime(
                        event['published_at'], '%Y-%m-%dT%H:%M:%S.%fZ'))),
                    'data': 'true',
                    'state': 'awake',
                    'deviceID': event['source']
                }
            )
    else:
        # if the user is asleep, and the buttons weren't pressed,
        # send the data to DynamoDB
        if sleepState == 'asleep':
            table.put_item(
                Item={
                    'event_name': event['name'],
                    # convert the date/time to epoch for storage in the table
                    'published_at': int(unix_time_millis(datetime.strptime(
                        event['published_at'], '%Y-%m-%dT%H:%M:%S.%fZ'))),
                    'data': int(event['data']),
                    'state': 'asleep',
                    'deviceID': event['source']
                }
            )
        else:
            print('Not asleep')
            print(event)

    return 'Success!'


# a function to convert a datetime to epoch milliseconds
def unix_time_millis(dt):
    return (dt - epoch).total_seconds() * 1000.0


# a function to check whether the user is currently "sleeping"
def sleepCheck():
    fe = Key('data').eq('true')
    pe = 'published_at, deviceID, #da, #st'
    ean = {'#da': 'data', '#st': 'state'}
    response = table.scan(
        FilterExpression=fe,
        ProjectionExpression=pe,
        ExpressionAttributeNames=ean
    )
    # return the state recorded on the most recent matching item
    x = len(response['Items'])
    y = 0
    for i in response['Items']:
        y = y + 1
        if y == x:
            return str(i['state'])

Finally, I configured the API endpoint that sends data to the Lambda function. I wanted to limit who was capable of sending the information, so I also required an API key for all requests. You can link your Lambda function to an API Gateway by navigating to the API endpoints section of your Lambda function in the AWS console. You can then create a new API Gateway and set it up accordingly. There is an excellent tutorial on API Gateway and Lambda in the AWS documentation.

For this API Gateway, we'll want to make sure we have the Method set to PUT and Security set to "Open with access key." This ensures that the data we send to the API Gateway is secured, and that no one without an access key can send data to our endpoint.
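Before wiring up the Particle side, it's worth exercising the endpoint by hand. The sketch below is my own illustration, not part of the original project: it sends a payload shaped like what the Lambda expects, with the required x-api-key header. The endpoint URL, API key, and sample values are placeholders you must replace.

import json
import urllib.request

# hypothetical smoke test; replace the URL and key placeholders
url = 'https://[yourendpoint].execute-api.us-east-1.amazonaws.com/prod/ParticleSleepV1'
payload = {
    'name': 'sendSleep',
    'data': '12',
    'source': '[yourdeviceid]',
    'published_at': '2016-01-15T06:30:00.000Z'
}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode('utf-8'),
    headers={'Content-Type': 'application/json',
             'x-api-key': '[yourapikey]'},
    method='PUT'
)

with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())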

That about sums up what’s going on on the AWS side. In a future project I’ll explore processing the data stored in the DynamoDB table. For now, we’re only concerned with capturing it for future use.

The Particle Photon Configuration

For this project, I leveraged the "Internet Button" shield, which comes packed with an accelerometer, four buttons, and a number of LEDs. This gave me the ability to interact with the API, turning data recording on and off, and gave me some visual feedback when I triggered an action on the device, such as setting it to sleep.

In the world of Particle, a webhook is a published endpoint, attached to your account, that contains the information needed to send data to an external API. Setting this up can be a bit of a challenge, as you'll need to perform the operation through the CLI, and there isn't much feedback when something is set up wrong. The following steps can be used to configure a Particle webhook against an AWS API Gateway.

{
    "event": "sendSleep",
    "url": "https://[yourendpoint].execute-api.us-east-1.amazonaws.com/prod/ParticleSleepV1",
    "requestType": "POST",
    "headers": {
        "x-api-key": "[yourapikey]"
    },
    "json": {
        "name": "{{SPARK_EVENT_NAME}}",
        "data": "{{SPARK_EVENT_VALUE}}",
        "source": "{{SPARK_CORE_ID}}",
        "published_at": "{{SPARK_PUBLISHED_AT}}"
    },
    "mydevices": true,
    "noDefaults": true
}

The first thing you'll need to do is install the Particle CLI and log in. Luckily, Particle has put together a great resource for setting this up. After getting the CLI configured, you'll need to create a JSON definition file for the webhook. This file contains the API key sent in the header, a template for the data being sent, and the URL of the endpoint; a sample of this file is shown above. Finally, from the CLI, you'll need to create the webhook:

particle webhook create SleepAPI.json

This will associate the webhook, as it is named in the JSON file, with your account. To send data to it, you'll use the Particle.publish() function from your code with the webhook's event name as the first argument. More information on Particle's implementation of webhooks can be found in their documentation.

The small block of code running on the Particle loops every 100 milliseconds and captures any movement that took place. I'm using a special function, specific to the Internet Button, that returns the "lowest LED" during each loop. If the "lowest LED" has changed since the previous loop, I record that as a single movement. After one full minute of looping (600 passes at 100 milliseconds each), the total is sent to my AWS API Gateway via the Particle webhook functionality.

// Make sure to include the special library for the Internet Button
#include "InternetButton/InternetButton.h"

// Create a Button named b. It will be your friend, and you two will spend lots of time together.
InternetButton b = InternetButton();

int ledOldPos = 0;
char ledPosTrust[5];
int moveCount = 0;
int loopCount = 0;

// The code in setup() runs once when the device is powered on or reset. Used for setting up states, modes, etc.
void setup() {
    // Tell b to get everything ready to go.
    // Use b.begin(1); if you have the original SparkButton, which does not have a buzzer or a plastic enclosure.
    b.begin();
}

/* loop(), in contrast to setup(), runs all the time. Over and over again.
   Remember this particularly if there are things you DON'T want to run a lot. Like Spark.publish() */
void loop() {
    // Load up the special "lowestLed" value
    int ledPos = b.lowestLed();

    // Turn the LEDs off so they don't all end up on
    b.allLedsOff();

    // I'm movin'; increment the counter
    if (ledOldPos != ledPos) {
        sprintf(ledPosTrust, "%d", ledPos);
        moveCount++;
    }

    // The buttons have been triggered! Record this!
    if ((b.buttonOn(2) and b.buttonOn(4)) or (b.buttonOn(1) and b.buttonOn(3))) {
        b.ledOn(3, 0, 255, 0); // Green
        b.ledOn(9, 0, 255, 0); // Green
        Particle.publish("sendSleep", "True", 60, PRIVATE);
        delay(500);
    }

    // if we've looped through 600 times (one minute), fire off the webhook
    if (loopCount >= 600) {
        Particle.publish("sendSleep", String(moveCount), 60, PRIVATE);
        moveCount = 0;
        loopCount = 0;
    }

    loopCount++;
    ledOldPos = ledPos;

    // Wait a mo'
    delay(100);
}

The Results

The end result of tying all of these services together is a fairly robust, low-cost IoT platform. I was able to create a prototype in a few hours, using off-the-shelf products and a bit of connectivity code sprinkled throughout. While my example is a movement-based "sleep" tracker, it's easy to see how this type of IoT design can be used in a number of applications.

A Night of Sleep Data

The chart above is a sample of data extracted from the DynamoDB table after a night of sleep. The captured data is complete, and it is an exciting first step in the creation of my roll-your-own sleep tracker. There's still plenty of work to do when it comes to processing the information, but the initial results are inspiring. Serverless architecture is quickly becoming a viable reality.
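If you'd like to produce a similar chart, a rough sketch along the following lines would work, reusing the query pattern from earlier with matplotlib for plotting. This is my own illustration, not the original tooling; the device ID and time bounds are placeholders, and the button-toggle records (whose data field is the string 'true') are skipped.

from datetime import datetime

import boto3
import matplotlib.pyplot as plt
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb').Table('SleepData')

# placeholders: one device, one night of epoch-millisecond bounds
items = table.query(
    KeyConditionExpression=(
        Key('deviceID').eq('[yourdeviceid]') &
        Key('published_at').between(1451624400000, 1451653200000)
    )
)['Items']

times, moves = [], []
for item in items:
    if item['data'] == 'true':  # skip the button-press toggle records
        continue
    times.append(datetime.utcfromtimestamp(int(item['published_at']) / 1000.0))
    moves.append(int(item['data']))

plt.plot(times, moves)
plt.xlabel('Time (UTC)')
plt.ylabel('Movements per minute')
plt.title('A Night of Sleep Data')
plt.show()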


