Talking to Your IoT Projects

Blog / Rhys Hill / December 12, 2019

Products like the Google Home and Amazon Alexa have opened up a new avenue for interacting with our devices—talking to them. In this post we’ll take our first steps towards taking advantage of these new capabilities by wiring up a simple LED to a Raspberry Pi and integrating its control with the Google Assistant. Throughout this process we’ll touch on building a simple server utilising Node.js, setting up a conversation on Dialogflow with a webhook fulfilment and linking these with Ngrok. We’ll unpack each of these technologies and why they’re important along the way. If you’d like to give this example a go you’ll need the following:

  • Raspberry Pi running Raspbian
  • LED
  • Breadboard
  • Resistor/s
  • A device with the Google Assistant (Google Home, Android Smartphone)
Fig 1: LED Circuit Diagram

For our example we’ll use physical pin 16 on the Raspberry Pi as our output, which is the pin referenced in our code snippets. If you’d like to use a different pin, check this pinout guide for alternatives. You can also increase the brightness of your LED by using a smaller resistor.
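If you’d rather size that resistor than guess, the standard formula is R = (Vsupply − Vf) / I. The values below are illustrative assumptions (a 3.3 V GPIO pin, a typical red LED with a roughly 2 V forward drop, and a 10 mA target current), so check your LED’s datasheet before trusting them:

```javascript
// Illustrative current-limiting resistor calculation.
// All three values are assumptions -- substitute your own from the datasheet.
const supplyVoltage = 3.3;   // Raspberry Pi GPIO high level, in volts
const forwardVoltage = 2.0;  // typical red LED forward drop, in volts
const targetCurrent = 0.010; // 10 mA target current

const resistance = Math.round((supplyVoltage - forwardVoltage) / targetCurrent);
console.log('Use at least ' + resistance + ' ohms'); // round up to the next standard value
```

A smaller resistor increases the current and therefore the brightness, which is why the paragraph above suggests it; the formula tells you how far you can reasonably go.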

Installing Node on Your Pi

Once you have your LED connected, the next thing we’ll need to do is make sure our Pi has Node and a package manager installed. A quick way to do this is to run these four commands:

$ wget https://nodejs.org/dist/v10.15.1/node-v10.15.1-linux-armv7l.tar.xz
$ tar xf node-v10.15.1-linux-armv7l.tar.xz
$ cd node-v10.15.1-linux-armv7l
$ sudo cp -R * /usr/local

It’s worth noting here that JavaScript is only one of many options for setting up this server. If you’re more comfortable with Python, Java, or even C, have a read through some of our earlier blogs for tips on interacting with hardware in those languages.

Creating a Web Server

So we have an LED wired up; now we need a way to turn it on and off remotely. This can be accomplished using the http library built into Node.js, together with a library called rpio for interacting with the Pi’s GPIO pins. Let’s make a directory for our server and install this package.

$ mkdir led
$ cd led
$ npm install rpio

We can now create our server by copying the example code below into a file called led.js within our led directory.

$ nano led.js

const http = require("http");
const rpio = require("rpio");

/* Open physical pin 16 as an output, starting low (LED off) */
rpio.open(16, rpio.OUTPUT, rpio.LOW);

const hostname = ''; /* Empty string means local host will be used */
const port = 3000;

const server = http.createServer((req, res) => {
	console.log('Setting LED...');

	if (req.method == 'POST') {
		console.log('Handling post request...');
		let body = '';
		req.on('data', chunk => {
			body += chunk.toString();
		});

		req.on('end', () => {
			console.log('Body: ' + body);
			const request = JSON.parse(body);

			if (request.queryResult.parameters.status == 'on') {
				rpio.write(16, rpio.HIGH);
			} else {
				rpio.write(16, rpio.LOW);
			}

			res.statusCode = 200;
			res.setHeader('Content-Type', 'text/plain');
			res.end("LED set");
		});
	} else {
		/* Only POST requests are supported */
		res.statusCode = 405;
		res.end();
	}
});

server.listen(port, hostname, () => {
	console.log('Listening on port: ' + port);
});
This example server will listen on port 3000 on your Pi and process any POST requests it receives. If a received request contains the status “on”, the GPIO pin attached to our LED will be written high; otherwise it will be written low. You might have noticed that the check we’re performing seems to have some irrelevant nesting (queryResult -> parameters -> status). We’ll cover this in a bit more detail later, but for now all we need to know is that Dialogflow only makes requests in that format, so matching it here will save us time later.
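To make that nesting concrete, here’s a tiny standalone sketch of just the parsing step. The statusFromBody helper is our own name for illustration, not part of the server above:

```javascript
// Extract the LED status from a Dialogflow-style request body.
// Collected parameters always sit under queryResult.parameters.
function statusFromBody(body) {
	const request = JSON.parse(body);
	return request.queryResult.parameters.status;
}

console.log(statusFromBody('{"queryResult":{"parameters":{"status":"on"}}}')); // "on"
```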

We can now start the server by running node led.js on the Pi. To test it, make a POST request from another terminal to your Pi’s IP on port 3000. The request body should be a blob of JSON containing the desired LED status, as shown below.

$ curl -X POST -d '{"queryResult":{"parameters":{"status":"on"}}}' \
  http://<YOUR_PI_IP>:3000 --header 'Content-Type:application/json'

Account Set Up

Now that we have a method for controlling our LED remotely, we can work on integrating the Google Assistant. Google Assistant can be found on a plethora of Google products, most notably the Google Home and Home Mini, but it’s also available on most Android smartphones and tablets. To continue, you’ll need a device that has the Assistant installed and a Google account. Given the current setup of Google’s other services, some of which we’re about to explore, you’ll also need separate accounts for a few of these services, all of which need to be linked to the same Google account. We’ll need:

  • Google Assistant on a Google Home, smartphone, or tablet
  • Dialogflow

These both need to be linked to the same Google account, as Dialogflow’s agents are not actually deployed to a physical device, they are simply made accessible to all devices associated with that account. This is convenient, as it actually ends up saving us some time later.

Creating an Agent

We’ll be using Dialogflow to orchestrate our conversations with the Google assistant. It can be used to trigger events based on key phrases, such as in the case of our example, and can also be used to collect information to make decisions, like whether an LED should be turned on or off. An intent, in the context of Dialogflow, is a phrase or series of back and forth phrases which trigger an action and optionally collect some parameters. That’s what we want, an agent with an intent that collects a parameter.

Log onto the Dialogflow console and create a new agent (first option under the Dialogflow logo). The name you give this agent will be how you access the functionality you’re creating, so make it something relevant and easy to pronounce, like “Saoirse”.

Fig 2: Creating an Agent

Add an entity to this agent via the “Entities” tab on the left. To keep things simple for now, call the entity “status” and give the two possible values as “on” and “off”. You can also add synonyms for each of these if you like. Hit save.

Fig 3: Adding an Entity

Now create an intent via the “Intents” tab on the left. Call it “led control” and add a bunch of training phrases. These training phrases are what you would expect a user to say to trigger this behaviour, e.g. “turn off the light”, “switch the light on”. We don’t need to be exhaustive, but the more phrases you add, the more likely the model is to correctly detect previously unseen phrases.

While adding these phrases also label the part of the phrase that represents the desired status of the LED. To do this, highlight the word “on” or “off” within a phrase. This presents us with a drop down menu of entities. Select the status entity we created earlier.

Fig 4: Training the Agent

The last thing you’ll need to add to this intent is some response text, in this case, a phrase that will be spoken by the Google Assistant once the command is accepted, e.g.:

“Okay, I’ll switch $status the light”

The $ character allows you to insert the registered value of an entity. Hit save (scroll up, look right) and you will see a few prompts in the bottom right corner: “Intent saved”, “Finished training new intent”. You can now test how the agent will behave via the “try it now” input in the top right. Test a bunch of phrases similar to your training phrases to see how useful the agent is currently. If it’s missing phrases it should detect, add some more training phrases. You’ll also notice that it’s picking out the words “on” and “off” as the values for status and setting the response to match.
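Under the hood that response is just a template with the entity value substituted in. Dialogflow performs the substitution for you, but as a rough sketch of the behaviour (the fillTemplate helper is ours, purely for illustration):

```javascript
// Mimic Dialogflow's $entity substitution in a response template.
// This is only an illustration -- Dialogflow does the real substitution.
function fillTemplate(template, params) {
	return template.replace(/\$(\w+)/g, (match, name) =>
		params[name] !== undefined ? params[name] : match);
}

console.log(fillTemplate("Okay, I'll switch $status the light", { status: "on" }));
// "Okay, I'll switch on the light"
```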

Adding a Webhook Fulfilment

A fulfilment lets us contact an external backend to carry out a wider range of functionality than the standard back and forth conversation handled by Dialogflow. Enabling a webhook fulfilment means that Dialogflow will contact an endpoint you provide to get a response to use in your conversation, and since we control that endpoint, it can also be used to control our LED. As we touched on earlier, our endpoint must be able to accept a request object in a form dictated by Dialogflow. This object contains all of the collected parameters, plus the intent to which it relates for context. Navigate to the Fulfilment tab on the left and enable Webhook Fulfilment.
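Our example server replies with plain text and lets the static response text configured in the console do the talking. If you later want the spoken reply to come from your server instead, Dialogflow’s v2 webhook format expects a JSON body with a fulfillmentText field. A minimal sketch, reusing the same response phrase as before:

```javascript
// Build a Dialogflow v2 webhook response body. The assistant speaks
// whatever we place in fulfillmentText instead of the console's static text.
function buildWebhookResponse(status) {
	return JSON.stringify({
		fulfillmentText: "Okay, I'll switch " + status + " the light"
	});
}

// In led.js this would replace res.end("LED set"), with the
// Content-Type header changed to application/json.
console.log(buildWebhookResponse("on"));
```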

Setting Up Ngrok

Here’s where we hit the hurdle that makes Ngrok necessary. One limitation of Dialogflow’s conversations being completely cloud based, rather than deployed to your local device, is that your fulfilment URL cannot reference devices on your local network. Our fulfilment URL must be public, which means a workaround is required to contact the Pi. To get around this, we’ll expose the port on which your server is running publicly using Ngrok.

Create an Ngrok account by going to the Ngrok getting started page, and download Ngrok for Linux (ARM). This page also provides you with an auth token that you’ll need later. Next, we copy the zip to our Pi, unzip it, and provide Ngrok with that auth token.

$ scp <NGROK_ZIP> pi@<YOUR_PI_IP>:~/
$ ssh pi@<YOUR_PI_IP>
$ unzip <NGROK_ZIP>
$ ./ngrok authtoken <YOUR_AUTH_TOKEN>

Now, we can run Ngrok and forward incoming traffic to port 3000 on our Pi.

$ ./ngrok http 3000

Once this is running on your Pi you’ll be presented with your forwarding address; it’ll look something like https://<RANDOM_ID>.ngrok.io. Simply copy that forwarding address in as your webhook fulfilment URL on Dialogflow and hit save.

Fig 5: Fulfillment Flow Diagram

Congratulations, you’ve finished creating the pipeline shown in Figure 6 below. And the best part is that it’s already available on all your Google devices. To test it, ask your assistant to talk to your agent (remember the name you gave it earlier), and then say, “turn on the light”.

User: “Okay Google, can I talk to Saoirse?”

Assistant: “Alright. Here’s the test version of Saoirse. Hi! How are you doing?”

User: “Turn on the light”

Assistant: “Okay, I’ll switch on the light”

Fig 6: Fulfillment Flow Diagram


Now that you have a basis for talking to your IoT projects, you can start to extend the capabilities of your device, and maybe control something more interesting than an LED. Or, you could try adding new intents to your agent to control other devices.

Fig 7: Enabling Webhook Fulfilment for an Intent

One minor hiccup you might encounter is that if you have more than one intent on your agent, you also need to enable webhook fulfilment for each intent individually. To do this, open the intent in the Dialogflow console, scroll to the bottom, open the fulfilment options, and enable webhook fulfilment for this intent. Another option, for the Amazon fans among you, might be to experiment with the Alexa Skills Kit to see if Alexa can be included too.

Header image courtesy of Pixabay