Gravio Blog
July 2, 2021

[Tutorial] Realtime interfaces and physical spaces — with WebSockets, Next.js and IoT

Creating a physical user experience with Gravio using a combination of technologies and concepts, including a custom Next.js (a React.js framework) setup, WebSockets and HTTP.

This article is about combining various technologies and concepts to create a physical user experience using Gravio and a custom Next.js (a React.js framework) setup, with WebSockets and HTTP APIs. Links to all the code and configuration you’d need to create the same setup are included below.

By the way, our quick definition of IoT (Internet of Things) is about connecting a Thing, like an electronic button or temperature sensor, to the network (or Internet) — enabling all connected Things to communicate, and therefore enabling new forms of interactivity, learning, safety and so on.


The Challenge

Create a simple experience for capturing and responding to user feedback within a physical space.

The experience needs to be…

  • Interactive - using a physical device or two, like buttons, for the user to push
  • Visual - showing a visual response on a display to the user, acknowledging their interaction in real-time

It might look and work something like this:

Illustration of a person with a choice of two buttons to push and a visual display
A person with a choice of two buttons and a display that shows a question, and will show their response when they push a button.

…and it has to be lightweight too

Due to time constraints, this prototype had to be built quickly, with the intent to create the simplest, most direct real-time solution for getting the information from the devices to a display. To help support that effort, we’d ideally keep the number of moving parts to a minimum too.

Also, being a prototype, we could test it in a real environment and take learnings that could inform any production build thinking down the line.


User devices:

  • A display — for user messaging
  • Two buttons — for user interaction

How it should work:

  • The display should initially show ‘Did you have a good experience?’
  • The first button should be for the ‘Yes’ response
  • The second button should be for the ‘No’ response
  • On pressing either button, the display should show a Yes or No response accordingly for 5 seconds, then revert to the initial message


Approach and thinking

Technically, there are various ways to approach a project like this, but given the limited time, our focus needed to be on the experience. So we prioritised technologies we were already familiar with to be the most productive.

And for us that meant using web technologies for the visual side (a web app), so given this choice and the real-time requirement, we’d also use WebSockets for the real-time updates.


Architecture

At a high level, we saw the solution having three core components:

1. An IoT Edge environment

That provides the user with:

  • The interactive devices — i.e. buttons to push
  • A means for the data (from the buttons) to be shared elsewhere — e.g. onto the network, and beyond if needed

2. A Middle layer

As mentioned, we chose to use web technologies to speed up the prototype UI build, and that meant using WebSockets for the real-time requirement.

So a Middle layer was needed to transform the data from the Edge and pass it to the web app, and because Gravio is flexible and offers different ways to share that data, we saw two obvious options for the Middle layer (see below).

3. A web-based User Interface / Front end — the UI

This consumes the WebSocket updates and then provides the user with feedback — shown on the visual display — to close the loop.

Middle layer options

MQTT server

In IoT land, MQTT is the de facto protocol for IoT-based messaging. So our initial thought was to use it for this prototype too, however:

  • Setting up an MQTT broker properly (and everything that comes with that e.g. TLS, etc) does take a fair amount of time and effort
  • We hadn’t yet discovered what data we’d need to provide to the interface, or how
  • We wanted fewer moving parts to help reduce the risk of unexpected complexity in building out the experience
  • We knew that Gravio was flexible enough to provide the data in a range of formats

So we considered a custom approach too…

Custom server(s)

This was essentially a combination of:

  • An HTTP server listening for certain requests to come in from the Edge / Gravio, i.e. when a button is pushed
  • A WebSocket server that integrates with the HTTP server’s incoming requests and passes them on to WebSocket clients (we sketch this combination in code in the Next.js section below)

Here’s a rough illustration of our options:


Solution — the final approach

We chose the custom route for the Middle layer, working hand in hand with Gravio. Although this approach might seem to have more unknowns, ultimately it was the tighter level of control and fewer moving parts that would allow us to be more flexible, and therefore more creative, with the final experience.

Next.js

For the custom Middle layer we decided to use Next.js, and we were even able to combine the Middle layer features (the HTTP and WebSocket servers) and the UI into just one Next.js project by using a custom server, referencing the Next.js Express example along with some other WebSocket and Node.js references. See the code for the final solution:

https://github.com/dfjs/realtime-physical-spaces

Note: You can review the code and setup in more detail for a deeper understanding; the code includes comments outlining what’s going on.
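
To make the approach concrete, here’s a rough sketch of the idea. It’s our own illustrative code rather than the repo’s exact implementation, and the /ws path and message shape are assumptions made for this sketch: an Express app serves the button endpoints and pushes messages onto an in-memory queue, a ws WebSocket server broadcasts queued messages to connected clients, and Next.js handles everything else.

// server.js: a rough sketch of the custom server idea (not the repo's exact code)
const express = require('express')
const next = require('next')
const { WebSocketServer, WebSocket } = require('ws')

const app = next({ dev: process.env.NODE_ENV !== 'production' })
const handle = app.getRequestHandler()
const queue = [] // in-memory message queue

app.prepare().then(() => {
  const server = express()

  // Gravio Actions call these endpoints when a button is pushed
  server.get('/buttons/:answer', (req, res) => {
    const { answer } = req.params
    if (answer !== 'yes' && answer !== 'no') return res.sendStatus(404)
    queue.push({ answer })
    res.sendStatus(200)
  })

  // Everything else (pages, assets, etc) is handled by Next.js
  server.all('*', (req, res) => handle(req, res))

  const httpServer = server.listen(3000, () => {
    console.log('Ready on http://127.0.0.1:3000')
  })

  // The WebSocket server shares the same underlying HTTP server
  const wss = new WebSocketServer({ server: httpServer, path: '/ws' })

  // Check the queue every 100ms and broadcast any new messages to all clients
  setInterval(() => {
    while (queue.length > 0) {
      const message = JSON.stringify(queue.shift())
      wss.clients.forEach((client) => {
        if (client.readyState === WebSocket.OPEN) client.send(message)
      })
    }
  }, 100)
})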

Let’s get started!

Requirements

Note: we won’t cover installing and setting up Gravio in this article, but you can refer to the setup guides for macOS here and for Windows here.

  • A Mac or Windows machine — to configure and run Gravio and the Custom server on, and a browser (of course)
  • Node.js (tested with v14.15.0) — for Next.js, the server and front end
  • Git (optional) — for source control (or you can download the server code directly)
  • Gravio HubKit — the Gravio Edge gateway server
  • Gravio Studio — the Gravio UI for configuring HubKit and adding devices
  • Gravio Basic subscription — this includes 4 rental devices of your choice, including the buttons we use here, and a USB Zigbee dongle
  • Gravio USB Zigbee Dongle — used by Gravio HubKit to speak to Zigbee devices (like the buttons)
  • Two Wireless Mini Buttons — aka ‘Switches’

Activity flow to implement

This is the rough flow of activity that we’ll need to implement to support the desired experience:

  1. User pushes a button, which transmits a push event to the Gravio Edge Gateway
  2. Gravio receives the event and calls the Trigger associated with the button pushed
  3. The Trigger then calls the associated Action, which, depending on the button, will make an HTTP GET request to the URL defined in the step
  4. The HTTP server is listening for these requests, and on receiving a call will map it to the relevant handler (i.e. yes or no), then push the corresponding message onto the in-memory message queue
  5. The WebSockets aspect of the server checks the message queue every 100ms for new messages and, on finding a new message, pushes it out to the clients
  6. Assuming the web app is running and has connected to the WebSocket server, then on receipt of a message the appropriate code branch will be followed to show the relevant message for a set period of time, after which it resets (see the client sketch below)
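
As a hedged sketch of that client-side behaviour (the component name, message shape and WebSocket path are our own assumptions for illustration, matching the server sketch above):

// pages/index.js: a rough sketch of the client behaviour (illustrative, not the repo's exact code)
import { useEffect, useState } from 'react'

const QUESTION = 'Did you have a good experience?'

export default function Home() {
  const [message, setMessage] = useState(QUESTION)

  useEffect(() => {
    // Connect to the WebSocket server (the /ws path is an assumption for this sketch)
    const ws = new WebSocket('ws://127.0.0.1:3000/ws')
    let timer

    ws.onmessage = (event) => {
      const { answer } = JSON.parse(event.data) // e.g. { answer: 'yes' }
      setMessage(answer === 'yes' ? 'Great, thanks for letting us know!' : "Sorry to hear that, we'll do better!")
      // Revert to the initial question after 5 seconds
      clearTimeout(timer)
      timer = setTimeout(() => setMessage(QUESTION), 5000)
    }

    return () => {
      clearTimeout(timer)
      ws.close()
    }
  }, [])

  return <h1>{message}</h1>
}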

Onto the setup!


Part 1 of 2: The IoT Edge environment

First, let’s set up the physical (or Edge) environment using Gravio Studio — remember, you’ll need the two buttons we’re using here to create the same setup.

1. In Gravio Studio, starting in the Device tab, if you don’t already have an Area created, create an Area called Sensors, using the Add Area button in the main toolbar

   1.1 Then add two Layers called ‘Yes’ and ‘No’ with ‘Aqara-Single Button’ as their types

Your Layers list should look like this:

Screenshot 1: Gravio Studio Devices view, showing the ‘Yes’ and ‘No’ layers for Buttons

2. Next, you’ll need to pair two Buttons — see the pairing reference here

   Note: from Gravio 4.3 these buttons should automatically be assigned to the layers you created in step 1.

3. Next open the Actions modal, and create a new Action by clicking the ‘+’ button in the top right (Action Editor reference here)

   3.1 In the create Action modal, name the new Action ‘ResponseYes’

   3.2 In the new ResponseYes Action window, click Add Step, select the Network tab, select the HTTP Request step type, and click Add

   3.3 Select the new HTTP Request step, and in the URL field enter the HTTP server’s request endpoint (we’ll assume the server is running on your local machine here): http://127.0.0.1:3000/buttons/yes

Now your action should look something like this — simple:

Screenshot 2: ‘ResponseYes’ action with HTTP Request step

Next we’ll create the ResponseNo action.

4. Close the ResponseYes action; you’ll now be back in the Actions view. Click the ResponseYes action to select it, then click the Duplicate selected Action button in the toolbar:

‘Duplicate selected Action’ button

   4.1 In the Duplicate Action modal, enter ResponseNo for the action name, then click OK

   4.2 You’ll be taken back to the Actions view. Open the new ResponseNo action, select the HTTP Request step, and amend the URL field in the HTTP request to: http://127.0.0.1:3000/buttons/no

Your ResponseNo action should now look something like this:

Screenshot 3: ‘ResponseNo’ action with HTTP Request step

   4.3 Now close the Action, and close the Actions view.

You should now have two actions that will make an HTTP request when they’re triggered, which we’ll set up next.

5. Go to the Trigger tab, and click the Add new Trigger button, which looks like this:

Create Trigger button

   5.1 In the New Trigger modal, enter the name PushYesButton

   5.2 Then within the Conditions tab, select Sensors from the Area dropdown, and in the Key Layer dropdown, select Yes (your Yes button device)

   5.3 Then, to the right of the Physical Device ID field, click the ‘+’ button, which should show your button. Tick its checkbox, and make sure the Button press dropdown is set to Single press (the default)

Your new trigger should now look like this:

New trigger for Pushing the ‘Yes’ button

   5.4 Now select the Action tab, and for the Action dropdown, select ResponseYes, then click Add (this will create and close the Trigger)

6. Now follow the same steps you used to create the PushYesButton trigger, but for the PushNoButton equivalent. Your new trigger should look like this:

New trigger for Pushing the ‘No’ button

   6.1 In the Action tab, for the Action dropdown, select ResponseNo, then click Add — this will create and close the Trigger

Your Gravio Edge environment is now set up and ready to go!

Part 2 of 2: HTTP, WebSocket and UI server

Here are the steps to download, install and get the server running. We’ll go through what it’s doing and the key areas later.

Steps:
   a) Git clone (or download the Zip) from https://github.com/dfjs/realtime-physical-spaces
   b) From within the project folder, install the server (i.e. its dependencies)
   c) Start the server
   d) Check that it’s running correctly (it’ll say “Ready on…”)

Here are the steps above on the command line:

$ git clone https://github.com/dfjs/realtime-physical-spaces
$ cd realtime-physical-spaces
$ npm install
$ npm start
Ready on http://127.0.0.1:3000

And you’re done!

  • Gravio and your server are now ready and waiting for your user interactions
  • To start testing, simply point your browser at http://127.0.0.1:3000/.
  • Time for feedback!
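
Tip: if you don’t have the buttons to hand, you should be able to simulate a push by calling the same endpoint Gravio’s actions would call (assuming the routes configured above):

$ curl http://127.0.0.1:3000/buttons/yes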

Notes on the server: this server is built with prototyping in mind, specifically for the scenario we have here, which makes the code easier to understand, change, or adapt for your own use case.

Bonus / Stretch goal

We used a development machine to run the Edge environment (Gravio), the Middle layer (Next.js, etc) and a browser to show the UI. But if you wanted to put this out into the real world, you’d want to use something more suitable, and what we suggest below could be a good start.

A self-contained approach

For a setup that’s closer to real-world use — like our challenge illustration above — you’d likely use a large standalone display to make it visible and readable for customers.

For your computing device, you could use a Raspberry Pi with Ubuntu OS (64 bit) installed, plugged directly into and mounted behind the display.

With the Pi plugged into the display, you could run a browser with the web app in full screen mode (i.e. no browser chrome / ui) — a common approach in retail environments for example.
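
For example, with Chromium installed on the Pi, launching the app in kiosk (full screen) mode could look something like this (the exact browser package name may vary by install):

$ chromium-browser --kiosk http://127.0.0.1:3000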

Finally, simply place (or stick!) the two physical buttons nearby, done!

This setup would create a seamless experience for your users to interact with, which you can monitor, analyse and tweak (then scale!) in the future.

What could you use this for?

This experience has a range of applications, especially when you start introducing more devices, longer interactions and so on. For example:

  • Retail — customer interactions, feedback and other shopping experiences
  • Events — attendees could provide feedback to event organisers or interact with speakers
  • Travel — travellers could give feedback on queues, security, etc in places like airports and train stations

And more! This is the power of IoT — the ability to connect physical spaces and create connected experiences.

All with the benefit of flexible privacy setups, reduced data handling, and no external network dependencies, like the Cloud.

Looking for more info? Get in touch on Twitter or email info@gravio.com 👋
