Create an HMI Service
This guide will walk you through the creation of an HMI as a Service. An HMI enables control of a deployed Solution using a fully custom interface.
This guide builds upon the foundation of other Service-related guides. Please ensure that you have completed and understood the prerequisite guides.
This guide is meant to illustrate the concepts around HMI Services and how they would be implemented. Code for a more complete HMI example can be found on GitHub.
What makes an HMI Service
Installing and accessing an HMI is only supported with on-prem devices available on your local network.
The remainder of this guide will assume that the on-prem device is available at workcell.lan.
For instructions on how to set up local network addressing, see Use the local network.
An HMI Service is a special kind of Service. It provides an interface with controls and information related to a deployed Solution. The HMI Service has two jobs:
- It serves a frontend that can be accessed in a browser.
- It provides an HTTP REST API that lets the frontend talk with Intrinsic platform services (e.g. the executive).
The frontend makes calls to the API, and the HMI Service handles gRPC communication with the relevant Intrinsic platform services.
Development setup
This guide requires a project with the Intrinsic SDK. Follow the guide on how to set up the development environment if you haven't already.
Bazel workspace
You will need a Bazel workspace to create an HMI Service. The workspace can be
created at the root of the project using inctl.
You can skip this step if you already have a MODULE.bazel file in your project.
```shell
inctl bazel init --sdk_repository https://github.com/intrinsic-ai/sdk.git --sdk_version latest
```
Python or Go
This guide is written with code examples in both Python and Go.
You can use any programming language to implement an HMI Service.
- Python
- Go
Follow the basic Python setup guide for Services: link
Make sure you have the correct setup in MODULE.bazel.
The development container does not contain any tooling for Go in its current default configuration, so a few extra installs and dependencies are required. Begin by installing the official Go extension for VSCode.
Follow the Editor setup instructions
in the bazelbuild/rules_go repository to instruct VSCode to use Go from
Bazel rather than a locally installed version.
This enables some auto-completion support for the Intrinsic SDK.
Auto-completion will work once you have built any Go target with Bazel and reloaded the VSCode window. Auto-completion for new dependencies may only work after a build with them already succeeded.
Go dependencies in Bazel
Later in this guide you will be instructed to use extra Go dependencies. Add them to your workspace now.
Add the following to the end of your MODULE.bazel:
```
# This first directive should already be present in your MODULE.bazel.
# Uncomment this line if that's not the case.
# bazel_dep(name = "rules_go", version = "0.49.0", repo_name = "io_bazel_rules_go")

go_sdk = use_extension("@io_bazel_rules_go//go:extensions.bzl", "go_sdk")
go_sdk.download(version = "1.23.0")

bazel_dep(name = "gazelle", version = "0.36.0", repo_name = "bazel_gazelle")
go_deps = use_extension("@bazel_gazelle//:extensions.bzl", "go_deps")
go_deps.from_file(go_mod = "//:go.mod")
use_repo(
    go_deps,
    "com_google_cloud_go_longrunning",
    "org_golang_google_grpc",
)
```
Create three files at the root of your Bazel workspace:

```shell
cd `bazel info workspace`
touch BUILD
touch go.mod
touch go.sum
```
Put the following content into the new BUILD file:
```
load("@bazel_gazelle//:def.bzl", "gazelle")

gazelle(name = "gazelle")
```
Put the following content into the new go.mod file.

```
module intrinsic

go 1.22

require (
    cloud.google.com/go/longrunning v0.5.5
    google.golang.org/grpc v1.66.0
)
```
Leave go.sum empty.
Package a Service
Every Service runs a binary that provides the Service's functionality. This binary is the entrypoint of the Service container image and usually serves a certain kind of traffic (e.g. HTTP) at a specific port. The container image running the binary is finally packaged as a Service with a manifest to create a deployable unit.
Service binary
Begin by creating a new directory called hmi in your development container, at the root of the project.
This directory will contain all the code for the HMI Service.
- Python
- Go
In the hmi directory, create a new file called server.py.
Put the following code into it:
```python
#!/usr/bin/env python3
"""This script works as the binary for the HMI server."""


def main():
    print("Hello world!")


if __name__ == '__main__':
    main()
```
Since every Service is built using Bazel, you must set up the correct
rules for building the binary in a BUILD file.
In the hmi directory, create a file called BUILD alongside server.py.
To make Bazel create a binary for the server, add the following py_binary rule to the BUILD file.

```
py_binary(
    name = "server",
    srcs = ["server.py"],
    main = "server.py",
)
```
At this point, your project file tree should look similar to the following:
```
├── bazel
│   └── content_mirror
│       └── permissive.cfg
├── .bazelignore
├── .bazelrc
├── .bazelversion
├── .devcontainer
│   └── devcontainer.json
├── hmi
│   ├── BUILD
│   └── server.py
├── MODULE.bazel
└── .vscode
    └── settings.json
```
In the hmi directory, create a new file called server.go.
The server.go file will be an executable.
As such it must be part of the main package and have a main() function.
```go
package main

import "fmt"

func main() {
	fmt.Println("Hello world!")
}
```
Since every Service is built using Bazel, you must set up the correct
rules for building the binary in a BUILD file.
In the hmi directory, create a file called BUILD alongside server.go.
To build the server.go into a binary file that can be executed, add a go_binary
rule with server.go as the source to the BUILD file.
It is best practice to name the rule the same as the source file.
```
load("@io_bazel_rules_go//go:def.bzl", "go_binary")

go_binary(
    name = "server",
    srcs = ["server.go"],
)
```
At this point, your project file tree should look similar to the following:
```
├── bazel
│   └── content_mirror
│       └── permissive.cfg
├── .bazelignore
├── .bazelrc
├── .bazelversion
├── .devcontainer
│   └── devcontainer.json
├── hmi
│   ├── BUILD
│   └── server.[go/py]
├── MODULE.bazel
└── .vscode
    └── settings.json
```
The binary file can now be run with Bazel to check that it builds and executes correctly.
Open a terminal in VSCode and navigate to /workspaces/hmi.
Then run:
```shell
bazel run //hmi:server
```
The initial build may take a while (5+ minutes). Subsequent builds will be much faster.
This should print Hello world!.
Create a container image
All Services run a container that runs the associated binary as its entrypoint. This means that creating a Service necessitates creating a container image. This must be done with Bazel as well.
Container images are created in multiple steps when using Bazel.

- Every container image is made up of layers, where each layer is simply a set of changes to the file system in the container. The container image for this guide only needs a single layer for the server binary. The layer is created (as a tar archive) using a pkg_tar rule.
- The layers (or in this case layer) are provided to an oci_image rule. This rule creates the container image from a specified base image and the provided layers. It also specifies the entrypoint, i.e. the binary to execute when the container runs.
- The image must be wrapped in a tarball using an oci_load rule. Tarballs can be loaded directly by container runtimes, such as the container runtime on any on-prem device.
The default bazel workspace has everything we need except for rules_pkg.
Add this directive to MODULE.bazel to get access to rules_pkg.
```
bazel_dep(name = "rules_pkg", version = "1.0.1")
```
- Python
- Go
Now add the required load statements in the BUILD file of the hmi directory:
```
load("@ai_intrinsic_sdks//bazel:python_oci_image.bzl", "python_oci_image")
load("@rules_oci//oci:defs.bzl", "oci_load")
load("@rules_pkg//:pkg.bzl", "pkg_tar")
```
In order to build the container image, put these rules at the end of the BUILD file.
```
pkg_tar(
    name = "server_layer",
    srcs = [":server"],
    extension = "tar.gz",
)

python_oci_image(
    name = "hmi_image",
    binary = "server",
    base = "@distroless_python",
    entrypoint = ["python3", "-u", "/hmi/server"],
    data_path = "/frontend/",
    extra_tars = [":server_layer"],
)

oci_load(
    name = "hmi_tarball",
    image = ":hmi_image",
    repo_tags = ["hmi:latest"],
)
```
Now add the required load statements in the BUILD file of the hmi directory:
```
load("@rules_oci//oci:defs.bzl", "oci_image", "oci_load")
load("@rules_pkg//:pkg.bzl", "pkg_tar")
```
In order to build the container image, put these rules at the end of the BUILD file.
```
pkg_tar(
    name = "server_layer",
    srcs = [":server"],
    extension = "tar.gz",
)

oci_image(
    name = "hmi_image",
    base = "@distroless_base",
    entrypoint = ["/server"],
    tars = [":server_layer"],
)

oci_load(
    name = "hmi_tarball",
    image = ":hmi_image",
    repo_tags = ["hmi:latest"],
)
```
Ensure that your image build setup is valid by building it using Bazel.
```shell
bazel build //hmi:hmi_tarball
```
Create a Service manifest
Each Service requires a Service manifest. The Service manifest contains two key pieces of information about the Service: metadata and the Service definition.
Refer to the Service introduction for more information about the Service manifest.
Service metadata is general information about the Service, such as its unique ID, the vendor, documentation and a display name. You can put whatever is appropriate for you.
The Service definition specifies how the Service behaves. It references the image the Service should run and for an HMI also configures HTTP routing.
Create a manifest next to your BUILD file called manifest.textproto.
You must specify the .textproto extension for the manifest file.
- Python
- Go
```
# proto-file: https://github.com/intrinsic-ai/sdk/blob/main/intrinsic/assets/services/proto/service_manifest.proto
# proto-message: intrinsic_proto.services.ServiceManifest
metadata {
  id {
    package: "my.company"
    name: "hmi"
  }
  vendor {
    display_name: "My Company"
  }
  documentation {
    description: "A simple HMI for My Company."
  }
  display_name: "My Company HMI"
}
service_def {
  http_config: {}
  real_spec {
    image {
      archive_filename: "hmi_image.tar"
    }
  }
  sim_spec {
    image {
      archive_filename: "hmi_image.tar"
    }
  }
}
```
```
metadata {
  id {
    package: "my.company"
    name: "hmi"
  }
  vendor {
    display_name: "My Company"
  }
  documentation {
    description: "A simple HMI for My Company."
  }
  display_name: "My Company HMI"
}
service_def {
  http_config: {}
  real_spec {
    image {
      archive_filename: "tarball.tar"
    }
  }
  sim_spec {
    image {
      archive_filename: "tarball.tar"
    }
  }
}
```
Make sure to include an empty value for http_config in the Service definition.
This enables the HMI to receive HTTP traffic and serve a frontend.
Create the deployable Service
The last step required to package the Service is to feed both the tarball from
the oci_load rule and the manifest into a special intrinsic_service build
rule.
This will package the image so that it can be installed.
First load the intrinsic_service rule from the correct repository by adding the correct load statement
at the top of the BUILD file.
```
load("@ai_intrinsic_sdks//intrinsic/assets/services/build_defs:services.bzl", "intrinsic_service")
```
- Python
- Go
Add a filegroup rule to wrap the tarball, followed by the intrinsic_service rule, to the BUILD file. The filegroup serves as a predictable input to the intrinsic_service rule.

```
filegroup(
    name = "hmi_tarball.tar",
    srcs = [":hmi_tarball"],
    output_group = "tarball",
)

intrinsic_service(
    name = "hmi_service",
    images = [":hmi_tarball.tar"],
    manifest = "manifest.textproto",
)
```
Once the rule is loaded, you must add a filegroup rule to wrap the tarball.
The filegroup serves as a predictable input to the intrinsic_service rule.
Add these rules at the end of the BUILD file.

```
filegroup(
    name = "hmi_tarball.tar",
    srcs = [":hmi_tarball"],
    output_group = "tarball",
)

intrinsic_service(
    name = "hmi_service",
    images = [":hmi_tarball.tar"],
    manifest = "manifest.textproto",
)
```
You can now build your Service using Bazel. This will create a bundle archive that can be installed in a Solution.
```shell
bazel build //hmi:hmi_service
```
Read runtime context
The HMI Service in this example serves a frontend over HTTP. This is the interface that someone (e.g. an operator) interacts with to control the deployment of the Solution. The server binary must serve HTTP traffic at a specific port to provide this interface and the associated functionality.
Learn more about handling HTTP traffic in your Service in the full guide on handling HTTP requests.
An HMI Service can serve HTTP traffic to users through a specific URL exposed on the cluster. The routing to enable this is set up by Intrinsic automatically. In order to serve HTTP traffic on the pre-defined route, the Service must run an HTTP server at a specified port. This port is determined dynamically when the Service starts up and cannot be encoded statically. Instead, every Service can read the HTTP port it should be serving on from the runtime context.
The runtime context contains information that can be relevant to Services at runtime.
It is provided by Intrinsic infrastructure to every Service through a file.
The file is placed in a defined, consistent location inside the Service container.
It contains an encoded RuntimeContext proto.
Service authors can read and decode this proto file to get access to the relevant information in their Service.
The runtime context file is always placed in /etc/intrinsic/runtime_config.pb.
- Python
- Go
Update your server.py file to read the runtime context.
```python
#!/usr/bin/env python3
"""This script works as the binary for the HMI server."""

import logging
import sys

from intrinsic.resources.proto import runtime_context_pb2


def get_runtime_context():
    with open('/etc/intrinsic/runtime_config.pb', 'rb') as fin:
        return runtime_context_pb2.RuntimeContext.FromString(fin.read())


def main():
    context = get_runtime_context()


if __name__ == '__main__':
    logging.basicConfig(stream=sys.stderr, level=logging.INFO)
    main()
```
Now that you have added the dependency on runtime_context_pb2, the py_binary rule must declare it as a dependency.
Add the dependency to the py_binary() rule.
Your rule should now look like this:

All imports in code must be backed by an entry in the deps attribute of the associated BUILD rule.

```
py_binary(
    name = "server",
    srcs = ["server.py"],
    main = "server.py",
    deps = [
        "@ai_intrinsic_sdks//intrinsic/resources/proto:runtime_context_py_pb2",
    ],
)
```
The py_binary rule should now build successfully.
You can utilize the protoio utility available from the SDK to read and decode the runtime context file.
Update your server.go file to read the runtime context.
```go
package main

import (
	"log"

	rcpb "intrinsic/resources/proto/runtime_context_go_proto"
	"intrinsic/util/proto/protoio"
)

const (
	runtimeContextPath = "/etc/intrinsic/runtime_config.pb"
)

func main() {
	rc := new(rcpb.RuntimeContext)
	if err := protoio.ReadBinaryProto(runtimeContextPath, rc); err != nil {
		log.Fatalf("Failed to read runtime context: %v", err)
	}
}
```
The runtime context is required to serve HTTP traffic, so a failure to read it must be considered fatal.
Now that you added new code dependencies, they must be declared as build dependencies in
the deps attribute of the go_binary rule in the BUILD file.
All imports in code must be backed by an entry in the deps attribute of the associated BUILD rule.
```
go_binary(
    name = "server",
    srcs = ["server.go"],
    deps = [
        "@ai_intrinsic_sdks//intrinsic/util/proto:protoio",
        "@ai_intrinsic_sdks//intrinsic/resources/proto:runtime_context_go_proto",
    ],
)
```
The go_binary rule should now build successfully.
Running the binary locally will produce errors because the runtime context file does not exist inside the development container. The file will exist when the Service is deployed to an on-prem device.
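For quick local iteration it can help to fall back to a default port when the runtime context file is missing. This pattern is not part of the Intrinsic SDK; it is a local convenience sketch, and the fallback port (8080 here) is an arbitrary assumption:

```python
import logging
import os

RUNTIME_CONTEXT_PATH = "/etc/intrinsic/runtime_config.pb"
DEFAULT_HTTP_PORT = 8080  # Arbitrary fallback for local runs only.


def get_http_port() -> int:
    """Returns the HTTP port from the runtime context, or a local default."""
    if os.path.exists(RUNTIME_CONTEXT_PATH):
        # On the device: read the port from the runtime context proto.
        from intrinsic.resources.proto import runtime_context_pb2

        with open(RUNTIME_CONTEXT_PATH, "rb") as fin:
            context = runtime_context_pb2.RuntimeContext.FromString(fin.read())
        return context.http_port
    # In the development container the file does not exist.
    logging.warning("No runtime context found; falling back to %d", DEFAULT_HTTP_PORT)
    return DEFAULT_HTTP_PORT
```

Using this helper in main() lets the same binary run both on the device and in the development container.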
Create a frontend
Let's start by creating a page for the frontend.
A frontend is usually some HTML, CSS and JavaScript.
The entry file is an index.html.
You can use any JS framework or other method to create your frontend.
You will need to set up BUILD rules for the framework with Bazel so that you can provide static files
(HTML, CSS, JS) to the binary for serving.
Some frameworks document this (e.g. Angular) and some do not.
Begin by creating a new directory under the hmi directory.
Call this frontend.
Now create an index.html file in this directory (hmi/frontend/index.html).
As a first step, simply put a dummy HTML template into the index.html file.
```html
<!DOCTYPE html>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>HMI</title>
<h1>This is the HMI frontend.</h1>
```
Bundle frontend files
Next, the index.html from the frontend folder needs to be bundled with the
server binary so that it can serve it.
In Bazel this is done through runfiles.
Runfiles are specified for rules using the data field. The files to specify as runfiles
should be wrapped using a filegroup rule.
- Python
- Go
Create the filegroup rule before the existing py_binary rule in the BUILD file.
Then add a data attribute to the existing py_binary rule that references the filegroup.
```
filegroup(
    name = "frontend_files",
    srcs = glob(["frontend/**"]),
)

py_binary(
    name = "server",
    srcs = ["server.py"],
    main = "server.py",
    data = [":frontend_files"],
    deps = [
        # deps omitted...
    ],
)
```
Create the filegroup rule before the existing go_binary rule in the BUILD file.
Then add a data attribute to the existing go_binary rule that references the file group.
```
filegroup(
    name = "frontend_files",
    srcs = glob(["frontend/**"]),
)

go_binary(
    name = "server",
    srcs = ["server.go"],
    data = [":frontend_files"],
    deps = [
        # deps omitted...
    ],
)
```
Whenever the binary is now built and run with Bazel, the runfiles will be placed
in a special location alongside the compiled binary and can be referenced from
it.
The pkg_tar rule for the server layer will not include runfiles automatically.
You must specify include_runfiles on the existing pkg_tar rule in the BUILD file to enable this.
```
pkg_tar(
    name = "server_layer",
    srcs = [":server"],
    include_runfiles = True,
    extension = "tar.gz",
)
```
Serve frontend files
With the HTTP port from the runtime context you can set up the HTTP server to serve traffic.
- Python
- Go
The http.server library provides the HTTPServer class.
Our Service will use HTTPServer to listen for HTTP requests and serve the frontend.
Add the following imports to create the HTTP server in server.py.

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

try:
    from rules_python.python.runfiles import runfiles
except ImportError:
    # https://github.com/bazelbuild/rules_python/issues/1679
    from python.runfiles import runfiles
```
Now, in the main function of server.py, retrieve the http_port from the runtime_context, and create an HTTP server that listens on that port.
```python
def main():
    context = get_runtime_context()
    http_port = context.http_port
    logging.info(f"HTTP port provided by runtime context: {http_port}")

    logging.info("Creating HTTP server.")
    http_server = HTTPServer(
        server_address=("", http_port),
        RequestHandlerClass=MyHandler,
    )

    logging.info("Starting HTTP server.")
    http_server.serve_forever()
```
The MyHandler class also needs to be defined.
It dictates how the http server handles requests.
In order to get the index.html into the HMI you must now serve it from the root
path (/) of the HTTP server in the server binary.
The files will be placed in a special runfiles directory by Bazel.
Use the runfiles library to find the directory that Bazel put the files in.
Remember to change the Rlocation path to match the name of your Bazel module, which is defined in your MODULE.bazel file.
```python
class MyHandler(SimpleHTTPRequestHandler):
    """Handler for the HMI server."""

    def __init__(self, *args, **kwargs):
        # Uses the runfiles library to determine where Bazel put the static files.
        r = runfiles.Create()
        logging.info("Created runfiles object.")
        self.bazel_runfiles_dir = r.Rlocation(path="<package_name>/hmi/frontend")
        logging.info(f"Runfiles directory: {self.bazel_runfiles_dir}")
        super().__init__(*args, directory=self.bazel_runfiles_dir, **kwargs)

    def do_GET(self):
        if self.path == "/":
            # Serve the HTML file.
            self.path = '/index.html'
            with open(self.bazel_runfiles_dir + self.path, "r") as f:
                file_content = f.read()
            self.send_response(200)
            self.send_header("Content-type", "text/html")
            self.end_headers()
            self.wfile.write(bytes(file_content, encoding="utf-8"))
        else:
            # Serve other static files as usual.
            super().do_GET()
```
http_server.serve_forever() blocks further program execution while the HTTP server is listening.
It must be the very last thing called in the main function; any code placed after http_server.serve_forever() will not be executed.
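If you do need code to keep running after startup (for example cleanup or shutdown hooks), one common pattern, not specific to the Intrinsic SDK, is to run the server on a background thread instead. A minimal standard-library sketch (the handler and addresses are placeholders, not this guide's server):

```python
import threading
import urllib.request
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Port 0 lets the OS pick a free port; a real HMI uses the runtime context port.
server = HTTPServer(("127.0.0.1", 0), SimpleHTTPRequestHandler)
port = server.server_address[1]

# serve_forever() runs on the background thread, so the main thread stays free.
thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()

# The main thread can now do other work, e.g. verify the server responds.
status = urllib.request.urlopen(f"http://127.0.0.1:{port}/").getcode()

# Stop the server cleanly when done.
server.shutdown()
```

For the simple HMI in this guide, blocking on serve_forever() at the end of main is sufficient.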
Serving HTTP in Go only requires the net/http package from the standard library.
Creating the correct port string for starting the server is easiest using the fmt package.
Add the required imports to the import section in server.go.

```go
import (
	// Add to the existing imports:
	"fmt"
	"net/http"
)
```
Start the HTTP server using the ListenAndServe method.
This method will listen for HTTP requests at the provided port.
The method expects the port as a string prefixed with a colon.
Add the following to the very end of your main function in server.go:
```go
if err := http.ListenAndServe(fmt.Sprintf(":%d", rc.GetHttpPort()), nil); err != nil {
	log.Fatalf("Failed to start HTTP server: %v", err)
}
```
The ListenAndServe method blocks further program execution while the HTTP server is listening.
It must be the very last thing called in the main function.
Any code placed after ListenAndServe will not be executed.
The HTTP server started here won't be serving any responses yet, but it does receive any traffic that comes in through the pre-configured Service HTTP route.
In order to get this index.html into the HMI you must now serve it from the root
path (/) of the HTTP server in the server binary.
The files will be placed in a special runfiles directory.
The binary is able to reference it from there with the help of the official Go runfiles library.
Begin by importing the runfiles library in the import section of server.go.
```go
import (
	// Add to the existing imports:
	"github.com/bazelbuild/rules_go/go/runfiles"
)
```
You need to back this Go dependency with the corresponding Bazel dependency on
the existing go_binary rule in the BUILD file.
```
go_binary(
    name = "server",
    srcs = ["server.go"],
    data = [":frontend_files"],
    deps = [
        # Add to the existing deps:
        "@io_bazel_rules_go//go/runfiles",
    ],
)
```
The runfiles library provides an Rlocation method to look up any path from the runfiles.
This path should be relative (no leading slash) and begin with the repository.
You can disregard the role of the repository for now.
Just know that the repository name you need to use is _main.
The repository is followed by the path to the file/directory you want to reference in runfiles.
This must be relative to the Bazel workspace.
For the frontend files this means the correct path including the repository prefix is _main/hmi/frontend.
Begin by using runfiles.Rlocation with the path to the frontend files to
retrieve the correct directory to serve from.
Then you can specify a handler for the root (/) HTTP path that serves files from that directory.
The http.FileServer makes this very easy.
Add the following right before the ListenAndServe call in the main function of server.go.
```go
frontendDir, err := runfiles.Rlocation("_main/hmi/frontend")
if err != nil {
	log.Fatalf("Could not determine frontend directory from runfiles: %v", err)
}
http.Handle("/", http.FileServer(http.Dir(frontendDir)))
```
Communicate with an Intrinsic platform service
The HMI provides control over a Solution through communication with Intrinsic platform services.
Different services can provide different functionality for the HMI.
The HMI Service talks to Intrinsic platform services through their gRPC API.
The API definition for each service can be found in the service's .proto file in the SDK.
The Executive service
Consider reading the full documentation for the executive service.
The executive service provides the ability to run processes in the form of behavior trees. It also contains methods for stopping, pausing, resuming and stepping through these processes and provides information (such as errors) about each execution.
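As an illustration, the ListOperations call used later in this guide appears in the executive service definition roughly like this. This is an illustrative fragment based on the standard google.longrunning operations API, not the authoritative proto; consult the SDK for the real definition:

```
// Illustrative excerpt of the executive service definition.
service ExecutiveService {
  // Lists operations, i.e. processes known to the executive.
  rpc ListOperations(google.longrunning.ListOperationsRequest)
      returns (google.longrunning.ListOperationsResponse) {}
}
```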
This guide utilizes the executive service to allow the HMI to display some very basic information to illustrate its use. The example code available from GitHub expands on this by showing how to start and stop processes as well as how to view execution status.
Establish a connection
Communication with any gRPC service requires a client for it. The client is automatically generated from the service definition and can be used directly in any supported language, including Python and Go.
Clients for gRPC services are created using a connection. This connection is a network channel to a certain address and port where the relevant service should be listening. Connections also specify the required credentials for the service. An HMI Service must connect to any Intrinsic platform services using the cluster-internal address of the cluster ingress. Don't worry if these terms are not very meaningful to you; all you need to know is that the services you can connect to will be available at a specific address.
Cluster services are available from HMI services at istio-ingressgateway.app-ingress.svc.cluster.local:80.
Connecting to Intrinsic platform services
from an HMI Service does not require credentials (insecure credentials)
because the connection is internal to the on-prem device.
- Python
- Go
Import the grpc library along with the generated stubs for the executive service.

```python
import grpc

from intrinsic.executive.proto import executive_service_pb2_grpc
```

Add the cluster ingress address, then create a function that returns the executive stub. A stub is used to call gRPC service methods.

```python
GRPC_INGRESS_ADDRESS = "istio-ingressgateway.app-ingress.svc.cluster.local:80"


def create_executive_stub(connect_timeout: float):
    channel = grpc.insecure_channel(GRPC_INGRESS_ADDRESS)
    grpc.channel_ready_future(channel).result(timeout=connect_timeout)
    return executive_service_pb2_grpc.ExecutiveServiceStub(channel)
```
Begin by establishing a gRPC connection using grpc.NewClient.
This receives both the address and the credentials.
As you have done multiple times now, add the imports for the Go packages you're
going to be using in the import section of server.go.
```go
import (
	// Add to the existing imports:
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	esvcgrpcpb "intrinsic/executive/proto/executive_service_go_proto"
)
```
It can be useful to create a const for the address of the cluster ingress.
```go
const (
	// Add to the existing const:
	ingressAddress = "istio-ingressgateway.app-ingress.svc.cluster.local:80"
)
```
Now you're ready to set up the gRPC connection and service client.
The connection is a network channel to the gRPC service.
The ExecutiveServiceClient uses this channel to communicate with the service.
Create the connection using grpc.NewClient and then use it to construct an ExecutiveServiceClient.
Add the connection code before the call to ListenAndServe.
```go
conn, err := grpc.NewClient(ingressAddress, grpc.WithTransportCredentials(insecure.NewCredentials()))
if err != nil {
	log.Fatalf("Failed to create client connection: %v", err)
}
executiveClient := esvcgrpcpb.NewExecutiveServiceClient(conn)
```
The client provides methods for all the operations defined inside the service. You can find all available methods in the service definition proto file.
Provide a REST API
The HMI frontend communicates with the cluster through an HTTP API. The HTTP API connects the browser frontend with Intrinsic platform services since the frontend cannot communicate with gRPC services (which all Intrinsic platform services are) directly.
Providing an API is as simple as choosing a path and writing a lightweight handler function that performs some logic and returns a response. You will be adding a handler in the section on communication with an Intrinsic platform service involving the executive service.
Each HTTP handler serves as a bridge between the frontend (which can call the HTTP handler) and the Intrinsic platform services that are able to control the deployed Solution.
There are multiple ways to implement handlers, and you may have as many handlers as you like.
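To make the handler idea concrete, here is a self-contained sketch of a tiny JSON endpoint using only the standard library. The /api/health path is hypothetical and not part of this guide's HMI; the real handlers live in MyHandler and talk to Intrinsic platform services:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class ApiHandler(BaseHTTPRequestHandler):
    """Serves a single JSON endpoint; all other paths return 404."""

    def do_GET(self):
        if self.path == "/api/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)


# Start on a free port and query the endpoint once.
server = HTTPServer(("127.0.0.1", 0), ApiHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
reply = json.loads(urllib.request.urlopen(f"http://127.0.0.1:{port}/api/health").read())
server.shutdown()
```

A frontend would call such an endpoint with a plain fetch("/api/health") and render the JSON it gets back.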
With the HTTP server up and running you can now serve a frontend to people that open the HTTP route for the HMI Service in their browser. The frontend is served by the same HTTP server as the API described in the section above.
Use service methods
- Python
- Go
Once you have a stub for the service you're trying to communicate with, you can begin using the service methods for that service. This is done by simply calling one of the methods available on the stub object. For the HMI service, service methods should usually be called inside HTTP handlers. This means that the service is called only when the HMI frontend (i.e. the user) makes a specific request.
Every call to a gRPC service using Python requires a stub and a request message. The stub serves as a client-side representation of a gRPC service. The request message is usually specific to each operation and contains all the information the service needs to process the request.
To provide an example, consider the ListOperations RPC on the ExecutiveService.
It requires a context and a google.longrunning.ListOperationsRequest.
The request message is a proto that can be created using the language-specific implementation generated from its definition.
When handling an HTTP request, you can write plain text or something more advanced like JSON depending on your needs. Below is a JSON example.
Import the json_format library to be able to convert protos to JSON.

```python
from google.longrunning.operations_pb2 import ListOperationsRequest  # type: ignore
from google.protobuf import json_format
```
Add the following case to the do_GET method of the MyHandler class.
This adds the first endpoint to the REST API.
When an HTTP GET request is made on the path ../api/executive/operations, the HMI Service will send a ListOperationsRequest
to the executive using the executive service stub.
The executive responds with a proto, which is converted to json.
This json is then sent as a response to the HTTP GET request.
```python
        elif self.path == '/api/executive/operations':
            # Lists all active operations in the executive.
            executive = create_executive_stub(60)
            response_proto = executive.ListOperations(request=ListOperationsRequest())
            for operation in response_proto.operations:
                operation.ClearField('metadata')
            response_json = json_format.MessageToJson(response_proto)
            logging.info('Operations in the executive: %s', response_json)
            self.send_response(200)
            self.send_header('Content-type', 'application/json')
            self.end_headers()
            self.wfile.write(response_json.encode())
```
Add the relevant dependencies to the existing py_binary rule in your
BUILD file to satisfy the build requirements.
```
py_binary(
    name = "server",
    srcs = ["server.py"],
    main = "server.py",
    data = [":frontend_files"],
    deps = [
        # Add to the existing deps:
        "@com_google_googleapis//google/longrunning:operations_py_proto",
        "@ai_intrinsic_sdks//intrinsic/executive/proto:executive_service_py_pb2_grpc",
    ],
)
```
Once you have a client for the service you're trying to communicate with, you can begin using the service methods for that service. This is done by simply calling one of the methods available on the client object. For the HMI Service, service methods should usually be called inside HTTP handlers. This means that the service is called only when the HMI frontend (i.e. the user) makes a specific request.
Every call to a gRPC service using Go requires a context and a request message. The context is used to transport metadata for the request, though this guide will not make use of this specifically. The request message is usually specific to each operation and contains all the information the service needs to process the request.
To provide an example, consider the ListOperations RPC on the ExecutiveService.
It requires a context and a google.longrunning.ListOperationsRequest.
Since this service method is called inside an HTTP handler you must use the context from the incoming HTTP request.
The request message is a proto that can be created using the language-specific implementation generated from its definition.
When handling an HTTP request, you must write to the http.ResponseWriter that
is provided to the handler function rather than logging.
You can write plain text or something more advanced like JSON depending on your needs.
Add the imports for JSON handling and the long-running operations gRPC API used by the executive
service to the import section in server.go.
import (
    // Add to the existing imports:
    "encoding/json"

    lropb "cloud.google.com/go/longrunning/autogen/longrunningpb"
)
For the Python version, the corresponding imports go into hmi_service.py:

from google.longrunning.operations_pb2 import ListOperationsRequest
from google.protobuf import json_format
Next you will add the HTTP handler that actually communicates with the executive
service using the gRPC client.
Do this by calling http.HandleFunc with the desired API path the HMI will be using.
Communication with the executive service happens inside the handler implementation.
Place the handler before the ListenAndServe call in the main function.
http.HandleFunc("GET /api/executive/operations", func(w http.ResponseWriter, r *http.Request) {
    response, err := executiveClient.ListOperations(r.Context(), &lropb.ListOperationsRequest{
        PageSize: 1,
    })
    if err != nil {
        w.WriteHeader(http.StatusInternalServerError)
        w.Write([]byte(fmt.Sprintf("Failed to list operations: %v", err)))
        return
    }
    // Marshal the proto response to JSON.
    b, err := json.Marshal(response.GetOperations())
    if err != nil {
        w.WriteHeader(http.StatusInternalServerError)
        w.Write([]byte(fmt.Sprintf("Failed to encode response: %v", err)))
        return
    }
    w.Header().Set("Content-Type", "application/json")
    w.Write(b)
})
Remember that the ListenAndServe method must be placed at the end of the
main function, after registering any handlers.
Add the relevant dependencies to the existing go_binary rule in your
BUILD file to satisfy the build requirements.
go_binary(
    name = "server",
    srcs = ["server.go"],
    deps = [
        # Add to the existing deps:
        "@ai_intrinsic_sdks//intrinsic/executive/proto:executive_service_go_proto",
        "@com_google_cloud_go_longrunning//autogen/longrunningpb",
        "@org_golang_google_grpc//:go_default_library",
        "@org_golang_google_grpc//credentials/insecure",
    ],
)
Running the binary locally will produce errors because the connection to the service using the specified address is only possible inside the on-prem device cluster.
You may use any of the methods provided on the client and you can freely combine multiple clients to perform actions.
Call from the frontend
There is now an HTTP handler at /api/executive/operations that will make
a call to the executive service when invoked.
The frontend can call this HTTP handler and parse/print the response.
The frontend can call the HTTP handlers at a relative path because they are registered in the same HTTP server under subpaths. The example frontend below uses the HTTP API to retrieve the ID of the first operation returned from the executive service.
- Python
- Go
You can add the required JavaScript to your HTML file as shown below, or you can create a script.js
file in the frontend folder and import it into the HTML file.
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <title>HMI</title>
</head>
<body>
  <h1>This is the HMI frontend.</h1>
  <div>
    <button id="load-operation-id">Load operation ID</button>
  </div>
  <p>Latest operation ID: <strong id="operation-id">(press button to load)</strong></p>
  <script>
    const loadOperationIdBtn = document.getElementById("load-operation-id");
    const operationIdEl = document.getElementById("operation-id");
    loadOperationIdBtn.addEventListener("click", async () => {
      operationIdEl.textContent = await fetchLatestOperationId();
    });

    async function fetchLatestOperationId() {
      try {
        const res = await fetch("api/executive/operations");
        const s = await res.json();
        if (Array.isArray(s.operations) && s.operations.length > 0) {
          return s.operations[0].name;
        } else {
          return "No operation ID found";
        }
      } catch (e) {
        console.error("Failed to get operations:", e);
        return "(error, see console for details)";
      }
    }
  </script>
</body>
</html>
<!DOCTYPE html>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>HMI</title>
<h1>This is the HMI frontend.</h1>
<div>
  <button id="load-operation-id">Load operation ID</button>
</div>
<p>Latest operation ID: <strong id="operation-id">(press button to load)</strong></p>
<script>
  const loadOperationIdBtn = document.getElementById("load-operation-id");
  const operationIdEl = document.getElementById("operation-id");
  loadOperationIdBtn.addEventListener("click", async () => {
    operationIdEl.textContent = await fetchLatestOperationId();
  });

  async function fetchLatestOperationId() {
    try {
      const res = await fetch("api/executive/operations");
      const s = await res.json();
      if (Array.isArray(s) && s.length > 0) {
        return s[0].name;
      } else {
        return "No operation ID found";
      }
    } catch (e) {
      console.error("Failed to get operations:", e);
      return "(error, see console for details)";
    }
  }
</script>
Any HTTP handler in the HMI can be called this way. The response needs to be parsed appropriately based on what kind of data the handler returns.
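Note that the two backends return differently shaped JSON: the Python handler returns the full proto as an object with an operations field, while the Go handler returns the operations array directly. As an illustration (a hypothetical helper, not part of the guide's code), the same parsing decision the two frontends make can be sketched in Python:

```python
import json

def first_operation_name(body: str) -> str:
    """Extracts the first operation name from either response shape."""
    data = json.loads(body)
    # The Python handler returns {"operations": [...]}; the Go handler
    # returns the list of operations directly.
    operations = data["operations"] if isinstance(data, dict) else data
    return operations[0]["name"] if operations else "No operation ID found"

print(first_operation_name('{"operations": [{"name": "operations/42"}]}'))  # operations/42
print(first_operation_name('[{"name": "operations/42"}]'))                  # operations/42
```

This is why the two fetchLatestOperationId implementations above differ in how they inspect the parsed JSON.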
Installation
You can now install the HMI Service to a Solution. Installing will make the
Service available to add from the Services panel in Flowstate.
Begin by building the HMI. Do this by running a bazel build for the
intrinsic_service in the BUILD file.
bazel build //hmi:hmi_service
The output of the bazel build will show where the bundle has been written.
This should be something like bazel-bin/hmi/hmi_service.bundle.tar.
You can now use inctl to install the Service. Replace ORGANIZATION_NAME
with your organization name and then run the command.
inctl asset install bazel-bin/hmi/hmi_service.bundle.tar \
--org=ORGANIZATION_NAME \
--address="workcell.lan:17080"
The Service image will be uploaded directly to the on-prem device. Once this is complete, you will be presented with a message like this:
Finished installing the Service: ...
Now open the Solution in Flowstate and follow these steps:
- Find the Services tab on the right side.
- Select Add Service. The HMI Service you just installed should be shown in the list with the display name from the metadata in the Service manifest.
- Select the HMI Service and click Add.
- You will be prompted for an instance name. This can be any unique identifier you like. Use the name hmi.
- Select Apply to add the HMI to the Solution. This should be very quick.
The HMI Service will start up and should now be available. Follow the steps in the next section to view and try it.
Access the HMI
The HMI is now installed, added to your Solution, and can be accessed in any web browser.
The HMI Service is available on the on-prem device at /ext/services/{name}/,
where {name} is the name chosen during Service deployment in Flowstate.
If your on-prem device is available at workcell.lan:17080 and you chose hmi as
the name when adding the Service, the HMI can be accessed in any web browser
at workcell.lan:17080/ext/services/hmi/.
Ensure that you add a trailing slash (/) to the end of the HMI URL in the browser.
Otherwise network requests may fail.
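The trailing slash matters because the frontend fetches the API with a relative URL. A quick sketch using Python's urllib.parse shows how a browser would resolve the relative path in each case:

```python
from urllib.parse import urljoin

# With the trailing slash, relative requests stay under the HMI's path.
with_slash = urljoin('http://workcell.lan:17080/ext/services/hmi/', 'api/executive/operations')
print(with_slash)  # http://workcell.lan:17080/ext/services/hmi/api/executive/operations

# Without it, 'hmi' is treated as the last path segment and replaced,
# so the request misses the HMI Service entirely.
without_slash = urljoin('http://workcell.lan:17080/ext/services/hmi', 'api/executive/operations')
print(without_slash)  # http://workcell.lan:17080/ext/services/api/executive/operations
```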
Try pressing the Load operation ID button in the HMI. If you open Flowstate and
run any process, you can press the button again and should see the ID changing.
Congratulations, your HMI is working and communicating with Flowstate!
Next steps
The HMI you just built is very basic. Intrinsic provides code for a more advanced example HMI on GitHub.
The HMI example on GitHub offers much more functionality:
- list all saved Processes of a running Solution
- start a Process
- stop execution
- view execution status (including errors)
- query and modify the state of Service instances in a Solution
It also offers some guidance on local testing of HMIs for faster iteration.