8 Commits

Author | SHA1 | Message | Date
JobvAlewijk | 8695f5204f | feat: added basic list of 10 gestures (ref: N25B-402) | 2025-12-18 18:17:06 +01:00
Twirre | 1e77548622 | Merge branch 'feat/ri-gestures' into 'dev': feat: gestures to ri (see merge request ics/sp/2025/n25b/pepperplus-ri!21) | 2025-12-16 08:35:26 +00:00
JobvAlewijk | a8fe887c48 | feat: gestures to ri | 2025-12-16 08:35:26 +00:00
JobvAlewijk | df702f1e44 | Merge branch 'feat/environment-variables' into 'dev': Add environment variables (see merge request ics/sp/2025/n25b/pepperplus-ri!22) | 2025-12-13 13:43:52 +00:00
JobvAlewijk | a2cb2ae90a | Merge branch 'dev' of ssh://git.science.uu.nl/ics/sp/2025/n25b/pepperplus-ri into feat/environment-variables | 2025-12-13 14:43:02 +01:00
Luijkx,S.O.H. (Storm) | 017dbfaa28 | Merge branch 'docs/extract-installation-instructions' into 'dev': Improve installation instructions (see merge request ics/sp/2025/n25b/pepperplus-ri!20) | 2025-12-11 10:58:56 +00:00
Twirre | 9ff1d9a4d3 | Improve installation instructions | 2025-12-11 10:58:56 +00:00
Twirre Meulenbelt | 3a259c1170 | feat: add environment variables and docs (ref: N25B-352) | 2025-12-10 13:28:13 +01:00
16 changed files with 1085 additions and 148 deletions

.env.example Normal file (25 changed lines)

@@ -0,0 +1,25 @@
# Example .env file. To use it, make a copy named ".env" (i.e. remove the ".example" suffix), then edit the values.
# To make a variable apply, uncomment it (remove the "#" in front of the line).
# First, some variables that are likely to be configured:
# The hostname or IP address of the Control Backend.
AGENT__CONTROL_BACKEND_HOST=localhost
# Next, variables that are unlikely to need configuring; you can probably ignore these:
#AGENT__ACTUATION_RECEIVER_PORT=
#AGENT__MAIN_RECEIVER_PORT=
#AGENT__VIDEO_SENDER_PORT=
#AGENT__AUDIO_SENDER_PORT=
#VIDEO__CAMERA_INDEX=
#VIDEO__RESOLUTION=
#VIDEO__COLOR_SPACE=
#VIDEO__FPS=
#VIDEO__STREAM_NAME=
#VIDEO__IMAGE_BUFFER=
#AUDIO__SAMPLE_RATE=
#AUDIO__CHUNK_SIZE=
#AUDIO__CHANNELS=

.gitignore vendored (2 changed lines)

@@ -220,3 +220,5 @@ __marimo__/
# Docs
docs/*
!docs/conf.py
!docs/installation/
!docs/installation/**

README.md (113 changed lines)

@@ -8,90 +8,21 @@ This is an implementation for the Pepper robot, using the Pepper SDK and Python
## Installation
### Linux (or WSL)
- [Linux](./docs/installation/linux.md)
- [macOS](./docs/installation/macos.md)
- [Windows](./docs/installation/windows.md)
Start off by installing [Pyenv](https://github.com/pyenv/pyenv?tab=readme-ov-file#installation), walking through the steps outlined there (be sure to also add it to PATH). Also install the [Python build requirements](https://github.com/pyenv/pyenv/wiki#suggested-build-environment). Afterwards, install Python 2.7 and activate it for your current shell:
### Git Hooks
To activate automatic linting, formatting, branch name checks and commit message checks, run (after installing requirements):
```bash
pyenv install 2.7
pyenv shell 2.7
pre-commit install
pre-commit install --hook-type commit-msg
```
You can check that this worked by typing
```bash
python -V
```
This should return `Python 2.7.18`.
Next, `cd` into this repository and create (and activate) a virtual environment:
```bash
cd <path to project>/
python -m pip install virtualenv
python -m virtualenv .venv
source .venv/bin/activate
```
We depend on PortAudio for the `pyaudio` package, so install it with:
```bash
sudo apt install -y portaudio19-dev
```
On WSL, also install:
```bash
sudo apt install -y libasound2-plugins
```
Install the required packages with
```bash
pip install -r requirements.txt
```
Now we need to manually install the NaoQi SDK into our virtual environment. Begin by downloading the SDK:
```bash
wget https://community-static.aldebaran.com/resources/2.5.10/Python%20SDK/pynaoqi-python2.7-2.5.7.1-linux64.tar.gz
```
Next, move into the `site-packages` directory and extract the file you just downloaded:
```bash
cd .venv/lib/python2.7/site-packages/
tar xvfz <path to SDK>/pynaoqi-python2.7-2.5.7.1-linux64.tar.gz
rm <path to SDK>/pynaoqi-python2.7-2.5.7.1-linux64.tar.gz
```
Lastly, we need to inform our virtual environment where to find our newly installed package:
```bash
echo <path to project>/.venv/lib/python2.7/site-packages/pynaoqi-python2.7-2.5.7.1-linux64/lib/python2.7/site-packages/ > pynaoqi-python2.7.pth
```
That's it! Verify that it works with
```bash
python -c "import qi; print(qi)"
```
You should now be able to run this project.
### macOS
Similar to Linux, but don't bother installing `pyenv` as it won't be able to install Python 2 on Apple Silicon. Instead, install Python 2.7.18 from the [Python website](https://www.python.org/downloads/release/python-2718/).
Create the virtual environment as described above in the Linux section. Stop at the point where it shows you how to download the NaoQi SDK. Instead, use:
```shell
curl -OL https://community-static.aldebaran.com/resources/2.5.10/Python%20SDK/pynaoqi-python2.7-2.5.7.1-mac64.tar.gz
```
Then resume the steps from above.
## Usage
@@ -108,10 +39,15 @@ On Windows:
$env:PYTHONPATH="src"; python -m robot_interface.main
```
With both, if you want to connect to the actual robot (or simulator), pass the `--qi-url` argument.
### Program Arguments
If you want to connect to the actual robot (or simulator), pass the `--qi-url` argument.
There's also a `--microphone` argument for choosing which microphone to use. If not given, the program will try the default microphone. If you don't know the name of the microphone, pass the argument with any value, and it will list the names of the available microphones.
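For example, a typical invocation on Linux might look like this (a sketch; the robot address and microphone name are placeholders, and 9559 is the usual NaoQi port):
```bash
PYTHONPATH=src python -m robot_interface.main --qi-url tcp://192.168.1.10:9559 --microphone "USB Microphone"
```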
### Environment Variables
You may use environment variables to change settings. Make a copy of the [`.env.example`](.env.example) file, name it `.env` and put it in the root directory. The file itself describes how to do the configuration.
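For example, a minimal `.env` that only points the agent at a remote Control Backend could contain just (the address is a placeholder):
```bash
AGENT__CONTROL_BACKEND_HOST=192.168.1.42
```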
## Testing
@@ -134,23 +70,6 @@ For coverage, add `--cov=robot_interface` as an argument to `pytest`.
## Git Hooks
To activate automatic linting, formatting, branch name checks and commit message checks, run (after installing requirements):
```bash
pre-commit install
pre-commit install --hook-type commit-msg
```
You might get an error along the lines of `Can't install pre-commit with core.hooksPath set`. To fix this, simply unset the hooksPath by running:
```bash
git config --local --unset core.hooksPath
```
Then run the pre-commit install commands again.
## Documentation
Generate documentation web pages using:

docs/installation/linux.md Normal file

@@ -0,0 +1,75 @@
# Installation
Of the Pepper Robot Interface on Linux (or WSL).
Start off by installing [Pyenv](https://github.com/pyenv/pyenv?tab=readme-ov-file#installation), walking through the steps outlined there (be sure to also add it to PATH). Also install the [Python build requirements](https://github.com/pyenv/pyenv/wiki#suggested-build-environment). Afterwards, install Python 2.7 and activate it for your current shell:
```bash
pyenv install 2.7
pyenv shell 2.7
```
You can check that this worked by typing
```bash
python -V
```
This should return `Python 2.7.18`.
Next, `cd` into this repository and create (and activate) a virtual environment:
```bash
cd <path to project>/
python -m pip install virtualenv
python -m virtualenv .venv
source .venv/bin/activate
```
We depend on PortAudio for the `pyaudio` package, so install it with:
```bash
sudo apt install -y portaudio19-dev
```
On WSL, also install:
```bash
sudo apt install -y libasound2-plugins
```
Install the required packages with
```bash
pip install -r requirements.txt
```
Now we need to manually install the NaoQi SDK into our virtual environment. Begin by downloading the SDK:
```bash
wget https://community-static.aldebaran.com/resources/2.5.10/Python%20SDK/pynaoqi-python2.7-2.5.7.1-linux64.tar.gz
```
Next, move into the `site-packages` directory and extract the file you just downloaded:
```bash
cd .venv/lib/python2.7/site-packages/
tar xvfz <path to SDK>/pynaoqi-python2.7-2.5.7.1-linux64.tar.gz
rm <path to SDK>/pynaoqi-python2.7-2.5.7.1-linux64.tar.gz
```
Lastly, we need to inform our virtual environment where to find our newly installed package:
```bash
echo <path to project>/.venv/lib/python2.7/site-packages/pynaoqi-python2.7-2.5.7.1-linux64/lib/python2.7/site-packages/ > pynaoqi-python2.7.pth
```
That's it! Verify that it works with
```bash
python -c "import qi; print(qi)"
```
You should now be able to run this project.
See the README for how to run.

docs/installation/macos.md Normal file (106 changed lines)

@@ -0,0 +1,106 @@
# Installation
Of the Pepper Robot Interface on macOS.
## Python 2.7
Install Python 2.7.18 from the [Python website](https://www.python.org/downloads/release/python-2718/).
Check that it worked by executing
```shell
python2 -V
```
This should return `Python 2.7.18`.
## Virtual Environment
Next, `cd` into this repository and create (and activate) a virtual environment:
```shell
cd /path/to/project/
python2 -m pip install virtualenv
python2 -m virtualenv .venv
source .venv/bin/activate
```
We depend on PortAudio for the `pyaudio` package. If on Intel, run `brew install portaudio`. If on Apple Silicon, compile manually using the steps described in [the YouTrack article](https://utrechtuniversity.youtrack.cloud/articles/N25B-A-22/Install-PyAudio-for-Python-2-on-Apple-Silicon).
Then install the required Python packages with
```shell
pip install -r requirements.txt
```
## NaoQi SDK
We need to manually install the NaoQi SDK into our virtual environment. There are two options:
1. Install a newer version (2.8), which will make running easier, but compatibility is uncertain.
2. Install the version expected by the robot (2.5). This will complicate running slightly.
### Option 1
Download the SDK from [twirre.io](https://twirre.io/files/pynaoqi-python2.7-2.8.6.23-mac64-20191127_144231.tar.gz), or find one on the Aldebaran website, or an archived version on Web Archive.
Extract it to `/path/to/project/.venv/lib/python2.7/site-packages/`.
We need to inform our virtual environment where to find our newly installed package:
```bash
echo "/path/to/project/.venv/lib/python2.7/site-packages/pynaoqi-python2.7-2.8.6.23-mac64-20191127_144231/lib/python2.7/site-packages/" > /path/to/project/.venv/lib/python2.7/site-packages/pynaoqi-python2.7.pth
```
Now continue with [verifying](#verifying).
### Option 2
This method of installation requires setting the `DYLD_LIBRARY_PATH` environment variable before running. How to do this is explained below.
Download the SDK from [twirre.io](https://twirre.io/files/pynaoqi-2.5.7.1-mac64-deps.tar.gz). This is a modified version of the one from Aldebaran that includes the required Choregraphe dependencies.
Extract it to `/path/to/project/.venv/lib/python2.7/site-packages/`.
We need to inform our virtual environment where to find our newly installed package:
```shell
echo "/path/to/project/.venv/lib/python2.7/site-packages/pynaoqi-python2.7-2.5.7.1-mac64/lib/python2.7/site-packages/" > /path/to/project/.venv/lib/python2.7/site-packages/pynaoqi-python2.7.pth
```
Now, anytime before running you need to set the `DYLD_LIBRARY_PATH` environment variable.
```shell
export DYLD_LIBRARY_PATH="/path/to/project/.venv/lib/python2.7/site-packages/pynaoqi-python2.7-2.5.7.1-mac64/choregraphe_lib:${DYLD_LIBRARY_PATH}"
```
You may want to simplify environment activation with a script `activate.sh` like:
```shell
#!/bin/zsh
export DYLD_LIBRARY_PATH="/path/to/project/.venv/lib/python2.7/site-packages/pynaoqi-python2.7-2.5.7.1-mac64/choregraphe_lib:${DYLD_LIBRARY_PATH}"
source .venv/bin/activate
```
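Because the script exports a variable and activates the venv, it must be sourced rather than executed, so that the changes apply to your current shell:
```shell
source activate.sh
```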
[Verify](#verifying) that it works.
## Verifying
Verify that the NaoQi SDK installation works with
```bash
python -c "import qi; print(qi)"
```
If so, you should now be able to run this project.
See the README for how to run.

docs/installation/windows.md Normal file

@@ -0,0 +1,44 @@
# Installation
Of the Pepper Robot Interface on Windows.
Install Python 2.7.18 from [the Python website](https://www.python.org/downloads/release/python-2718/) and choose the x86-64 installer (at the bottom of the page).
To see if it worked:
```shell
py -2 -V
```
This should return `Python 2.7.18`.
Next, `cd` into this repository and create (and activate) a virtual environment:
```bash
cd <path to project>/
py -2 -m pip install virtualenv
py -2 -m virtualenv .venv
.\.venv\Scripts\activate
```
Install the required packages with
```bash
pip install -r requirements.txt
```
Now we need to manually install the NaoQi SDK into our virtual environment. Download the SDK from [Aldebaran](https://community-static.aldebaran.com/resources/2.5.5/sdk-python/pynaoqi-python2.7-2.5.5.5-win32-vs2013.zip), [Web Archive](https://web.archive.org/web/20240120111043/https://community-static.aldebaran.com/resources/2.5.5/sdk-python/pynaoqi-python2.7-2.5.5.5-win32-vs2013.zip) or [twirre.io](https://twirre.io/files/pynaoqi-python2.7-2.8.6.23-win64-vs2015-20191127_152649.zip).
Extract to `.\.venv\Lib\site-packages`.
Create a file `.venv\Lib\site-packages\pynaoqi-python2.7.pth` and put the full path of `pynaoqi-python2.7-2.8.6.23-win64-vs2015-20191127_152649\lib\python2.7\Lib\site-packages` in it.
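For example, from a Command Prompt in the project root (a sketch; adjust the project path, and the folder name if you downloaded a different SDK version):
```shell
echo C:\path\to\project\.venv\Lib\site-packages\pynaoqi-python2.7-2.8.6.23-win64-vs2015-20191127_152649\lib\python2.7\Lib\site-packages > .venv\Lib\site-packages\pynaoqi-python2.7.pth
```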
Test if it worked by running:
```bash
python -c "import qi; print(qi)"
```
You should now be able to run this project.
See the README for how to run.

requirements.txt

@@ -6,3 +6,4 @@ pytest-cov<3.0.0
sphinx
sphinx_rtd_theme
pre-commit
python-dotenv

src/robot_interface/core/config.py

@@ -1,95 +1,101 @@
from __future__ import unicode_literals
from robot_interface.utils.get_config import get_config
class AgentSettings(object):
"""
Agent port configuration.
:ivar actuating_receiver_port: Port for receiving actuation commands.
:vartype actuating_receiver_port: int
:ivar main_receiver_port: Port for receiving main messages.
:ivar control_backend_host: Hostname of the control backend, defaults to "localhost".
:vartype control_backend_host: string
:ivar actuation_receiver_port: Port for receiving actuation commands, defaults to 5557.
:vartype actuation_receiver_port: int
:ivar main_receiver_port: Port for receiving main messages, defaults to 5555.
:vartype main_receiver_port: int
:ivar video_sender_port: Port used for sending video frames.
:ivar video_sender_port: Port used for sending video frames, defaults to 5556.
:vartype video_sender_port: int
:ivar audio_sender_port: Port used for sending audio data.
:ivar audio_sender_port: Port used for sending audio data, defaults to 5558.
:vartype audio_sender_port: int
"""
def __init__(
self,
actuating_receiver_port=5557,
main_receiver_port=5555,
video_sender_port=5556,
audio_sender_port=5558,
self,
control_backend_host=None,
actuation_receiver_port=None,
main_receiver_port=None,
video_sender_port=None,
audio_sender_port=None,
):
self.actuating_receiver_port = actuating_receiver_port
self.main_receiver_port = main_receiver_port
self.video_sender_port = video_sender_port
self.audio_sender_port = audio_sender_port
self.control_backend_host = get_config(control_backend_host, "AGENT__CONTROL_BACKEND_HOST", "localhost")
self.actuation_receiver_port = get_config(actuation_receiver_port, "AGENT__ACTUATION_RECEIVER_PORT", 5557, int)
self.main_receiver_port = get_config(main_receiver_port, "AGENT__MAIN_RECEIVER_PORT", 5555, int)
self.video_sender_port = get_config(video_sender_port, "AGENT__VIDEO_SENDER_PORT", 5556, int)
self.audio_sender_port = get_config(audio_sender_port, "AGENT__AUDIO_SENDER_PORT", 5558, int)
class VideoConfig(object):
"""
Video configuration constants.
:ivar camera_index: Index of the camera used.
:ivar camera_index: Index of the camera used, defaults to 0.
:vartype camera_index: int
:ivar resolution: Video resolution mode.
:ivar resolution: Video resolution mode, defaults to 2.
:vartype resolution: int
:ivar color_space: Color space identifier.
:ivar color_space: Color space identifier, defaults to 11.
:vartype color_space: int
:ivar fps: Frames per second of the video stream.
:ivar fps: Frames per second of the video stream, defaults to 15.
:vartype fps: int
:ivar stream_name: Name of the video stream.
:ivar stream_name: Name of the video stream, defaults to "Pepper Video".
:vartype stream_name: str
:ivar image_buffer: Internal buffer size for video frames.
:ivar image_buffer: Internal buffer size for video frames, defaults to 6.
:vartype image_buffer: int
"""
def __init__(
self,
camera_index=0,
resolution=2,
color_space=11,
fps=15,
stream_name="Pepper Video",
image_buffer=6,
camera_index=None,
resolution=None,
color_space=None,
fps=None,
stream_name=None,
image_buffer=None,
):
self.camera_index = camera_index
self.resolution = resolution
self.color_space = color_space
self.fps = fps
self.stream_name = stream_name
self.image_buffer = image_buffer
self.camera_index = get_config(camera_index, "VIDEO__CAMERA_INDEX", 0, int)
self.resolution = get_config(resolution, "VIDEO__RESOLUTION", 2, int)
self.color_space = get_config(color_space, "VIDEO__COLOR_SPACE", 11, int)
self.fps = get_config(fps, "VIDEO__FPS", 15, int)
self.stream_name = get_config(stream_name, "VIDEO__STREAM_NAME", "Pepper Video")
self.image_buffer = get_config(image_buffer, "VIDEO__IMAGE_BUFFER", 6, int)
class AudioConfig(object):
"""
Audio configuration constants.
:ivar sample_rate: Audio sampling rate in Hz.
:ivar sample_rate: Audio sampling rate in Hz, defaults to 16000.
:vartype sample_rate: int
:ivar chunk_size: Size of audio chunks to capture/process.
:ivar chunk_size: Size of audio chunks to capture/process, defaults to 512.
:vartype chunk_size: int
:ivar channels: Number of audio channels.
:ivar channels: Number of audio channels, defaults to 1.
:vartype channels: int
"""
def __init__(self, sample_rate=16000, chunk_size=512, channels=1):
self.sample_rate = sample_rate
self.chunk_size = chunk_size
self.channels = channels
def __init__(self, sample_rate=None, chunk_size=None, channels=None):
self.sample_rate = get_config(sample_rate, "AUDIO__SAMPLE_RATE", 16000, int)
self.chunk_size = get_config(chunk_size, "AUDIO__CHUNK_SIZE", 512, int)
self.channels = get_config(channels, "AUDIO__CHANNELS", 1, int)
class MainConfig(object):
"""
Main system configuration.
:ivar poll_timeout_ms: Timeout for polling events, in milliseconds.
:ivar poll_timeout_ms: Timeout for polling events, in milliseconds, defaults to 100.
:vartype poll_timeout_ms: int
:ivar max_handler_time_ms: Maximum allowed handler time, in milliseconds.
:ivar max_handler_time_ms: Maximum allowed handler time, in milliseconds, defaults to 50.
:vartype max_handler_time_ms: int
"""
def __init__(self, poll_timeout_ms=100, max_handler_time_ms=50):
self.poll_timeout_ms = poll_timeout_ms
self.max_handler_time_ms = max_handler_time_ms
def __init__(self, poll_timeout_ms=None, max_handler_time_ms=None):
self.poll_timeout_ms = get_config(poll_timeout_ms, "MAIN__POLL_TIMEOUT_MS", 100, int)
self.max_handler_time_ms = get_config(max_handler_time_ms, "MAIN__MAX_HANDLER_TIME_MS", 50, int)
class Settings(object):
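To make the override behaviour concrete, a small sketch (hypothetical values; explicit constructor arguments win over environment variables, which win over the built-in defaults):
```python
import os

from robot_interface.core.config import AgentSettings

# The environment (or .env) value is used when no explicit argument is given.
os.environ["AGENT__MAIN_RECEIVER_PORT"] = "6000"
assert AgentSettings().main_receiver_port == 6000

# An explicit argument still wins over the environment.
assert AgentSettings(main_receiver_port=7000).main_receiver_port == 7000
```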

src/robot_interface/endpoints/actuation_receiver.py

@@ -7,6 +7,7 @@ from robot_interface.endpoints.receiver_base import ReceiverBase
from robot_interface.state import state
from robot_interface.core.config import settings
from robot_interface.endpoints.gesture_settings import GestureTags
class ActuationReceiver(ReceiverBase):
@@ -21,12 +22,16 @@ class ActuationReceiver(ReceiverBase):
:ivar _tts_service: The text-to-speech service object from the Qi session.
:vartype _tts_service: qi.Session | None
:ivar _animation_service: The animation/gesture service object from the Qi session.
:vartype _animation_service: qi.Session | None
"""
def __init__(self, zmq_context, port=settings.agent_settings.actuating_receiver_port):
def __init__(self, zmq_context, port=settings.agent_settings.actuation_receiver_port):
super(ActuationReceiver, self).__init__("actuation")
self.create_socket(zmq_context, zmq.SUB, port)
self.socket.setsockopt_string(zmq.SUBSCRIBE, u"") # Causes block if given in options
self._tts_service = None
self._animation_service = None
def _handle_speech(self, message):
"""
@@ -54,7 +59,54 @@ class ActuationReceiver(ReceiverBase):
self._tts_service = state.qi_session.service("ALTextToSpeech")
# Returns instantly. Messages received while speaking will be queued.
qi.async(self._tts_service.say, text)
getattr(qi, "async")(self._tts_service.say, text)
def _handle_gesture(self, message, is_single):
"""
Handle a gesture actuation request.
:param message: The gesture to do, must contain properties "endpoint" and "data".
:type message: dict
:param is_single: Whether it's a specific single gesture or a gesture tag.
:type is_single: bool
"""
gesture = message.get("data")
if not gesture:
logging.warn("Received gesture to do, but it lacks data.")
return
if not isinstance(gesture, (str, unicode)):
logging.warn("Received gesture to do, but it is not a string.")
return
logging.debug("Received gesture to do: {}".format(gesture))
if is_single:
if gesture not in GestureTags.single_gestures:
logging.warn("Received single gesture to do, but it does not exist in settings")
return
else:
if gesture not in GestureTags.tags:
logging.warn("Received gesture tag to do, but it does not exist in settings")
return
if not state.qi_session: return
# If state has a qi_session, we know that we can import qi
import qi # Takes a while only the first time it's imported
if not self._animation_service:
self._animation_service = state.qi_session.service("ALAnimationPlayer")
# Play the gesture. Pepper comes with predefined animations like "Wave", "Greet", "Clap"
# You can also create custom animations using Choregraphe and upload them to the robot.
if is_single:
logging.debug("Playing single gesture: {}".format(gesture))
getattr(qi, "async")(self._animation_service.run, gesture)
else:
logging.debug("Playing tag gesture: {}".format(gesture))
getattr(qi, "async")(self._animation_service.runTag, gesture)
def handle_message(self, message):
"""
@@ -65,3 +117,18 @@ class ActuationReceiver(ReceiverBase):
"""
if message["endpoint"] == "actuate/speech":
self._handle_speech(message)
if message["endpoint"] == "actuate/gesture/tag":
self._handle_gesture(message, False)
if message["endpoint"] == "actuate/gesture/single":
self._handle_gesture(message, True)
def endpoint_description(self):
"""
Extend the default endpoint description with gesture tags.
Returned during negotiate/ports so the CB knows available gestures.
"""
desc = super(ActuationReceiver, self).endpoint_description()
desc["gestures"] = GestureTags.tags
desc["basic_gestures"] = GestureTags.basic_gestures
desc["single_gestures"] = GestureTags.single_gestures
return desc
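For illustration, a sketch of the sending side, assuming the Control Backend publishes JSON objects with the "endpoint" and "data" fields the handler above expects (the bind direction and the JSON serialization are assumptions, not taken from this diff):
```python
import json

import zmq

context = zmq.Context()
publisher = context.socket(zmq.PUB)
# Assumption: the CB side binds the actuation port (5557 by default, per AgentSettings).
publisher.bind("tcp://*:5557")

# Ask the robot to play a gesture tag; "hello" is in GestureTags.tags.
publisher.send_string(json.dumps({"endpoint": "actuate/gesture/tag", "data": "hello"}))
```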

src/robot_interface/endpoints/gesture_settings.py Normal file

@@ -0,0 +1,425 @@
class GestureTags:
tags = ["above", "affirmative", "afford", "agitated", "all", "allright", "alright", "any",
"assuage", "attemper", "back", "bashful", "beg", "beseech", "blank",
"body language", "bored", "bow", "but", "call", "calm", "choose", "choice", "cloud",
"cogitate", "cool", "crazy", "disappointed", "down", "earth", "empty", "embarrassed",
"enthusiastic", "entire", "estimate", "except", "exalted", "excited", "explain", "far",
"field", "floor", "forlorn", "friendly", "front", "frustrated", "gentle", "gift",
"give", "ground", "happy", "hello", "her", "here", "hey", "hi", "him", "hopeless",
"hysterical", "I", "implore", "indicate", "joyful", "me", "meditate", "modest",
"negative", "nervous", "no", "not know", "nothing", "offer", "ok", "once upon a time",
"oppose", "or", "pacify", "pick", "placate", "please", "present", "proffer", "quiet",
"reason", "refute", "reject", "rousing", "sad", "select", "shamefaced", "show",
"show sky", "sky", "soothe", "sun", "supplicate", "tablet", "tall", "them", "there",
"think", "timid", "top", "unless", "up", "upstairs", "void", "warm", "winner", "yeah",
"yes", "yoo-hoo", "you", "your", "zero", "zestful"]
basic_gestures = [
"animations/Stand/Gestures/Hey_1",
"animations/Stand/Emotions/Neutral/Puzzled_1",
"animations/Stand/Gestures/Explain_4",
"animations/Stand/Gestures/You_1",
"animations/Stand/Emotions/Positive/Happy_1",
"animations/Stand/Emotions/Positive/Laugh_2",
"animations/Stand/Emotions/Neutral/Lonely_1",
"animations/Stand/Emotions/Negative/Surprise_1",
"animations/Stand/Emotions/Negative/Hurt_2",
"animations/Stand/Emotions/Negative/Angry_4",
]
single_gestures = [
"animations/Stand/BodyTalk/Listening/Listening_1",
"animations/Stand/BodyTalk/Listening/Listening_2",
"animations/Stand/BodyTalk/Listening/Listening_3",
"animations/Stand/BodyTalk/Listening/Listening_4",
"animations/Stand/BodyTalk/Listening/Listening_5",
"animations/Stand/BodyTalk/Listening/Listening_6",
"animations/Stand/BodyTalk/Listening/Listening_7",
"animations/Stand/BodyTalk/Speaking/BodyTalk_1",
"animations/Stand/BodyTalk/Speaking/BodyTalk_10",
"animations/Stand/BodyTalk/Speaking/BodyTalk_11",
"animations/Stand/BodyTalk/Speaking/BodyTalk_12",
"animations/Stand/BodyTalk/Speaking/BodyTalk_13",
"animations/Stand/BodyTalk/Speaking/BodyTalk_14",
"animations/Stand/BodyTalk/Speaking/BodyTalk_15",
"animations/Stand/BodyTalk/Speaking/BodyTalk_16",
"animations/Stand/BodyTalk/Speaking/BodyTalk_2",
"animations/Stand/BodyTalk/Speaking/BodyTalk_3",
"animations/Stand/BodyTalk/Speaking/BodyTalk_4",
"animations/Stand/BodyTalk/Speaking/BodyTalk_5",
"animations/Stand/BodyTalk/Speaking/BodyTalk_6",
"animations/Stand/BodyTalk/Speaking/BodyTalk_7",
"animations/Stand/BodyTalk/Speaking/BodyTalk_8",
"animations/Stand/BodyTalk/Speaking/BodyTalk_9",
"animations/Stand/BodyTalk/Thinking/Remember_1",
"animations/Stand/BodyTalk/Thinking/Remember_2",
"animations/Stand/BodyTalk/Thinking/Remember_3",
"animations/Stand/BodyTalk/Thinking/ThinkingLoop_1",
"animations/Stand/BodyTalk/Thinking/ThinkingLoop_2",
"animations/Stand/Emotions/Negative/Angry_1",
"animations/Stand/Emotions/Negative/Angry_2",
"animations/Stand/Emotions/Negative/Angry_3",
"animations/Stand/Emotions/Negative/Angry_4",
"animations/Stand/Emotions/Negative/Anxious_1",
"animations/Stand/Emotions/Negative/Bored_1",
"animations/Stand/Emotions/Negative/Bored_2",
"animations/Stand/Emotions/Negative/Disappointed_1",
"animations/Stand/Emotions/Negative/Exhausted_1",
"animations/Stand/Emotions/Negative/Exhausted_2",
"animations/Stand/Emotions/Negative/Fear_1",
"animations/Stand/Emotions/Negative/Fear_2",
"animations/Stand/Emotions/Negative/Fearful_1",
"animations/Stand/Emotions/Negative/Frustrated_1",
"animations/Stand/Emotions/Negative/Humiliated_1",
"animations/Stand/Emotions/Negative/Hurt_1",
"animations/Stand/Emotions/Negative/Hurt_2",
"animations/Stand/Emotions/Negative/Late_1",
"animations/Stand/Emotions/Negative/Sad_1",
"animations/Stand/Emotions/Negative/Sad_2",
"animations/Stand/Emotions/Negative/Shocked_1",
"animations/Stand/Emotions/Negative/Sorry_1",
"animations/Stand/Emotions/Negative/Surprise_1",
"animations/Stand/Emotions/Negative/Surprise_2",
"animations/Stand/Emotions/Negative/Surprise_3",
"animations/Stand/Emotions/Neutral/Alienated_1",
"animations/Stand/Emotions/Neutral/AskForAttention_1",
"animations/Stand/Emotions/Neutral/AskForAttention_2",
"animations/Stand/Emotions/Neutral/AskForAttention_3",
"animations/Stand/Emotions/Neutral/Cautious_1",
"animations/Stand/Emotions/Neutral/Confused_1",
"animations/Stand/Emotions/Neutral/Determined_1",
"animations/Stand/Emotions/Neutral/Embarrassed_1",
"animations/Stand/Emotions/Neutral/Hesitation_1",
"animations/Stand/Emotions/Neutral/Innocent_1",
"animations/Stand/Emotions/Neutral/Lonely_1",
"animations/Stand/Emotions/Neutral/Mischievous_1",
"animations/Stand/Emotions/Neutral/Puzzled_1",
"animations/Stand/Emotions/Neutral/Sneeze",
"animations/Stand/Emotions/Neutral/Stubborn_1",
"animations/Stand/Emotions/Neutral/Suspicious_1",
"animations/Stand/Emotions/Positive/Amused_1",
"animations/Stand/Emotions/Positive/Confident_1",
"animations/Stand/Emotions/Positive/Ecstatic_1",
"animations/Stand/Emotions/Positive/Enthusiastic_1",
"animations/Stand/Emotions/Positive/Excited_1",
"animations/Stand/Emotions/Positive/Excited_2",
"animations/Stand/Emotions/Positive/Excited_3",
"animations/Stand/Emotions/Positive/Happy_1",
"animations/Stand/Emotions/Positive/Happy_2",
"animations/Stand/Emotions/Positive/Happy_3",
"animations/Stand/Emotions/Positive/Happy_4",
"animations/Stand/Emotions/Positive/Hungry_1",
"animations/Stand/Emotions/Positive/Hysterical_1",
"animations/Stand/Emotions/Positive/Interested_1",
"animations/Stand/Emotions/Positive/Interested_2",
"animations/Stand/Emotions/Positive/Laugh_1",
"animations/Stand/Emotions/Positive/Laugh_2",
"animations/Stand/Emotions/Positive/Laugh_3",
"animations/Stand/Emotions/Positive/Mocker_1",
"animations/Stand/Emotions/Positive/Optimistic_1",
"animations/Stand/Emotions/Positive/Peaceful_1",
"animations/Stand/Emotions/Positive/Proud_1",
"animations/Stand/Emotions/Positive/Proud_2",
"animations/Stand/Emotions/Positive/Proud_3",
"animations/Stand/Emotions/Positive/Relieved_1",
"animations/Stand/Emotions/Positive/Shy_1",
"animations/Stand/Emotions/Positive/Shy_2",
"animations/Stand/Emotions/Positive/Sure_1",
"animations/Stand/Emotions/Positive/Winner_1",
"animations/Stand/Emotions/Positive/Winner_2",
"animations/Stand/Gestures/Angry_1",
"animations/Stand/Gestures/Angry_2",
"animations/Stand/Gestures/Angry_3",
"animations/Stand/Gestures/BowShort_1",
"animations/Stand/Gestures/BowShort_2",
"animations/Stand/Gestures/BowShort_3",
"animations/Stand/Gestures/But_1",
"animations/Stand/Gestures/CalmDown_1",
"animations/Stand/Gestures/CalmDown_2",
"animations/Stand/Gestures/CalmDown_3",
"animations/Stand/Gestures/CalmDown_4",
"animations/Stand/Gestures/CalmDown_5",
"animations/Stand/Gestures/CalmDown_6",
"animations/Stand/Gestures/Choice_1",
"animations/Stand/Gestures/ComeOn_1",
"animations/Stand/Gestures/Confused_1",
"animations/Stand/Gestures/Confused_2",
"animations/Stand/Gestures/CountFive_1",
"animations/Stand/Gestures/CountFour_1",
"animations/Stand/Gestures/CountMore_1",
"animations/Stand/Gestures/CountOne_1",
"animations/Stand/Gestures/CountThree_1",
"animations/Stand/Gestures/CountTwo_1",
"animations/Stand/Gestures/Desperate_1",
"animations/Stand/Gestures/Desperate_2",
"animations/Stand/Gestures/Desperate_3",
"animations/Stand/Gestures/Desperate_4",
"animations/Stand/Gestures/Desperate_5",
"animations/Stand/Gestures/DontUnderstand_1",
"animations/Stand/Gestures/Enthusiastic_3",
"animations/Stand/Gestures/Enthusiastic_4",
"animations/Stand/Gestures/Enthusiastic_5",
"animations/Stand/Gestures/Everything_1",
"animations/Stand/Gestures/Everything_2",
"animations/Stand/Gestures/Everything_3",
"animations/Stand/Gestures/Everything_4",
"animations/Stand/Gestures/Everything_6",
"animations/Stand/Gestures/Excited_1",
"animations/Stand/Gestures/Explain_1",
"animations/Stand/Gestures/Explain_10",
"animations/Stand/Gestures/Explain_11",
"animations/Stand/Gestures/Explain_2",
"animations/Stand/Gestures/Explain_3",
"animations/Stand/Gestures/Explain_4",
"animations/Stand/Gestures/Explain_5",
"animations/Stand/Gestures/Explain_6",
"animations/Stand/Gestures/Explain_7",
"animations/Stand/Gestures/Explain_8",
"animations/Stand/Gestures/Far_1",
"animations/Stand/Gestures/Far_2",
"animations/Stand/Gestures/Far_3",
"animations/Stand/Gestures/Follow_1",
"animations/Stand/Gestures/Give_1",
"animations/Stand/Gestures/Give_2",
"animations/Stand/Gestures/Give_3",
"animations/Stand/Gestures/Give_4",
"animations/Stand/Gestures/Give_5",
"animations/Stand/Gestures/Give_6",
"animations/Stand/Gestures/Great_1",
"animations/Stand/Gestures/HeSays_1",
"animations/Stand/Gestures/HeSays_2",
"animations/Stand/Gestures/HeSays_3",
"animations/Stand/Gestures/Hey_1",
"animations/Stand/Gestures/Hey_10",
"animations/Stand/Gestures/Hey_2",
"animations/Stand/Gestures/Hey_3",
"animations/Stand/Gestures/Hey_4",
"animations/Stand/Gestures/Hey_6",
"animations/Stand/Gestures/Hey_7",
"animations/Stand/Gestures/Hey_8",
"animations/Stand/Gestures/Hey_9",
"animations/Stand/Gestures/Hide_1",
"animations/Stand/Gestures/Hot_1",
"animations/Stand/Gestures/Hot_2",
"animations/Stand/Gestures/IDontKnow_1",
"animations/Stand/Gestures/IDontKnow_2",
"animations/Stand/Gestures/IDontKnow_3",
"animations/Stand/Gestures/IDontKnow_4",
"animations/Stand/Gestures/IDontKnow_5",
"animations/Stand/Gestures/IDontKnow_6",
"animations/Stand/Gestures/Joy_1",
"animations/Stand/Gestures/Kisses_1",
"animations/Stand/Gestures/Look_1",
"animations/Stand/Gestures/Look_2",
"animations/Stand/Gestures/Maybe_1",
"animations/Stand/Gestures/Me_1",
"animations/Stand/Gestures/Me_2",
"animations/Stand/Gestures/Me_4",
"animations/Stand/Gestures/Me_7",
"animations/Stand/Gestures/Me_8",
"animations/Stand/Gestures/Mime_1",
"animations/Stand/Gestures/Mime_2",
"animations/Stand/Gestures/Next_1",
"animations/Stand/Gestures/No_1",
"animations/Stand/Gestures/No_2",
"animations/Stand/Gestures/No_3",
"animations/Stand/Gestures/No_4",
"animations/Stand/Gestures/No_5",
"animations/Stand/Gestures/No_6",
"animations/Stand/Gestures/No_7",
"animations/Stand/Gestures/No_8",
"animations/Stand/Gestures/No_9",
"animations/Stand/Gestures/Nothing_1",
"animations/Stand/Gestures/Nothing_2",
"animations/Stand/Gestures/OnTheEvening_1",
"animations/Stand/Gestures/OnTheEvening_2",
"animations/Stand/Gestures/OnTheEvening_3",
"animations/Stand/Gestures/OnTheEvening_4",
"animations/Stand/Gestures/OnTheEvening_5",
"animations/Stand/Gestures/Please_1",
"animations/Stand/Gestures/Please_2",
"animations/Stand/Gestures/Please_3",
"animations/Stand/Gestures/Reject_1",
"animations/Stand/Gestures/Reject_2",
"animations/Stand/Gestures/Reject_3",
"animations/Stand/Gestures/Reject_4",
"animations/Stand/Gestures/Reject_5",
"animations/Stand/Gestures/Reject_6",
"animations/Stand/Gestures/Salute_1",
"animations/Stand/Gestures/Salute_2",
"animations/Stand/Gestures/Salute_3",
"animations/Stand/Gestures/ShowFloor_1",
"animations/Stand/Gestures/ShowFloor_2",
"animations/Stand/Gestures/ShowFloor_3",
"animations/Stand/Gestures/ShowFloor_4",
"animations/Stand/Gestures/ShowFloor_5",
"animations/Stand/Gestures/ShowSky_1",
"animations/Stand/Gestures/ShowSky_10",
"animations/Stand/Gestures/ShowSky_11",
"animations/Stand/Gestures/ShowSky_12",
"animations/Stand/Gestures/ShowSky_2",
"animations/Stand/Gestures/ShowSky_3",
"animations/Stand/Gestures/ShowSky_4",
"animations/Stand/Gestures/ShowSky_5",
"animations/Stand/Gestures/ShowSky_6",
"animations/Stand/Gestures/ShowSky_7",
"animations/Stand/Gestures/ShowSky_8",
"animations/Stand/Gestures/ShowSky_9",
"animations/Stand/Gestures/ShowTablet_1",
"animations/Stand/Gestures/ShowTablet_2",
"animations/Stand/Gestures/ShowTablet_3",
"animations/Stand/Gestures/Shy_1",
"animations/Stand/Gestures/Stretch_1",
"animations/Stand/Gestures/Stretch_2",
"animations/Stand/Gestures/Surprised_1",
"animations/Stand/Gestures/TakePlace_1",
"animations/Stand/Gestures/TakePlace_2",
"animations/Stand/Gestures/Take_1",
"animations/Stand/Gestures/Thinking_1",
"animations/Stand/Gestures/Thinking_2",
"animations/Stand/Gestures/Thinking_3",
"animations/Stand/Gestures/Thinking_4",
"animations/Stand/Gestures/Thinking_5",
"animations/Stand/Gestures/Thinking_6",
"animations/Stand/Gestures/Thinking_7",
"animations/Stand/Gestures/Thinking_8",
"animations/Stand/Gestures/This_1",
"animations/Stand/Gestures/This_10",
"animations/Stand/Gestures/This_11",
"animations/Stand/Gestures/This_12",
"animations/Stand/Gestures/This_13",
"animations/Stand/Gestures/This_14",
"animations/Stand/Gestures/This_15",
"animations/Stand/Gestures/This_2",
"animations/Stand/Gestures/This_3",
"animations/Stand/Gestures/This_4",
"animations/Stand/Gestures/This_5",
"animations/Stand/Gestures/This_6",
"animations/Stand/Gestures/This_7",
"animations/Stand/Gestures/This_8",
"animations/Stand/Gestures/This_9",
"animations/Stand/Gestures/WhatSThis_1",
"animations/Stand/Gestures/WhatSThis_10",
"animations/Stand/Gestures/WhatSThis_11",
"animations/Stand/Gestures/WhatSThis_12",
"animations/Stand/Gestures/WhatSThis_13",
"animations/Stand/Gestures/WhatSThis_14",
"animations/Stand/Gestures/WhatSThis_15",
"animations/Stand/Gestures/WhatSThis_16",
"animations/Stand/Gestures/WhatSThis_2",
"animations/Stand/Gestures/WhatSThis_3",
"animations/Stand/Gestures/WhatSThis_4",
"animations/Stand/Gestures/WhatSThis_5",
"animations/Stand/Gestures/WhatSThis_6",
"animations/Stand/Gestures/WhatSThis_7",
"animations/Stand/Gestures/WhatSThis_8",
"animations/Stand/Gestures/WhatSThis_9",
"animations/Stand/Gestures/Whisper_1",
"animations/Stand/Gestures/Wings_1",
"animations/Stand/Gestures/Wings_2",
"animations/Stand/Gestures/Wings_3",
"animations/Stand/Gestures/Wings_4",
"animations/Stand/Gestures/Wings_5",
"animations/Stand/Gestures/Yes_1",
"animations/Stand/Gestures/Yes_2",
"animations/Stand/Gestures/Yes_3",
"animations/Stand/Gestures/YouKnowWhat_1",
"animations/Stand/Gestures/YouKnowWhat_2",
"animations/Stand/Gestures/YouKnowWhat_3",
"animations/Stand/Gestures/YouKnowWhat_4",
"animations/Stand/Gestures/YouKnowWhat_5",
"animations/Stand/Gestures/YouKnowWhat_6",
"animations/Stand/Gestures/You_1",
"animations/Stand/Gestures/You_2",
"animations/Stand/Gestures/You_3",
"animations/Stand/Gestures/You_4",
"animations/Stand/Gestures/You_5",
"animations/Stand/Gestures/Yum_1",
"animations/Stand/Reactions/EthernetOff_1",
"animations/Stand/Reactions/EthernetOn_1",
"animations/Stand/Reactions/Heat_1",
"animations/Stand/Reactions/Heat_2",
"animations/Stand/Reactions/LightShine_1",
"animations/Stand/Reactions/LightShine_2",
"animations/Stand/Reactions/LightShine_3",
"animations/Stand/Reactions/LightShine_4",
"animations/Stand/Reactions/SeeColor_1",
"animations/Stand/Reactions/SeeColor_2",
"animations/Stand/Reactions/SeeColor_3",
"animations/Stand/Reactions/SeeSomething_1",
"animations/Stand/Reactions/SeeSomething_3",
"animations/Stand/Reactions/SeeSomething_4",
"animations/Stand/Reactions/SeeSomething_5",
"animations/Stand/Reactions/SeeSomething_6",
"animations/Stand/Reactions/SeeSomething_7",
"animations/Stand/Reactions/SeeSomething_8",
"animations/Stand/Reactions/ShakeBody_1",
"animations/Stand/Reactions/ShakeBody_2",
"animations/Stand/Reactions/ShakeBody_3",
"animations/Stand/Reactions/TouchHead_1",
"animations/Stand/Reactions/TouchHead_2",
"animations/Stand/Reactions/TouchHead_3",
"animations/Stand/Reactions/TouchHead_4",
"animations/Stand/Waiting/AirGuitar_1",
"animations/Stand/Waiting/BackRubs_1",
"animations/Stand/Waiting/Bandmaster_1",
"animations/Stand/Waiting/Binoculars_1",
"animations/Stand/Waiting/BreathLoop_1",
"animations/Stand/Waiting/BreathLoop_2",
"animations/Stand/Waiting/BreathLoop_3",
"animations/Stand/Waiting/CallSomeone_1",
"animations/Stand/Waiting/Drink_1",
"animations/Stand/Waiting/DriveCar_1",
"animations/Stand/Waiting/Fitness_1",
"animations/Stand/Waiting/Fitness_2",
"animations/Stand/Waiting/Fitness_3",
"animations/Stand/Waiting/FunnyDancer_1",
"animations/Stand/Waiting/HappyBirthday_1",
"animations/Stand/Waiting/Helicopter_1",
"animations/Stand/Waiting/HideEyes_1",
"animations/Stand/Waiting/HideHands_1",
"animations/Stand/Waiting/Innocent_1",
"animations/Stand/Waiting/Knight_1",
"animations/Stand/Waiting/KnockEye_1",
"animations/Stand/Waiting/KungFu_1",
"animations/Stand/Waiting/LookHand_1",
"animations/Stand/Waiting/LookHand_2",
"animations/Stand/Waiting/LoveYou_1",
"animations/Stand/Waiting/Monster_1",
"animations/Stand/Waiting/MysticalPower_1",
"animations/Stand/Waiting/PlayHands_1",
"animations/Stand/Waiting/PlayHands_2",
"animations/Stand/Waiting/PlayHands_3",
"animations/Stand/Waiting/Relaxation_1",
"animations/Stand/Waiting/Relaxation_2",
"animations/Stand/Waiting/Relaxation_3",
"animations/Stand/Waiting/Relaxation_4",
"animations/Stand/Waiting/Rest_1",
"animations/Stand/Waiting/Robot_1",
"animations/Stand/Waiting/ScratchBack_1",
"animations/Stand/Waiting/ScratchBottom_1",
"animations/Stand/Waiting/ScratchEye_1",
"animations/Stand/Waiting/ScratchHand_1",
"animations/Stand/Waiting/ScratchHead_1",
"animations/Stand/Waiting/ScratchLeg_1",
"animations/Stand/Waiting/ScratchTorso_1",
"animations/Stand/Waiting/ShowMuscles_1",
"animations/Stand/Waiting/ShowMuscles_2",
"animations/Stand/Waiting/ShowMuscles_3",
"animations/Stand/Waiting/ShowMuscles_4",
"animations/Stand/Waiting/ShowMuscles_5",
"animations/Stand/Waiting/ShowSky_1",
"animations/Stand/Waiting/ShowSky_2",
"animations/Stand/Waiting/SpaceShuttle_1",
"animations/Stand/Waiting/Stretch_1",
"animations/Stand/Waiting/Stretch_2",
"animations/Stand/Waiting/TakePicture_1",
"animations/Stand/Waiting/Taxi_1",
"animations/Stand/Waiting/Think_1",
"animations/Stand/Waiting/Think_2",
"animations/Stand/Waiting/Think_3",
"animations/Stand/Waiting/Think_4",
"animations/Stand/Waiting/Waddle_1",
"animations/Stand/Waiting/Waddle_2",
"animations/Stand/Waiting/WakeUp_1",
"animations/Stand/Waiting/Zombie_1"]

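For reference, playing one of these animation paths directly against a NaoQi session looks roughly like this (a sketch; the robot address is a placeholder):
```python
import qi

session = qi.Session()
session.connect("tcp://192.168.1.10:9559")  # placeholder robot address
player = session.service("ALAnimationPlayer")

player.run("animations/Stand/Gestures/Hey_1")  # play a single gesture path
player.runTag("hello")  # or play an animation associated with a tag
```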
src/robot_interface/endpoints/main_receiver.py

@@ -5,6 +5,7 @@ from robot_interface.state import state
from robot_interface.core.config import settings
class MainReceiver(ReceiverBase):
"""
The main receiver endpoint, responsible for handling ping and negotiation requests.
@@ -12,10 +13,12 @@ class MainReceiver(ReceiverBase):
:param zmq_context: The ZeroMQ context to use.
:type zmq_context: zmq.Context
:param port: The port to use.
:param port: The port to use, defaults to the value in `settings.agent_settings.main_receiver_port`.
:type port: int
"""
def __init__(self, zmq_context, port=settings.agent_settings.main_receiver_port):
def __init__(self, zmq_context, port=None):
if port is None:
port = settings.agent_settings.main_receiver_port
super(MainReceiver, self).__init__("main")
self.create_socket(zmq_context, zmq.REP, port, bind=False)

src/robot_interface/endpoints/socket_base.py

@@ -2,6 +2,8 @@ from abc import ABCMeta
import zmq
from robot_interface.core.config import settings
class SocketBase(object):
"""
@@ -59,7 +61,7 @@ class SocketBase(object):
if bind:
self.socket.bind("tcp://*:{}".format(port))
else:
self.socket.connect("tcp://localhost:{}".format(port))
self.socket.connect("tcp://{}:{}".format(settings.agent_settings.control_backend_host, port))
def close(self):
"""Close the ZeroMQ socket."""

src/robot_interface/utils/get_config.py Normal file

@@ -0,0 +1,32 @@
import os
from dotenv import load_dotenv
load_dotenv()
def get_config(value, env, default, cast=None):
"""
Small utility to get a configuration value: returns `value` if it is not None, else the value of the
environment variable `env` (cast with `cast` when given). If the environment variable is not set, it falls back to `default` (which is also cast when `cast` is given).
:param value: The value to check.
:type value: Any
:param env: The environment variable to check.
:type env: string
:param default: The default value to return if the environment variable is not set.
:type default: Any
:param cast: A function to use to cast the environment variable. Must support string input.
:type cast: Callable[[Any], Any], optional
:return: The value, the environment variable value, or the default.
:rtype: Any
"""
if value is not None:
return value
env = os.environ.get(env, default)
if cast is None:
return env
return cast(env)
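A couple of hypothetical calls to show the precedence:
```python
from robot_interface.utils.get_config import get_config

# An explicit value wins; the environment is not consulted.
get_config("example.org", "AGENT__CONTROL_BACKEND_HOST", "localhost")  # -> "example.org"

# No explicit value: the environment variable is used (and cast) when set,
# otherwise the default is returned, also cast when a cast is given.
get_config(None, "AGENT__MAIN_RECEIVER_PORT", 5555, int)
```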

(new test file)

@@ -0,0 +1,32 @@
from mock import patch, mock
from robot_interface.core.config import Settings
from robot_interface.endpoints.main_receiver import MainReceiver
def test_environment_variables(monkeypatch):
"""
When environment variables are set, creating settings should use these.
"""
monkeypatch.setenv("AGENT__CONTROL_BACKEND_HOST", "some_value_that_should_be_different")
settings = Settings()
assert settings.agent_settings.control_backend_host == "some_value_that_should_be_different"
@patch("robot_interface.endpoints.main_receiver.settings")
@patch("robot_interface.endpoints.socket_base.settings")
def test_create_endpoint_custom_host(base_settings, main_settings):
"""
When a custom host is given in the settings, check that an endpoint's socket connects to it.
"""
fake_context = mock.Mock()
fake_socket = mock.Mock()
fake_context.socket.return_value = fake_socket
base_settings.agent_settings.control_backend_host = "not_localhost"
main_settings.agent_settings.main_receiver_port = 9999
_ = MainReceiver(fake_context)
fake_socket.connect.assert_called_once_with("tcp://not_localhost:9999")

(test file)

@@ -5,6 +5,7 @@ import pytest
import zmq
from robot_interface.endpoints.actuation_receiver import ActuationReceiver
from robot_interface.endpoints.gesture_settings import GestureTags
@pytest.fixture
@@ -94,7 +95,159 @@ def test_speech(zmq_context, mocker):
mock_state.qi_session.service.assert_called_once_with("ALTextToSpeech")
mock_qi.async.assert_called_once()
call_args = mock_qi.async.call_args[0]
getattr(mock_qi, "async").assert_called_once()
call_args = getattr(mock_qi, "async").call_args[0]
assert call_args[0] == mock_tts_service.say
assert call_args[1] == "Some message to speak."
def test_gesture_no_data(zmq_context, mocker):
receiver = ActuationReceiver(zmq_context)
receiver._handle_gesture({"endpoint": "actuate/gesture/single", "data": ""}, True)
# Just ensuring no crash
def test_gesture_invalid_data(zmq_context, mocker):
receiver = ActuationReceiver(zmq_context)
receiver._handle_gesture({"endpoint": "actuate/gesture/single", "data": 123}, True)
# No crash expected
def test_gesture_single_not_found(zmq_context, mocker):
mock_tags = mocker.patch("robot_interface.endpoints.actuation_receiver.GestureTags")
mock_tags.single_gestures = ["wave", "bow"] # allowed single gestures
receiver = ActuationReceiver(zmq_context)
receiver._handle_gesture({"endpoint": "actuate/gesture/single", "data": "unknown_gesture"}, True)
# No crash expected
def test_gesture_tag_not_found(zmq_context, mocker):
mock_tags = mocker.patch("robot_interface.endpoints.actuation_receiver.GestureTags")
mock_tags.tags = ["happy", "sad"]
receiver = ActuationReceiver(zmq_context)
receiver._handle_gesture({"endpoint": "actuate/gesture/tag", "data": "not_a_tag"}, False)
# No crash expected
def test_gesture_no_qi_session(zmq_context, mocker):
mock_state = mocker.patch("robot_interface.endpoints.actuation_receiver.state")
mock_state.qi_session = None
mock_tags = mocker.patch("robot_interface.endpoints.actuation_receiver.GestureTags")
mock_tags.single_gestures = ["hello"]
receiver = ActuationReceiver(zmq_context)
receiver._handle_gesture({"endpoint": "actuate/gesture/single", "data": "hello"}, True)
# No crash, path returns early
def test_gesture_single_success(zmq_context, mocker):
mock_state = mocker.patch("robot_interface.endpoints.actuation_receiver.state")
mock_qi = mock.Mock()
sys.modules["qi"] = mock_qi
# Setup gesture settings
mock_tags = mocker.patch("robot_interface.endpoints.actuation_receiver.GestureTags")
mock_tags.single_gestures = ["wave"]
mock_animation_service = mock.Mock()
mock_state.qi_session = mock.Mock()
mock_state.qi_session.service.return_value = mock_animation_service
receiver = ActuationReceiver(zmq_context)
receiver._handle_gesture({"endpoint": "actuate/gesture/single", "data": "wave"}, True)
mock_state.qi_session.service.assert_called_once_with("ALAnimationPlayer")
getattr(mock_qi, "async").assert_called_once()
assert getattr(mock_qi, "async").call_args[0][0] == mock_animation_service.run
assert getattr(mock_qi, "async").call_args[0][1] == "wave"
def test_gesture_tag_success(zmq_context, mocker):
mock_state = mocker.patch("robot_interface.endpoints.actuation_receiver.state")
mock_qi = mock.Mock()
sys.modules["qi"] = mock_qi
mock_tags = mocker.patch("robot_interface.endpoints.actuation_receiver.GestureTags")
mock_tags.tags = ["greeting"]
mock_animation_service = mock.Mock()
mock_state.qi_session = mock.Mock()
mock_state.qi_session.service.return_value = mock_animation_service
receiver = ActuationReceiver(zmq_context)
receiver._handle_gesture({"endpoint": "actuate/gesture/tag", "data": "greeting"}, False)
mock_state.qi_session.service.assert_called_once_with("ALAnimationPlayer")
getattr(mock_qi, "async").assert_called_once()
assert getattr(mock_qi, "async").call_args[0][0] == mock_animation_service.runTag
assert getattr(mock_qi, "async").call_args[0][1] == "greeting"
def test_handle_message_all_routes(zmq_context, mocker):
"""
Ensures all handle_message endpoint branches route correctly.
"""
receiver = ActuationReceiver(zmq_context)
mock_speech = mocker.patch.object(receiver, "_handle_speech")
mock_gesture = mocker.patch.object(receiver, "_handle_gesture")
receiver.handle_message({"endpoint": "actuate/speech", "data": "hi"})
receiver.handle_message({"endpoint": "actuate/gesture/tag", "data": "greeting"})
receiver.handle_message({"endpoint": "actuate/gesture/single", "data": "wave"})
mock_speech.assert_called_once()
assert mock_gesture.call_count == 2
def test_endpoint_description(zmq_context, mocker):
mock_tags = mocker.patch("robot_interface.endpoints.actuation_receiver.GestureTags")
mock_tags.tags = ["happy"]
mock_tags.single_gestures = ["wave"]
receiver = ActuationReceiver(zmq_context)
desc = receiver.endpoint_description()
assert "gestures" in desc
assert desc["gestures"] == ["happy"]
assert "single_gestures" in desc
assert desc["single_gestures"] == ["wave"]
def test_gesture_single_real_gesturetags(zmq_context, mocker):
"""
Uses the real GestureTags (no mocking) to ensure the receiver
references GestureTags.single_gestures correctly.
"""
# Ensure qi session exists so we pass the early return
mock_state = mocker.patch("robot_interface.endpoints.actuation_receiver.state")
mock_state.qi_session = mock.Mock()
# Mock qi.async to avoid real async calls
mock_qi = mock.Mock()
sys.modules["qi"] = mock_qi
# Mock animation service
mock_animation_service = mock.Mock()
mock_state.qi_session.service.return_value = mock_animation_service
receiver = ActuationReceiver(zmq_context)
# Pick a real gesture from GestureTags.single_gestures
assert len(GestureTags.single_gestures) > 0, "GestureTags.single_gestures must not be empty"
gesture = GestureTags.single_gestures[0]
receiver._handle_gesture(
{"endpoint": "actuate/gesture/single", "data": gesture},
is_single=True,
)
mock_state.qi_session.service.assert_called_once_with("ALAnimationPlayer")
getattr(mock_qi, "async").assert_called_once()
assert getattr(mock_qi, "async").call_args[0][0] == mock_animation_service.run
assert getattr(mock_qi, "async").call_args[0][1] == gesture

(new test file)

@@ -0,0 +1,45 @@
from robot_interface.utils.get_config import get_config
def test_get_config_prefers_explicit_value(monkeypatch):
"""
When a direct value is provided it should be returned without reading the environment.
"""
monkeypatch.setenv("GET_CONFIG_TEST", "from-env")
result = get_config("explicit", "GET_CONFIG_TEST", "default")
assert result == "explicit"
def test_get_config_returns_env_value(monkeypatch):
"""
If value is None the environment variable should be used.
"""
monkeypatch.setenv("GET_CONFIG_TEST", "from-env")
result = get_config(None, "GET_CONFIG_TEST", "default")
assert result == "from-env"
def test_get_config_casts_env_value(monkeypatch):
"""
The env value should be cast when a cast function is provided.
"""
monkeypatch.setenv("GET_CONFIG_PORT", "1234")
result = get_config(None, "GET_CONFIG_PORT", 0, int)
assert result == 1234
def test_get_config_casts_default_when_env_missing(monkeypatch):
"""
When the env var is missing it should fall back to the default and still apply the cast.
"""
monkeypatch.delenv("GET_CONFIG_MISSING", raising=False)
result = get_config(None, "GET_CONFIG_MISSING", "42", int)
assert result == 42