Compare commits: feat/ci-cd...feat/video (10 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 19b7efec05 | |
| | da97eb8a1a | |
| | e51cf8fe65 | |
| | 1e77548622 | |
| | a8fe887c48 | |
| | df702f1e44 | |
| | a2cb2ae90a | |
| | 017dbfaa28 | |
| | 9ff1d9a4d3 | |
| | 3a259c1170 | |
.env.example (new file, 25 lines)
@@ -0,0 +1,25 @@
# Example .env file. To use, make a copy, call it ".env" (i.e. removing the ".example" suffix), then edit its values.
# To make a variable apply, uncomment it (remove the "#" in front of the line).

# First, some variables that are likely to be configured:

# The hostname or IP address of the Control Backend.
AGENT__CONTROL_BACKEND_HOST=localhost


# Variables that are unlikely to be configured, you can probably ignore these:

#AGENT__ACTUATION_RECEIVER_PORT=
#AGENT__MAIN_RECEIVER_PORT=
#AGENT__VIDEO_SENDER_PORT=
#AGENT__AUDIO_SENDER_PORT=
#VIDEO__CAMERA_INDEX=
#VIDEO__RESOLUTION=
#VIDEO__COLOR_SPACE=
#VIDEO__FPS=
#VIDEO__STREAM_NAME=
#VIDEO__IMAGE_BUFFER=
#AUDIO__SAMPLE_RATE=
#AUDIO__CHUNK_SIZE=
#AUDIO__CHANNELS=
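As a rough illustration of the file format above (this parser is a sketch for illustration only, not project code; the project pulls in python-dotenv for the real loading):

```python
# Illustrative sketch only: shows the ".env" format above.
# KEY=VALUE lines set a variable; lines starting with "#" are
# ignored, so a commented-out variable simply does not apply.
def parse_env(text):
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # blank line or commented-out variable
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

conf = parse_env("# a comment\nAGENT__CONTROL_BACKEND_HOST=localhost\n#VIDEO__FPS=\n")
```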
.gitignore (vendored, 2 lines added)
@@ -220,3 +220,5 @@ __marimo__/
 # Docs
 docs/*
 !docs/conf.py
+!docs/installation/
+!docs/installation/**
README.md (113 lines changed)
@@ -8,90 +8,21 @@ This is an implementation for the Pepper robot, using the Pepper SDK and Python
 
 ## Installation
 
-### Linux (or WSL)
+- [Linux](./docs/installation/linux.md)
+- [macOS](./docs/installation/macos.md)
+- [Windows](./docs/installation/windows.md)
 
-Start off by installing [Pyenv](https://github.com/pyenv/pyenv?tab=readme-ov-file#installation) and walk through the steps outlined there (be sure to also add it to PATH). Also install the [Python build requirements](https://github.com/pyenv/pyenv/wiki#suggested-build-environment). Afterwards, install Python 2.7 and activate it for your current shell:
+### Git Hooks
+
+To activate automatic linting, formatting, branch name checks and commit message checks, run (after installing requirements):
 
 ```bash
-pyenv install 2.7
+pre-commit install
-pyenv shell 2.7
+pre-commit install --hook-type commit-msg
 ```
-
-You can check that this worked by typing
-
-```bash
-python -V
-```
-
-Which should return `Python 2.7.18`.
-
-Next, `cd` into this repository and create (and activate) a virtual environment:
-
-```bash
-cd <path to project>/
-python -m pip install virtualenv
-python -m virtualenv .venv
-source .venv/bin/activate
-```
-
-We depend on PortAudio for the `pyaudio` package, so install it with:
-
-```bash
-sudo apt install -y portaudio19-dev
-```
-
-On WSL, also install:
-
-```bash
-sudo apt install -y libasound2-plugins
-```
-
-Install the required packages with
-
-```bash
-pip install -r requirements.txt
-```
-
-Now we need to install the NaoQi SDK into our virtual environment, which we need to do manually. Begin by downloading the SDK:
-
-```bash
-wget https://community-static.aldebaran.com/resources/2.5.10/Python%20SDK/pynaoqi-python2.7-2.5.7.1-linux64.tar.gz
-```
-
-Next, move into the `site-packages` directory and extract the file you just downloaded:
-
-```bash
-cd .venv/lib/python2.7/site-packages/
-tar xvfz <path to SDK>/pynaoqi-python2.7-2.5.7.1-linux64.tar.gz
-rm <path to SDK>/pynaoqi-python2.7-2.5.7.1-linux64.tar.gz
-```
-
-Lastly, we need to inform our virtual environment where to find our newly installed package:
-
-```bash
-echo <path to project>/.venv/lib/python2.7/site-packages/pynaoqi-python2.7-2.5.7.1-linux64/lib/python2.7/site-packages/ > pynaoqi-python2.7.pth
-```
-
-That's it! Verify that it works with
-
-```bash
-python -c "import qi; print(qi)"
-```
-
-You should now be able to run this project.
-
-### macOS
-
-Similar to Linux, but don't bother installing `pyenv` as it won't be able to install Python 2 on Apple Silicon. Instead, install Python 2.7.18 from the [Python website](https://www.python.org/downloads/release/python-2718/).
-
-Create the virtual environment as described above in the Linux section. Stop at the point where it shows you how to download the NaoQi SDK. Instead, use:
-
-```shell
-curl -OL https://community-static.aldebaran.com/resources/2.5.10/Python%20SDK/pynaoqi-python2.7-2.5.7.1-mac64.tar.gz
-```
-
-Then resume the steps from above.
-
 ## Usage
@@ -108,10 +39,15 @@ On Windows:
 $env:PYTHONPATH="src"; python -m robot_interface.main
 ```
 
-With both, if you want to connect to the actual robot (or simulator), pass the `--qi-url` argument.
+### Program Arguments
+
+If you want to connect to the actual robot (or simulator), pass the `--qi-url` argument.
 
 There's also a `--microphone` argument that can be used to choose a microphone to use. If not given, the program will try the default microphone. If you don't know the name of the microphone, pass the argument with any value, and it will list the names of available microphones.
+
+### Environment Variables
+
+You may use environment variables to change settings. Make a copy of the [`.env.example`](.env.example) file, name it `.env` and put it in the root directory. The file itself describes how to do the configuration.
 
 ## Testing
@@ -134,23 +70,6 @@ For coverage, add `--cov=robot_interface` as an argument to `pytest`.
 
 
-## Git Hooks
-
-To activate automatic linting, formatting, branch name checks and commit message checks, run (after installing requirements):
-
-```bash
-pre-commit install
-pre-commit install --hook-type commit-msg
-```
-
-You might get an error along the lines of `Can't install pre-commit with core.hooksPath set`. To fix this, simply unset the hooksPath by running:
-
-```bash
-git config --local --unset core.hooksPath
-```
-
-Then run the pre-commit install commands again.
 
 ## Documentation
 Generate documentation web pages using:
docs/installation/linux.md (new file, 75 lines)
@@ -0,0 +1,75 @@
# Installation

Of the Pepper Robot Interface on Linux (or WSL).

Start off by installing [Pyenv](https://github.com/pyenv/pyenv?tab=readme-ov-file#installation) and walk through the steps outlined there (be sure to also add it to PATH). Also install the [Python build requirements](https://github.com/pyenv/pyenv/wiki#suggested-build-environment). Afterwards, install Python 2.7 and activate it for your current shell:

```bash
pyenv install 2.7
pyenv shell 2.7
```

You can check that this worked by typing

```bash
python -V
```

Which should return `Python 2.7.18`.

Next, `cd` into this repository and create (and activate) a virtual environment:

```bash
cd <path to project>/
python -m pip install virtualenv
python -m virtualenv .venv
source .venv/bin/activate
```

We depend on PortAudio for the `pyaudio` package, so install it with:

```bash
sudo apt install -y portaudio19-dev
```

On WSL, also install:

```bash
sudo apt install -y libasound2-plugins
```

Install the required packages with

```bash
pip install -r requirements.txt
```

Now we need to install the NaoQi SDK into our virtual environment, which we need to do manually. Begin by downloading the SDK:

```bash
wget https://community-static.aldebaran.com/resources/2.5.10/Python%20SDK/pynaoqi-python2.7-2.5.7.1-linux64.tar.gz
```

Next, move into the `site-packages` directory and extract the file you just downloaded:

```bash
cd .venv/lib/python2.7/site-packages/
tar xvfz <path to SDK>/pynaoqi-python2.7-2.5.7.1-linux64.tar.gz
rm <path to SDK>/pynaoqi-python2.7-2.5.7.1-linux64.tar.gz
```

Lastly, we need to inform our virtual environment where to find our newly installed package:

```bash
echo <path to project>/.venv/lib/python2.7/site-packages/pynaoqi-python2.7-2.5.7.1-linux64/lib/python2.7/site-packages/ > pynaoqi-python2.7.pth
```

That's it! Verify that it works with

```bash
python -c "import qi; print(qi)"
```

You should now be able to run this project.

See the README for how to run.
docs/installation/macos.md (new file, 106 lines)
@@ -0,0 +1,106 @@
# Installation

Of the Pepper Robot Interface on macOS.


## Python 2.7

Install Python 2.7.18 from the [Python website](https://www.python.org/downloads/release/python-2718/).

Check that it worked by executing

```shell
python2 -V
```

Which should return `Python 2.7.18`.


## Virtual Environment

Next, `cd` into this repository and create (and activate) a virtual environment:

```shell
cd /path/to/project/
python2 -m pip install virtualenv
python2 -m virtualenv .venv
source .venv/bin/activate
```

We depend on PortAudio for the `pyaudio` package. If on Intel, run `brew install portaudio`. If on Apple Silicon, compile manually using the steps described in [the YouTrack article](https://utrechtuniversity.youtrack.cloud/articles/N25B-A-22/Install-PyAudio-for-Python-2-on-Apple-Silicon).

Then install the required Python packages with

```shell
pip install -r requirements.txt
```


## NaoQi SDK

We need to manually install the NaoQi SDK into our virtual environment. There are two options:

1. Install a newer version (2.8), which makes running easier, but compatibility is uncertain.
2. Install the version expected by the robot (2.5). This complicates running slightly.

### Option 1

Download the SDK from [twirre.io](https://twirre.io/files/pynaoqi-python2.7-2.8.6.23-mac64-20191127_144231.tar.gz), or find one on the Aldebaran website, or an archived version on the Web Archive.

Extract it to `/path/to/project/.venv/lib/python2.7/site-packages/`.

We need to inform our virtual environment where to find our newly installed package:

```shell
echo "/path/to/project/.venv/lib/python2.7/site-packages/pynaoqi-python2.7-2.8.6.23-mac64-20191127_144231/lib/python2.7/site-packages/" > /path/to/project/.venv/lib/python2.7/site-packages/pynaoqi-python2.7.pth
```

Now continue with [verifying](#verifying).

### Option 2

This method of installation requires setting the `DYLD_LIBRARY_PATH` environment variable before running, as explained below.

Download the SDK from [twirre.io](https://twirre.io/files/pynaoqi-2.5.7.1-mac64-deps.tar.gz). This is a modified version of the one from Aldebaran that includes the required Choregraphe dependencies.

Extract it to `/path/to/project/.venv/lib/python2.7/site-packages/`.

We need to inform our virtual environment where to find our newly installed package:

```shell
echo "/path/to/project/.venv/lib/python2.7/site-packages/pynaoqi-python2.7-2.5.7.1-mac64/lib/python2.7/site-packages/" > /path/to/project/.venv/lib/python2.7/site-packages/pynaoqi-python2.7.pth
```

Now, anytime before running, you need to set the `DYLD_LIBRARY_PATH` environment variable:

```shell
export DYLD_LIBRARY_PATH="/path/to/project/.venv/lib/python2.7/site-packages/pynaoqi-python2.7-2.5.7.1-mac64/choregraphe_lib:${DYLD_LIBRARY_PATH}"
```

You may want to simplify environment activation with a script `activate.sh` like:

```shell
#!/bin/zsh

export DYLD_LIBRARY_PATH="/path/to/project/.venv/lib/python2.7/site-packages/pynaoqi-python2.7-2.5.7.1-mac64/choregraphe_lib:${DYLD_LIBRARY_PATH}"
source .venv/bin/activate
```

[Verify](#verifying) that it works.


## Verifying

Verify that the NaoQi SDK installation works with

```shell
python -c "import qi; print(qi)"
```

If that works, you should now be able to run this project.

See the README for how to run.
docs/installation/windows.md (new file, 44 lines)
@@ -0,0 +1,44 @@
# Installation

Of the Pepper Robot Interface on Windows.

Install Python 2.7.18 from [the Python website](https://www.python.org/downloads/release/python-2718/); choose the x86-64 installer (at the bottom of the page).

To see if it worked:

```shell
py -2 -V
```

Which should return `Python 2.7.18`.

Next, `cd` into this repository and create (and activate) a virtual environment:

```shell
cd <path to project>/
py -2 -m pip install virtualenv
py -2 -m virtualenv .venv
.\.venv\Scripts\activate
```

Install the required packages with

```shell
pip install -r requirements.txt
```

Now we need to install the NaoQi SDK into our virtual environment, which we need to do manually. Download the SDK from [Aldebaran](https://community-static.aldebaran.com/resources/2.5.5/sdk-python/pynaoqi-python2.7-2.5.5.5-win32-vs2013.zip), [Web Archive](https://web.archive.org/web/20240120111043/https://community-static.aldebaran.com/resources/2.5.5/sdk-python/pynaoqi-python2.7-2.5.5.5-win32-vs2013.zip) or [twirre.io](https://twirre.io/files/pynaoqi-python2.7-2.8.6.23-win64-vs2015-20191127_152649.zip).

Extract it to `.\.venv\Lib\site-packages`.

Create a file `.venv\Lib\site-packages\pynaoqi-python2.7.pth` and put the full path of `pynaoqi-python2.7-2.8.6.23-win64-vs2015-20191127_152649\lib\python2.7\Lib\site-packages` in it.

Test if it worked by running:

```shell
python -c "import qi; print(qi)"
```

You should now be able to run this project.

See the README for how to run.
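All three installation guides rely on the same `.pth` trick: at startup, Python's standard `site` machinery reads each line of a `*.pth` file in `site-packages` and appends any existing directory to `sys.path`, which is how the manually extracted SDK becomes importable. A small self-contained sketch (the directory names here are made up for illustration):

```python
import os
import site
import sys
import tempfile

site_dir = tempfile.mkdtemp()            # stand-in for .venv's site-packages
sdk_dir = os.path.join(site_dir, "sdk")  # stand-in for the extracted SDK
os.makedirs(sdk_dir)

# Write the .pth file by hand, as the guides' echo commands do.
with open(os.path.join(site_dir, "pynaoqi-python2.7.pth"), "w") as f:
    f.write(sdk_dir + "\n")

site.addsitedir(site_dir)  # what Python does with site-packages at startup
```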
@@ -6,3 +6,5 @@ pytest-cov<3.0.0
 sphinx
 sphinx_rtd_theme
 pre-commit
+python-dotenv
+opencv-python==4.1.2.30
@@ -1,95 +1,101 @@
 from __future__ import unicode_literals
 
+from robot_interface.utils.get_config import get_config
+
 
 class AgentSettings(object):
     """
     Agent port configuration.
 
-    :ivar actuating_receiver_port: Port for receiving actuation commands.
-    :vartype actuating_receiver_port: int
-    :ivar main_receiver_port: Port for receiving main messages.
+    :ivar control_backend_host: Hostname of the control backend, defaults to "localhost".
+    :vartype control_backend_host: string
+    :ivar actuation_receiver_port: Port for receiving actuation commands, defaults to 5557.
+    :vartype actuation_receiver_port: int
+    :ivar main_receiver_port: Port for receiving main messages, defaults to 5555.
     :vartype main_receiver_port: int
-    :ivar video_sender_port: Port used for sending video frames.
+    :ivar video_sender_port: Port used for sending video frames, defaults to 5556.
     :vartype video_sender_port: int
-    :ivar audio_sender_port: Port used for sending audio data.
+    :ivar audio_sender_port: Port used for sending audio data, defaults to 5558.
     :vartype audio_sender_port: int
     """
     def __init__(
         self,
-        actuating_receiver_port=5557,
-        main_receiver_port=5555,
-        video_sender_port=5556,
-        audio_sender_port=5558,
+        control_backend_host=None,
+        actuation_receiver_port=None,
+        main_receiver_port=None,
+        video_sender_port=None,
+        audio_sender_port=None,
     ):
-        self.actuating_receiver_port = actuating_receiver_port
-        self.main_receiver_port = main_receiver_port
-        self.video_sender_port = video_sender_port
-        self.audio_sender_port = audio_sender_port
+        self.control_backend_host = get_config(control_backend_host, "AGENT__CONTROL_BACKEND_HOST", "localhost")
+        self.actuation_receiver_port = get_config(actuation_receiver_port, "AGENT__ACTUATION_RECEIVER_PORT", 5557, int)
+        self.main_receiver_port = get_config(main_receiver_port, "AGENT__MAIN_RECEIVER_PORT", 5555, int)
+        self.video_sender_port = get_config(video_sender_port, "AGENT__VIDEO_SENDER_PORT", 5556, int)
+        self.audio_sender_port = get_config(audio_sender_port, "AGENT__AUDIO_SENDER_PORT", 5558, int)
 
 
 class VideoConfig(object):
     """
     Video configuration constants.
 
-    :ivar camera_index: Index of the camera used.
+    :ivar camera_index: Index of the camera used, defaults to 0.
     :vartype camera_index: int
-    :ivar resolution: Video resolution mode.
+    :ivar resolution: Video resolution mode, defaults to 2.
     :vartype resolution: int
-    :ivar color_space: Color space identifier.
+    :ivar color_space: Color space identifier, defaults to 11.
     :vartype color_space: int
-    :ivar fps: Frames per second of the video stream.
+    :ivar fps: Frames per second of the video stream, defaults to 15.
     :vartype fps: int
-    :ivar stream_name: Name of the video stream.
+    :ivar stream_name: Name of the video stream, defaults to "Pepper Video".
     :vartype stream_name: str
-    :ivar image_buffer: Internal buffer size for video frames.
+    :ivar image_buffer: Internal buffer size for video frames, defaults to 6.
     :vartype image_buffer: int
     """
     def __init__(
         self,
-        camera_index=0,
-        resolution=2,
-        color_space=11,
-        fps=15,
-        stream_name="Pepper Video",
-        image_buffer=6,
+        camera_index=None,
+        resolution=None,
+        color_space=None,
+        fps=None,
+        stream_name=None,
+        image_buffer=None,
     ):
-        self.camera_index = camera_index
-        self.resolution = resolution
-        self.color_space = color_space
-        self.fps = fps
-        self.stream_name = stream_name
-        self.image_buffer = image_buffer
+        self.camera_index = get_config(camera_index, "VIDEO__CAMERA_INDEX", 0, int)
+        self.resolution = get_config(resolution, "VIDEO__RESOLUTION", 2, int)
+        self.color_space = get_config(color_space, "VIDEO__COLOR_SPACE", 11, int)
+        self.fps = get_config(fps, "VIDEO__FPS", 15, int)
+        self.stream_name = get_config(stream_name, "VIDEO__STREAM_NAME", "Pepper Video")
+        self.image_buffer = get_config(image_buffer, "VIDEO__IMAGE_BUFFER", 6, int)
 
 
 class AudioConfig(object):
     """
     Audio configuration constants.
 
-    :ivar sample_rate: Audio sampling rate in Hz.
+    :ivar sample_rate: Audio sampling rate in Hz, defaults to 16000.
     :vartype sample_rate: int
-    :ivar chunk_size: Size of audio chunks to capture/process.
+    :ivar chunk_size: Size of audio chunks to capture/process, defaults to 512.
     :vartype chunk_size: int
-    :ivar channels: Number of audio channels.
+    :ivar channels: Number of audio channels, defaults to 1.
     :vartype channels: int
     """
-    def __init__(self, sample_rate=16000, chunk_size=512, channels=1):
-        self.sample_rate = sample_rate
-        self.chunk_size = chunk_size
-        self.channels = channels
+    def __init__(self, sample_rate=None, chunk_size=None, channels=None):
+        self.sample_rate = get_config(sample_rate, "AUDIO__SAMPLE_RATE", 16000, int)
+        self.chunk_size = get_config(chunk_size, "AUDIO__CHUNK_SIZE", 512, int)
+        self.channels = get_config(channels, "AUDIO__CHANNELS", 1, int)
 
 
 class MainConfig(object):
     """
     Main system configuration.
 
-    :ivar poll_timeout_ms: Timeout for polling events, in milliseconds.
+    :ivar poll_timeout_ms: Timeout for polling events, in milliseconds, defaults to 100.
     :vartype poll_timeout_ms: int
-    :ivar max_handler_time_ms: Maximum allowed handler time, in milliseconds.
+    :ivar max_handler_time_ms: Maximum allowed handler time, in milliseconds, defaults to 50.
     :vartype max_handler_time_ms: int
     """
-    def __init__(self, poll_timeout_ms=100, max_handler_time_ms=50):
-        self.poll_timeout_ms = poll_timeout_ms
-        self.max_handler_time_ms = max_handler_time_ms
+    def __init__(self, poll_timeout_ms=None, max_handler_time_ms=None):
+        self.poll_timeout_ms = get_config(poll_timeout_ms, "MAIN__POLL_TIMEOUT_MS", 100, int)
+        self.max_handler_time_ms = get_config(max_handler_time_ms, "MAIN__MAX_HANDLER_TIME_MS", 50, int)
 
 
 class Settings(object):
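The `get_config` helper itself does not appear in this diff; from its call sites, the precedence seems to be explicit argument first, then environment variable, then hard-coded default, with an optional cast. A hedged reconstruction, a sketch only (Python 3 here for brevity; the signature is inferred from usage, not from the actual `robot_interface.utils.get_config`):

```python
import os

def get_config(explicit, env_name, default, cast=str):
    # Explicit constructor argument wins outright.
    if explicit is not None:
        return explicit
    # Otherwise fall back to the environment (as populated from .env).
    value = os.environ.get(env_name)
    if value:
        return cast(value)
    # Finally, the hard-coded default.
    return default

os.environ["AGENT__MAIN_RECEIVER_PORT"] = "6000"
port = get_config(None, "AGENT__MAIN_RECEIVER_PORT", 5555, int)      # env var wins
host = get_config(None, "AGENT__CONTROL_BACKEND_HOST", "localhost")  # falls back to default
```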
@@ -1,12 +1,14 @@
 from __future__ import unicode_literals  # So that we can log texts with Unicode characters
 import logging
+from threading import Thread
+import Queue
 import zmq
 
 from robot_interface.endpoints.receiver_base import ReceiverBase
 from robot_interface.state import state
 
 from robot_interface.core.config import settings
+from robot_interface.endpoints.gesture_settings import GestureTags
 
 
 class ActuationReceiver(ReceiverBase):
@@ -21,12 +23,19 @@ class ActuationReceiver(ReceiverBase):
 
     :ivar _tts_service: The text-to-speech service object from the Qi session.
     :vartype _tts_service: qi.Session | None
+    :ivar _animation_service: The animation/gesture service object from the Qi session.
+    :vartype _animation_service: qi.Session | None
     """
-    def __init__(self, zmq_context, port=settings.agent_settings.actuating_receiver_port):
+    def __init__(self, zmq_context, port=settings.agent_settings.actuation_receiver_port):
         super(ActuationReceiver, self).__init__("actuation")
         self.create_socket(zmq_context, zmq.SUB, port)
         self.socket.setsockopt_string(zmq.SUBSCRIBE, u"")  # Causes block if given in options
         self._tts_service = None
+        self._animation_service = None
+        self._message_queue = Queue.Queue()
+        self.message_thread = Thread(target=self._handle_messages)
+        self.message_thread.start()
 
     def _handle_speech(self, message):
         """
@@ -53,8 +62,73 @@ class ActuationReceiver(ReceiverBase):
         if not self._tts_service:
             self._tts_service = state.qi_session.service("ALTextToSpeech")
 
-        # Returns instantly. Messages received while speaking will be queued.
-        qi.async(self._tts_service.say, text)
+        if message.get("is_priority"):
+            # Bypass queue and speak immediately
+            self.clear_queue()
+            self._message_queue.put(text)
+            logging.debug("Force speaking immediately: {}".format(text))
+        else:
+            self._message_queue.put(text)
+
+    def clear_queue(self):
+        """
+        Safely drains all pending messages from the queue.
+        """
+        logging.info("Message queue size: {}".format(self._message_queue.qsize()))
+        try:
+            while True:
+                # Remove items one by one without waiting
+                self._message_queue.get_nowait()
+        except Queue.Empty:
+            pass
+        logging.info("Message queue cleared.")
+
+    def _handle_gesture(self, message, is_single):
+        """
+        Handle a gesture actuation request.
+
+        :param message: The gesture to do, must contain properties "endpoint" and "data".
+        :type message: dict
+
+        :param is_single: Whether it's a specific single gesture or a gesture tag.
+        :type is_single: bool
+        """
+        gesture = message.get("data")
+        if not gesture:
+            logging.warn("Received gesture to do, but it lacks data.")
+            return
+
+        if not isinstance(gesture, (str, unicode)):
+            logging.warn("Received gesture to do, but it is not a string.")
+            return
+
+        logging.debug("Received gesture to do: {}".format(gesture))
+
+        if is_single:
+            if gesture not in GestureTags.single_gestures:
+                logging.warn("Received single gesture to do, but it does not exist in settings")
+                return
+        else:
+            if gesture not in GestureTags.tags:
+                logging.warn("Received tag to do, but it does not exist in settings")
+                return
+
+        if not state.qi_session: return
+        # If state has a qi_session, we know that we can import qi
+        import qi  # Takes a while only the first time it's imported
+
+        if not self._animation_service:
+            self._animation_service = state.qi_session.service("ALAnimationPlayer")
+
+        # Play the gesture. Pepper comes with predefined animations like "Wave", "Greet", "Clap".
+        # You can also create custom animations using Choregraphe and upload them to the robot.
+        if is_single:
+            logging.debug("Playing single gesture: {}".format(gesture))
+            getattr(qi, "async")(self._animation_service.run, gesture)
+        else:
+            logging.debug("Playing tag gesture: {}".format(gesture))
+            getattr(qi, "async")(self._animation_service.runTag, gesture)
 
     def handle_message(self, message):
         """
@@ -65,3 +139,30 @@ class ActuationReceiver(ReceiverBase):
|
|||||||
"""
|
"""
|
||||||
if message["endpoint"] == "actuate/speech":
|
if message["endpoint"] == "actuate/speech":
|
||||||
self._handle_speech(message)
|
self._handle_speech(message)
|
||||||
|
if message["endpoint"] == "actuate/gesture/tag":
|
||||||
|
self._handle_gesture(message, False)
|
||||||
|
if message["endpoint"] == "actuate/gesture/single":
|
||||||
|
self._handle_gesture(message, True)
|
||||||
|
|
||||||
|
def _handle_messages(self):
|
||||||
|
while not state.exit_event.is_set():
|
||||||
|
try:
|
||||||
|
text = self._message_queue.get(timeout=0.1)
|
||||||
|
state.is_speaking = True
|
||||||
|
self._tts_service.say(text)
|
||||||
|
except Queue.Empty:
|
||||||
|
state.is_speaking = False
|
||||||
|
except RuntimeError:
|
||||||
|
logging.error("Lost connection to Pepper. Please check if you're connected to the "
|
||||||
|
"local WiFi and restart this application.")
|
||||||
|
state.exit_event.set()
|
||||||
|
|
||||||
|
def endpoint_description(self):
|
||||||
|
"""
|
||||||
|
Extend the default endpoint description with gesture tags.
|
||||||
|
Returned during negotiate/ports so the CB knows available gestures.
|
||||||
|
"""
|
||||||
|
desc = super(ActuationReceiver, self).endpoint_description()
|
||||||
|
desc["gestures"] = GestureTags.tags
|
||||||
|
desc["single_gestures"] = GestureTags.single_gestures
|
||||||
|
return desc
|
||||||
|
|||||||
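The `_handle_messages` loop above follows a common worker pattern: block on the queue with a short timeout, mark the robot as speaking while an utterance is delivered, and treat a timeout as "queue drained, go silent". The sketch below isolates that pattern, plus the force-speech behaviour exercised by the tests later in this diff (a priority message clears everything queued before it). It is a minimal stand-in, not the robot code: `SpeechWorker` and `tts_say` are invented names, Python 3's `queue` module replaces the Python 2 `Queue` module used in the repo, and a plain callable replaces the `ALTextToSpeech` service.

```python
import queue


class SpeechWorker(object):
    """Drains queued utterances one at a time, mirroring the diff's _handle_messages loop."""

    def __init__(self, tts_say):
        self._tts_say = tts_say          # callable standing in for ALTextToSpeech.say
        self._message_queue = queue.Queue()
        self.is_speaking = False

    def put(self, text, is_priority=False):
        # A priority message clears everything queued before it,
        # so it is the next thing spoken.
        if is_priority:
            while True:
                try:
                    self._message_queue.get_nowait()
                except queue.Empty:
                    break
        self._message_queue.put(text)

    def run_once(self):
        # One iteration of the worker loop: speak if something is queued,
        # otherwise mark the robot as silent again.
        try:
            text = self._message_queue.get(timeout=0.1)
            self.is_speaking = True
            self._tts_say(text)
        except queue.Empty:
            self.is_speaking = False


spoken = []
worker = SpeechWorker(spoken.append)
worker.put("old_message_1")
worker.put("old_message_2")
worker.put("Emergency Notification", is_priority=True)
worker.run_once()
print(spoken)  # ['Emergency Notification']
```

The real loop runs `run_once` forever on a background thread until `state.exit_event` is set; the single-step version here just makes the queue semantics easy to inspect.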
@@ -89,6 +89,7 @@ class AudioSender(SocketBase):
         try:
             while not state.exit_event.is_set():
                 data = stream.read(chunk)
+                if state.is_speaking: continue  # Do not send audio while the robot is speaking
                 self.socket.send(data)
         except IOError as e:
             logger.error("Stopped listening: failed to get audio from microphone.", exc_info=e)
412 src/robot_interface/endpoints/gesture_settings.py Normal file
@@ -0,0 +1,412 @@
class GestureTags:
    tags = ["above", "affirmative", "afford", "agitated", "all", "allright", "alright", "any",
            "assuage", "attemper", "back", "bashful", "beg", "beseech", "blank",
            "body language", "bored", "bow", "but", "call", "calm", "choose", "choice", "cloud",
            "cogitate", "cool", "crazy", "disappointed", "down", "earth", "empty", "embarrassed",
            "enthusiastic", "entire", "estimate", "except", "exalted", "excited", "explain", "far",
            "field", "floor", "forlorn", "friendly", "front", "frustrated", "gentle", "gift",
            "give", "ground", "happy", "hello", "her", "here", "hey", "hi", "him", "hopeless",
            "hysterical", "I", "implore", "indicate", "joyful", "me", "meditate", "modest",
            "negative", "nervous", "no", "not know", "nothing", "offer", "ok", "once upon a time",
            "oppose", "or", "pacify", "pick", "placate", "please", "present", "proffer", "quiet",
            "reason", "refute", "reject", "rousing", "sad", "select", "shamefaced", "show",
            "show sky", "sky", "soothe", "sun", "supplicate", "tablet", "tall", "them", "there",
            "think", "timid", "top", "unless", "up", "upstairs", "void", "warm", "winner", "yeah",
            "yes", "yoo-hoo", "you", "your", "zero", "zestful"]

    single_gestures = [
        "animations/Stand/BodyTalk/Listening/Listening_1",
        "animations/Stand/BodyTalk/Listening/Listening_2",
        "animations/Stand/BodyTalk/Listening/Listening_3",
        "animations/Stand/BodyTalk/Listening/Listening_4",
        "animations/Stand/BodyTalk/Listening/Listening_5",
        "animations/Stand/BodyTalk/Listening/Listening_6",
        "animations/Stand/BodyTalk/Listening/Listening_7",
        "animations/Stand/BodyTalk/Speaking/BodyTalk_1",
        "animations/Stand/BodyTalk/Speaking/BodyTalk_10",
        "animations/Stand/BodyTalk/Speaking/BodyTalk_11",
        "animations/Stand/BodyTalk/Speaking/BodyTalk_12",
        "animations/Stand/BodyTalk/Speaking/BodyTalk_13",
        "animations/Stand/BodyTalk/Speaking/BodyTalk_14",
        "animations/Stand/BodyTalk/Speaking/BodyTalk_15",
        "animations/Stand/BodyTalk/Speaking/BodyTalk_16",
        "animations/Stand/BodyTalk/Speaking/BodyTalk_2",
        "animations/Stand/BodyTalk/Speaking/BodyTalk_3",
        "animations/Stand/BodyTalk/Speaking/BodyTalk_4",
        "animations/Stand/BodyTalk/Speaking/BodyTalk_5",
        "animations/Stand/BodyTalk/Speaking/BodyTalk_6",
        "animations/Stand/BodyTalk/Speaking/BodyTalk_7",
        "animations/Stand/BodyTalk/Speaking/BodyTalk_8",
        "animations/Stand/BodyTalk/Speaking/BodyTalk_9",
        "animations/Stand/BodyTalk/Thinking/Remember_1",
        "animations/Stand/BodyTalk/Thinking/Remember_2",
        "animations/Stand/BodyTalk/Thinking/Remember_3",
        "animations/Stand/BodyTalk/Thinking/ThinkingLoop_1",
        "animations/Stand/BodyTalk/Thinking/ThinkingLoop_2",
        "animations/Stand/Emotions/Negative/Angry_1",
        "animations/Stand/Emotions/Negative/Angry_2",
        "animations/Stand/Emotions/Negative/Angry_3",
        "animations/Stand/Emotions/Negative/Angry_4",
        "animations/Stand/Emotions/Negative/Anxious_1",
        "animations/Stand/Emotions/Negative/Bored_1",
        "animations/Stand/Emotions/Negative/Bored_2",
        "animations/Stand/Emotions/Negative/Disappointed_1",
        "animations/Stand/Emotions/Negative/Exhausted_1",
        "animations/Stand/Emotions/Negative/Exhausted_2",
        "animations/Stand/Emotions/Negative/Fear_1",
        "animations/Stand/Emotions/Negative/Fear_2",
        "animations/Stand/Emotions/Negative/Fearful_1",
        "animations/Stand/Emotions/Negative/Frustrated_1",
        "animations/Stand/Emotions/Negative/Humiliated_1",
        "animations/Stand/Emotions/Negative/Hurt_1",
        "animations/Stand/Emotions/Negative/Hurt_2",
        "animations/Stand/Emotions/Negative/Late_1",
        "animations/Stand/Emotions/Negative/Sad_1",
        "animations/Stand/Emotions/Negative/Sad_2",
        "animations/Stand/Emotions/Negative/Shocked_1",
        "animations/Stand/Emotions/Negative/Sorry_1",
        "animations/Stand/Emotions/Negative/Surprise_1",
        "animations/Stand/Emotions/Negative/Surprise_2",
        "animations/Stand/Emotions/Negative/Surprise_3",
        "animations/Stand/Emotions/Neutral/Alienated_1",
        "animations/Stand/Emotions/Neutral/AskForAttention_1",
        "animations/Stand/Emotions/Neutral/AskForAttention_2",
        "animations/Stand/Emotions/Neutral/AskForAttention_3",
        "animations/Stand/Emotions/Neutral/Cautious_1",
        "animations/Stand/Emotions/Neutral/Confused_1",
        "animations/Stand/Emotions/Neutral/Determined_1",
        "animations/Stand/Emotions/Neutral/Embarrassed_1",
        "animations/Stand/Emotions/Neutral/Hesitation_1",
        "animations/Stand/Emotions/Neutral/Innocent_1",
        "animations/Stand/Emotions/Neutral/Lonely_1",
        "animations/Stand/Emotions/Neutral/Mischievous_1",
        "animations/Stand/Emotions/Neutral/Puzzled_1",
        "animations/Stand/Emotions/Neutral/Sneeze",
        "animations/Stand/Emotions/Neutral/Stubborn_1",
        "animations/Stand/Emotions/Neutral/Suspicious_1",
        "animations/Stand/Emotions/Positive/Amused_1",
        "animations/Stand/Emotions/Positive/Confident_1",
        "animations/Stand/Emotions/Positive/Ecstatic_1",
        "animations/Stand/Emotions/Positive/Enthusiastic_1",
        "animations/Stand/Emotions/Positive/Excited_1",
        "animations/Stand/Emotions/Positive/Excited_2",
        "animations/Stand/Emotions/Positive/Excited_3",
        "animations/Stand/Emotions/Positive/Happy_1",
        "animations/Stand/Emotions/Positive/Happy_2",
        "animations/Stand/Emotions/Positive/Happy_3",
        "animations/Stand/Emotions/Positive/Happy_4",
        "animations/Stand/Emotions/Positive/Hungry_1",
        "animations/Stand/Emotions/Positive/Hysterical_1",
        "animations/Stand/Emotions/Positive/Interested_1",
        "animations/Stand/Emotions/Positive/Interested_2",
        "animations/Stand/Emotions/Positive/Laugh_1",
        "animations/Stand/Emotions/Positive/Laugh_2",
        "animations/Stand/Emotions/Positive/Laugh_3",
        "animations/Stand/Emotions/Positive/Mocker_1",
        "animations/Stand/Emotions/Positive/Optimistic_1",
        "animations/Stand/Emotions/Positive/Peaceful_1",
        "animations/Stand/Emotions/Positive/Proud_1",
        "animations/Stand/Emotions/Positive/Proud_2",
        "animations/Stand/Emotions/Positive/Proud_3",
        "animations/Stand/Emotions/Positive/Relieved_1",
        "animations/Stand/Emotions/Positive/Shy_1",
        "animations/Stand/Emotions/Positive/Shy_2",
        "animations/Stand/Emotions/Positive/Sure_1",
        "animations/Stand/Emotions/Positive/Winner_1",
        "animations/Stand/Emotions/Positive/Winner_2",
        "animations/Stand/Gestures/Angry_1",
        "animations/Stand/Gestures/Angry_2",
        "animations/Stand/Gestures/Angry_3",
        "animations/Stand/Gestures/BowShort_1",
        "animations/Stand/Gestures/BowShort_2",
        "animations/Stand/Gestures/BowShort_3",
        "animations/Stand/Gestures/But_1",
        "animations/Stand/Gestures/CalmDown_1",
        "animations/Stand/Gestures/CalmDown_2",
        "animations/Stand/Gestures/CalmDown_3",
        "animations/Stand/Gestures/CalmDown_4",
        "animations/Stand/Gestures/CalmDown_5",
        "animations/Stand/Gestures/CalmDown_6",
        "animations/Stand/Gestures/Choice_1",
        "animations/Stand/Gestures/ComeOn_1",
        "animations/Stand/Gestures/Confused_1",
        "animations/Stand/Gestures/Confused_2",
        "animations/Stand/Gestures/CountFive_1",
        "animations/Stand/Gestures/CountFour_1",
        "animations/Stand/Gestures/CountMore_1",
        "animations/Stand/Gestures/CountOne_1",
        "animations/Stand/Gestures/CountThree_1",
        "animations/Stand/Gestures/CountTwo_1",
        "animations/Stand/Gestures/Desperate_1",
        "animations/Stand/Gestures/Desperate_2",
        "animations/Stand/Gestures/Desperate_3",
        "animations/Stand/Gestures/Desperate_4",
        "animations/Stand/Gestures/Desperate_5",
        "animations/Stand/Gestures/DontUnderstand_1",
        "animations/Stand/Gestures/Enthusiastic_3",
        "animations/Stand/Gestures/Enthusiastic_4",
        "animations/Stand/Gestures/Enthusiastic_5",
        "animations/Stand/Gestures/Everything_1",
        "animations/Stand/Gestures/Everything_2",
        "animations/Stand/Gestures/Everything_3",
        "animations/Stand/Gestures/Everything_4",
        "animations/Stand/Gestures/Everything_6",
        "animations/Stand/Gestures/Excited_1",
        "animations/Stand/Gestures/Explain_1",
        "animations/Stand/Gestures/Explain_10",
        "animations/Stand/Gestures/Explain_11",
        "animations/Stand/Gestures/Explain_2",
        "animations/Stand/Gestures/Explain_3",
        "animations/Stand/Gestures/Explain_4",
        "animations/Stand/Gestures/Explain_5",
        "animations/Stand/Gestures/Explain_6",
        "animations/Stand/Gestures/Explain_7",
        "animations/Stand/Gestures/Explain_8",
        "animations/Stand/Gestures/Far_1",
        "animations/Stand/Gestures/Far_2",
        "animations/Stand/Gestures/Far_3",
        "animations/Stand/Gestures/Follow_1",
        "animations/Stand/Gestures/Give_1",
        "animations/Stand/Gestures/Give_2",
        "animations/Stand/Gestures/Give_3",
        "animations/Stand/Gestures/Give_4",
        "animations/Stand/Gestures/Give_5",
        "animations/Stand/Gestures/Give_6",
        "animations/Stand/Gestures/Great_1",
        "animations/Stand/Gestures/HeSays_1",
        "animations/Stand/Gestures/HeSays_2",
        "animations/Stand/Gestures/HeSays_3",
        "animations/Stand/Gestures/Hey_1",
        "animations/Stand/Gestures/Hey_10",
        "animations/Stand/Gestures/Hey_2",
        "animations/Stand/Gestures/Hey_3",
        "animations/Stand/Gestures/Hey_4",
        "animations/Stand/Gestures/Hey_6",
        "animations/Stand/Gestures/Hey_7",
        "animations/Stand/Gestures/Hey_8",
        "animations/Stand/Gestures/Hey_9",
        "animations/Stand/Gestures/Hide_1",
        "animations/Stand/Gestures/Hot_1",
        "animations/Stand/Gestures/Hot_2",
        "animations/Stand/Gestures/IDontKnow_1",
        "animations/Stand/Gestures/IDontKnow_2",
        "animations/Stand/Gestures/IDontKnow_3",
        "animations/Stand/Gestures/IDontKnow_4",
        "animations/Stand/Gestures/IDontKnow_5",
        "animations/Stand/Gestures/IDontKnow_6",
        "animations/Stand/Gestures/Joy_1",
        "animations/Stand/Gestures/Kisses_1",
        "animations/Stand/Gestures/Look_1",
        "animations/Stand/Gestures/Look_2",
        "animations/Stand/Gestures/Maybe_1",
        "animations/Stand/Gestures/Me_1",
        "animations/Stand/Gestures/Me_2",
        "animations/Stand/Gestures/Me_4",
        "animations/Stand/Gestures/Me_7",
        "animations/Stand/Gestures/Me_8",
        "animations/Stand/Gestures/Mime_1",
        "animations/Stand/Gestures/Mime_2",
        "animations/Stand/Gestures/Next_1",
        "animations/Stand/Gestures/No_1",
        "animations/Stand/Gestures/No_2",
        "animations/Stand/Gestures/No_3",
        "animations/Stand/Gestures/No_4",
        "animations/Stand/Gestures/No_5",
        "animations/Stand/Gestures/No_6",
        "animations/Stand/Gestures/No_7",
        "animations/Stand/Gestures/No_8",
        "animations/Stand/Gestures/No_9",
        "animations/Stand/Gestures/Nothing_1",
        "animations/Stand/Gestures/Nothing_2",
        "animations/Stand/Gestures/OnTheEvening_1",
        "animations/Stand/Gestures/OnTheEvening_2",
        "animations/Stand/Gestures/OnTheEvening_3",
        "animations/Stand/Gestures/OnTheEvening_4",
        "animations/Stand/Gestures/OnTheEvening_5",
        "animations/Stand/Gestures/Please_1",
        "animations/Stand/Gestures/Please_2",
        "animations/Stand/Gestures/Please_3",
        "animations/Stand/Gestures/Reject_1",
        "animations/Stand/Gestures/Reject_2",
        "animations/Stand/Gestures/Reject_3",
        "animations/Stand/Gestures/Reject_4",
        "animations/Stand/Gestures/Reject_5",
        "animations/Stand/Gestures/Reject_6",
        "animations/Stand/Gestures/Salute_1",
        "animations/Stand/Gestures/Salute_2",
        "animations/Stand/Gestures/Salute_3",
        "animations/Stand/Gestures/ShowFloor_1",
        "animations/Stand/Gestures/ShowFloor_2",
        "animations/Stand/Gestures/ShowFloor_3",
        "animations/Stand/Gestures/ShowFloor_4",
        "animations/Stand/Gestures/ShowFloor_5",
        "animations/Stand/Gestures/ShowSky_1",
        "animations/Stand/Gestures/ShowSky_10",
        "animations/Stand/Gestures/ShowSky_11",
        "animations/Stand/Gestures/ShowSky_12",
        "animations/Stand/Gestures/ShowSky_2",
        "animations/Stand/Gestures/ShowSky_3",
        "animations/Stand/Gestures/ShowSky_4",
        "animations/Stand/Gestures/ShowSky_5",
        "animations/Stand/Gestures/ShowSky_6",
        "animations/Stand/Gestures/ShowSky_7",
        "animations/Stand/Gestures/ShowSky_8",
        "animations/Stand/Gestures/ShowSky_9",
        "animations/Stand/Gestures/ShowTablet_1",
        "animations/Stand/Gestures/ShowTablet_2",
        "animations/Stand/Gestures/ShowTablet_3",
        "animations/Stand/Gestures/Shy_1",
        "animations/Stand/Gestures/Stretch_1",
        "animations/Stand/Gestures/Stretch_2",
        "animations/Stand/Gestures/Surprised_1",
        "animations/Stand/Gestures/TakePlace_1",
        "animations/Stand/Gestures/TakePlace_2",
        "animations/Stand/Gestures/Take_1",
        "animations/Stand/Gestures/Thinking_1",
        "animations/Stand/Gestures/Thinking_2",
        "animations/Stand/Gestures/Thinking_3",
        "animations/Stand/Gestures/Thinking_4",
        "animations/Stand/Gestures/Thinking_5",
        "animations/Stand/Gestures/Thinking_6",
        "animations/Stand/Gestures/Thinking_7",
        "animations/Stand/Gestures/Thinking_8",
        "animations/Stand/Gestures/This_1",
        "animations/Stand/Gestures/This_10",
        "animations/Stand/Gestures/This_11",
        "animations/Stand/Gestures/This_12",
        "animations/Stand/Gestures/This_13",
        "animations/Stand/Gestures/This_14",
        "animations/Stand/Gestures/This_15",
        "animations/Stand/Gestures/This_2",
        "animations/Stand/Gestures/This_3",
        "animations/Stand/Gestures/This_4",
        "animations/Stand/Gestures/This_5",
        "animations/Stand/Gestures/This_6",
        "animations/Stand/Gestures/This_7",
        "animations/Stand/Gestures/This_8",
        "animations/Stand/Gestures/This_9",
        "animations/Stand/Gestures/WhatSThis_1",
        "animations/Stand/Gestures/WhatSThis_10",
        "animations/Stand/Gestures/WhatSThis_11",
        "animations/Stand/Gestures/WhatSThis_12",
        "animations/Stand/Gestures/WhatSThis_13",
        "animations/Stand/Gestures/WhatSThis_14",
        "animations/Stand/Gestures/WhatSThis_15",
        "animations/Stand/Gestures/WhatSThis_16",
        "animations/Stand/Gestures/WhatSThis_2",
        "animations/Stand/Gestures/WhatSThis_3",
        "animations/Stand/Gestures/WhatSThis_4",
        "animations/Stand/Gestures/WhatSThis_5",
        "animations/Stand/Gestures/WhatSThis_6",
        "animations/Stand/Gestures/WhatSThis_7",
        "animations/Stand/Gestures/WhatSThis_8",
        "animations/Stand/Gestures/WhatSThis_9",
        "animations/Stand/Gestures/Whisper_1",
        "animations/Stand/Gestures/Wings_1",
        "animations/Stand/Gestures/Wings_2",
        "animations/Stand/Gestures/Wings_3",
        "animations/Stand/Gestures/Wings_4",
        "animations/Stand/Gestures/Wings_5",
        "animations/Stand/Gestures/Yes_1",
        "animations/Stand/Gestures/Yes_2",
        "animations/Stand/Gestures/Yes_3",
        "animations/Stand/Gestures/YouKnowWhat_1",
        "animations/Stand/Gestures/YouKnowWhat_2",
        "animations/Stand/Gestures/YouKnowWhat_3",
        "animations/Stand/Gestures/YouKnowWhat_4",
        "animations/Stand/Gestures/YouKnowWhat_5",
        "animations/Stand/Gestures/YouKnowWhat_6",
        "animations/Stand/Gestures/You_1",
        "animations/Stand/Gestures/You_2",
        "animations/Stand/Gestures/You_3",
        "animations/Stand/Gestures/You_4",
        "animations/Stand/Gestures/You_5",
        "animations/Stand/Gestures/Yum_1",
        "animations/Stand/Reactions/EthernetOff_1",
        "animations/Stand/Reactions/EthernetOn_1",
        "animations/Stand/Reactions/Heat_1",
        "animations/Stand/Reactions/Heat_2",
        "animations/Stand/Reactions/LightShine_1",
        "animations/Stand/Reactions/LightShine_2",
        "animations/Stand/Reactions/LightShine_3",
        "animations/Stand/Reactions/LightShine_4",
        "animations/Stand/Reactions/SeeColor_1",
        "animations/Stand/Reactions/SeeColor_2",
        "animations/Stand/Reactions/SeeColor_3",
        "animations/Stand/Reactions/SeeSomething_1",
        "animations/Stand/Reactions/SeeSomething_3",
        "animations/Stand/Reactions/SeeSomething_4",
        "animations/Stand/Reactions/SeeSomething_5",
        "animations/Stand/Reactions/SeeSomething_6",
        "animations/Stand/Reactions/SeeSomething_7",
        "animations/Stand/Reactions/SeeSomething_8",
        "animations/Stand/Reactions/ShakeBody_1",
        "animations/Stand/Reactions/ShakeBody_2",
        "animations/Stand/Reactions/ShakeBody_3",
        "animations/Stand/Reactions/TouchHead_1",
        "animations/Stand/Reactions/TouchHead_2",
        "animations/Stand/Reactions/TouchHead_3",
        "animations/Stand/Reactions/TouchHead_4",
        "animations/Stand/Waiting/AirGuitar_1",
        "animations/Stand/Waiting/BackRubs_1",
        "animations/Stand/Waiting/Bandmaster_1",
        "animations/Stand/Waiting/Binoculars_1",
        "animations/Stand/Waiting/BreathLoop_1",
        "animations/Stand/Waiting/BreathLoop_2",
        "animations/Stand/Waiting/BreathLoop_3",
        "animations/Stand/Waiting/CallSomeone_1",
        "animations/Stand/Waiting/Drink_1",
        "animations/Stand/Waiting/DriveCar_1",
        "animations/Stand/Waiting/Fitness_1",
        "animations/Stand/Waiting/Fitness_2",
        "animations/Stand/Waiting/Fitness_3",
        "animations/Stand/Waiting/FunnyDancer_1",
        "animations/Stand/Waiting/HappyBirthday_1",
        "animations/Stand/Waiting/Helicopter_1",
        "animations/Stand/Waiting/HideEyes_1",
        "animations/Stand/Waiting/HideHands_1",
        "animations/Stand/Waiting/Innocent_1",
        "animations/Stand/Waiting/Knight_1",
        "animations/Stand/Waiting/KnockEye_1",
        "animations/Stand/Waiting/KungFu_1",
        "animations/Stand/Waiting/LookHand_1",
        "animations/Stand/Waiting/LookHand_2",
        "animations/Stand/Waiting/LoveYou_1",
        "animations/Stand/Waiting/Monster_1",
        "animations/Stand/Waiting/MysticalPower_1",
        "animations/Stand/Waiting/PlayHands_1",
        "animations/Stand/Waiting/PlayHands_2",
        "animations/Stand/Waiting/PlayHands_3",
        "animations/Stand/Waiting/Relaxation_1",
        "animations/Stand/Waiting/Relaxation_2",
        "animations/Stand/Waiting/Relaxation_3",
        "animations/Stand/Waiting/Relaxation_4",
        "animations/Stand/Waiting/Rest_1",
        "animations/Stand/Waiting/Robot_1",
        "animations/Stand/Waiting/ScratchBack_1",
        "animations/Stand/Waiting/ScratchBottom_1",
        "animations/Stand/Waiting/ScratchEye_1",
        "animations/Stand/Waiting/ScratchHand_1",
        "animations/Stand/Waiting/ScratchHead_1",
        "animations/Stand/Waiting/ScratchLeg_1",
        "animations/Stand/Waiting/ScratchTorso_1",
        "animations/Stand/Waiting/ShowMuscles_1",
        "animations/Stand/Waiting/ShowMuscles_2",
        "animations/Stand/Waiting/ShowMuscles_3",
        "animations/Stand/Waiting/ShowMuscles_4",
        "animations/Stand/Waiting/ShowMuscles_5",
        "animations/Stand/Waiting/ShowSky_1",
        "animations/Stand/Waiting/ShowSky_2",
        "animations/Stand/Waiting/SpaceShuttle_1",
        "animations/Stand/Waiting/Stretch_1",
        "animations/Stand/Waiting/Stretch_2",
        "animations/Stand/Waiting/TakePicture_1",
        "animations/Stand/Waiting/Taxi_1",
        "animations/Stand/Waiting/Think_1",
        "animations/Stand/Waiting/Think_2",
        "animations/Stand/Waiting/Think_3",
        "animations/Stand/Waiting/Think_4",
        "animations/Stand/Waiting/Waddle_1",
        "animations/Stand/Waiting/Waddle_2",
        "animations/Stand/Waiting/WakeUp_1",
        "animations/Stand/Waiting/Zombie_1"]
@@ -5,6 +5,7 @@ from robot_interface.state import state
 from robot_interface.core.config import settings
 
 
 class MainReceiver(ReceiverBase):
     """
     The main receiver endpoint, responsible for handling ping and negotiation requests.
@@ -12,10 +13,12 @@ class MainReceiver(ReceiverBase):
     :param zmq_context: The ZeroMQ context to use.
     :type zmq_context: zmq.Context
 
-    :param port: The port to use.
+    :param port: The port to use, defaults to the value in `settings.agent_settings.main_receiver_port`.
     :type port: int
     """
-    def __init__(self, zmq_context, port=settings.agent_settings.main_receiver_port):
+    def __init__(self, zmq_context, port=None):
+        if port is None:
+            port = settings.agent_settings.main_receiver_port
         super(MainReceiver, self).__init__("main")
         self.create_socket(zmq_context, zmq.REP, port, bind=False)
@@ -2,6 +2,8 @@ from abc import ABCMeta
 
 import zmq
 
+from robot_interface.core.config import settings
+
 
 class SocketBase(object):
     """
@@ -59,7 +61,7 @@ class SocketBase(object):
         if bind:
             self.socket.bind("tcp://*:{}".format(port))
         else:
-            self.socket.connect("tcp://localhost:{}".format(port))
+            self.socket.connect("tcp://{}:{}".format(settings.agent_settings.control_backend_host, port))
 
     def close(self):
         """Close the ZeroMQ socket."""
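The change above makes every connecting socket target the configured Control Backend host instead of a hard-coded `localhost`; binding sockets still listen on all interfaces. The address-building rule can be shown on its own; this is a standalone sketch, where the `control_backend_host` parameter stands in for `settings.agent_settings.control_backend_host`.

```python
def zmq_address(port, bind, control_backend_host="localhost"):
    # Servers bind on every interface; clients connect out to the Control Backend.
    if bind:
        return "tcp://*:{}".format(port)
    return "tcp://{}:{}".format(control_backend_host, port)


print(zmq_address(5555, bind=True))                       # tcp://*:5555
print(zmq_address(9999, bind=False,
                  control_backend_host="not_localhost"))  # tcp://not_localhost:9999
```

This is exactly the pair of strings the integration test at the end of this diff asserts on.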
@@ -1,11 +1,13 @@
 import zmq
 import threading
 import logging
+import cv2
 
 from robot_interface.endpoints.socket_base import SocketBase
 from robot_interface.state import state
 from robot_interface.core.config import settings
 
 
 class VideoSender(SocketBase):
     """
     Video sender endpoint, responsible for sending video frames.
@@ -27,7 +29,9 @@ class VideoSender(SocketBase):
         Will not start if no qi session is available.
         """
         if not state.qi_session:
-            logging.info("No Qi session available. Not starting video loop.")
+            logging.info("No Qi session available. Starting video from webcam.")
+            thread = threading.Thread(target=self.test_video_stream)
+            thread.start()
             return
 
         video = state.qi_session.service("ALVideoDevice")
@@ -58,3 +62,29 @@ class VideoSender(SocketBase):
             self.socket.send(img[settings.video_config.image_buffer])
         except:
             logging.warn("Failed to retrieve video image from robot.")
 
+    def test_video_stream(self):
+        """
+        Test function to send video from the local webcam instead of the robot.
+        """
+        cap = cv2.VideoCapture(0)
+        if not cap.isOpened():
+            logging.error("Could not open webcam for video stream test.")
+            return
+
+        while not state.exit_event.is_set():
+            ret, frame = cap.read()
+            if not ret:
+                logging.warning("Failed to read frame from webcam.")
+                continue
+
+            cv2.waitKey(1)
+
+            small_frame = cv2.resize(frame, (320, 240), interpolation=cv2.INTER_AREA)
+            encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 70]
+            _, buffer = cv2.imencode('.jpg', small_frame, encode_param)
+
+            self.socket.send(buffer.tobytes())
+
+        cap.release()
@@ -30,6 +30,7 @@ class State(object):
         self.exit_event = None
         self.sockets = []
         self.qi_session = None
+        self.is_speaking = False
 
     def initialize(self):
         """
32 src/robot_interface/utils/get_config.py Normal file
@@ -0,0 +1,32 @@
import os
from dotenv import load_dotenv

load_dotenv()


def get_config(value, env, default, cast=None):
    """
    Small utility to get a configuration value. Returns `value` if it is not None; otherwise it tries the
    environment variable, cast with `cast`. If the environment variable is not set, it returns `default`.

    :param value: The value to check.
    :type value: Any
    :param env: The environment variable to check.
    :type env: string
    :param default: The default value to return if the environment variable is not set.
    :type default: Any
    :param cast: A function used to cast the environment variable. Must support string input.
    :type cast: Callable[[Any], Any], optional

    :return: The value, the environment variable value, or the default.
    :rtype: Any
    """
    if value is not None:
        return value

    env = os.environ.get(env, default)

    if cast is None:
        return env

    return cast(env)
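The precedence this helper implements (explicit value, then environment variable, then default, with optional casting) is easy to exercise directly. Below, the helper is reproduced without the `python-dotenv` dependency so the sketch runs standalone; the `VIDEO__FPS` variable name is borrowed from the repo's `.env.example`.

```python
import os


def get_config(value, env, default, cast=None):
    # Explicit value wins; otherwise fall back to the environment, then the default.
    if value is not None:
        return value

    env = os.environ.get(env, default)

    if cast is None:
        return env

    return cast(env)


os.environ["VIDEO__FPS"] = "30"
print(get_config(None, "VIDEO__FPS", 15, cast=int))  # 30: read from the environment
print(get_config(24, "VIDEO__FPS", 15, cast=int))    # 24: explicit value wins
print(get_config(None, "VIDEO__MISSING", 15))        # 15: default
```

Note that when the environment variable is missing, `cast` is applied to the default as well, so defaults should survive the cast (e.g. `int(15)`).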
10 test/conftest.py Normal file
@@ -0,0 +1,10 @@
from mock import patch, MagicMock

import pytest


@pytest.fixture(autouse=True)
def mock_zmq_context():
    with patch("zmq.Context") as mock:
        mock.instance.return_value = MagicMock()
        yield mock
32 test/integration/test_config.py Normal file
@@ -0,0 +1,32 @@
from mock import patch, mock

from robot_interface.core.config import Settings
from robot_interface.endpoints.main_receiver import MainReceiver


def test_environment_variables(monkeypatch):
    """
    When environment variables are set, creating settings should use these.
    """
    monkeypatch.setenv("AGENT__CONTROL_BACKEND_HOST", "some_value_that_should_be_different")

    settings = Settings()

    assert settings.agent_settings.control_backend_host == "some_value_that_should_be_different"


@patch("robot_interface.endpoints.main_receiver.settings")
@patch("robot_interface.endpoints.socket_base.settings")
def test_create_endpoint_custom_host(base_settings, main_settings):
    """
    When a custom host is given in the settings, check that an endpoint's socket connects to it.
    """
    fake_context = mock.Mock()
    fake_socket = mock.Mock()
    fake_context.socket.return_value = fake_socket
    base_settings.agent_settings.control_backend_host = "not_localhost"
    main_settings.agent_settings.main_receiver_port = 9999

    _ = MainReceiver(fake_context)

    fake_socket.connect.assert_called_once_with("tcp://not_localhost:9999")
@@ -5,6 +5,7 @@ import pytest
|
|||||||
import zmq
|
import zmq
|
||||||
|
|
||||||
from robot_interface.endpoints.actuation_receiver import ActuationReceiver
|
from robot_interface.endpoints.actuation_receiver import ActuationReceiver
|
||||||
|
from robot_interface.endpoints.gesture_settings import GestureTags
|
||||||
|
|
||||||
|
|
||||||
@pytest.fixture
|
@pytest.fixture
|
||||||
@@ -19,46 +20,109 @@ def zmq_context():
     yield context
 
 
-def test_handle_unimplemented_endpoint(zmq_context):
+def test_force_speech_clears_queue(mocker):
     """
-    Tests that the ``ActuationReceiver.handle_message`` method can
-    handle an unknown or unimplemented endpoint without raising an error.
+    Tests that a force speech message clears the existing queue
+    and places the high-priority message at the front.
     """
-    receiver = ActuationReceiver(zmq_context)
-    # Should not error
+    mocker.patch("threading.Thread")
+    mock_state = mocker.patch("robot_interface.endpoints.actuation_receiver.state")
+
+    mock_qi = mock.Mock()
+    sys.modules["qi"] = mock_qi
+
+    mock_tts_service = mock.Mock()
+    mock_state.qi_session.service.return_value = mock_tts_service
+
+    # Use Mock Context
+    mock_zmq_ctx = mock.Mock()
+    receiver = ActuationReceiver(mock_zmq_ctx)
+
+    receiver._message_queue.put("old_message_1")
+    receiver._message_queue.put("old_message_2")
+
+    assert receiver._message_queue.qsize() == 2
+
+    force_msg = {
+        "endpoint": "actuate/speech",
+        "data": "Emergency Notification",
+        "is_priority": True,
+    }
+    receiver.handle_message(force_msg)
+
+    assert receiver._message_queue.qsize() == 1
+    queued_item = receiver._message_queue.get()
+    assert queued_item == "Emergency Notification"
+
+
+def test_handle_unimplemented_endpoint(mocker):
+    """
+    Tests handling of unknown endpoints.
+    """
+    mocker.patch("threading.Thread")
+
+    # Use Mock Context
+    mock_zmq_ctx = mock.Mock()
+    receiver = ActuationReceiver(mock_zmq_ctx)
+
     receiver.handle_message({
         "endpoint": "some_endpoint_that_definitely_does_not_exist",
         "data": None,
     })
 
 
-def test_speech_message_no_data(zmq_context, mocker):
+def test_speech_message_no_data(mocker):
     """
-    Tests that the message handler logs a warning when a speech actuation
-    request (`actuate/speech`) is received but contains empty string data.
+    Tests that if the message data is empty, the receiver returns immediately
+    WITHOUT attempting to access the global robot state or session.
     """
-    mock_warn = mocker.patch("logging.warn")
+    # 1. Prevent background threads from running
+    mocker.patch("threading.Thread")
+
+    # 2. Mock the global state object
+    mock_state = mocker.patch("robot_interface.endpoints.actuation_receiver.state")
 
-    receiver = ActuationReceiver(zmq_context)
+    # 3. Create a PropertyMock to track whenever 'qi_session' is accessed
+    # We attach it to the class type of the mock so it acts like a real property
+    mock_session_prop = mock.PropertyMock(return_value=None)
+    type(mock_state).qi_session = mock_session_prop
+
+    # 4. Initialize Receiver (Mocking the context to avoid ZMQ errors)
+    mock_zmq_ctx = mock.Mock()
+    receiver = ActuationReceiver(mock_zmq_ctx)
+
+    # 5. Send empty data
     receiver.handle_message({"endpoint": "actuate/speech", "data": ""})
 
-    mock_warn.assert_called_with(mock.ANY)
+    # 6. Assertion:
+    # Because the code does `if not text: return` BEFORE `if not state.qi_session`,
+    # the state property should NEVER be read.
+    mock_session_prop.assert_not_called()
 
 
-def test_speech_message_invalid_data(zmq_context, mocker):
+def test_speech_message_invalid_data(mocker):
     """
-    Tests that the message handler logs a warning when a speech actuation
-    request (`actuate/speech`) is received with data that is not a string (e.g., a boolean).
+    Tests that if the message data is not a string, the function returns.
+    :param mocker: Description
     """
-    mock_warn = mocker.patch("logging.warn")
+    mocker.patch("threading.Thread")
 
-    receiver = ActuationReceiver(zmq_context)
+    mock_state = mocker.patch("robot_interface.endpoints.actuation_receiver.state")
+
+    mock_session_prop = mock.PropertyMock(return_value=None)
+    type(mock_state).qi_session = mock_session_prop
+
+    # Use Mock Context
+    mock_zmq_ctx = mock.Mock()
+    receiver = ActuationReceiver(mock_zmq_ctx)
+
     receiver.handle_message({"endpoint": "actuate/speech", "data": True})
 
-    mock_warn.assert_called_with(mock.ANY)
+    # Because the code does `if not text: return` BEFORE `if not state.qi_session`,
+    # the state property should NEVER be read.
+    mock_session_prop.assert_not_called()
 
 
-def test_speech_no_qi(zmq_context, mocker):
+def test_speech_no_qi(mocker):
     """
     Tests the actuation receiver's behavior when processing a speech request
     but the global state does not have an active QI session.
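The tests above verify that state is *never read* by attaching a `mock.PropertyMock` to the mock's type, as the inline comments describe. A standalone sketch of that technique (the `State` class here is a hypothetical stand-in for the module-level robot state):

```python
from unittest import mock


class State(object):
    """Stand-in for the module-level robot state (hypothetical)."""
    qi_session = None


state = State()

# A PropertyMock must live on the *type* to behave like a real property;
# only then does attribute access on the instance go through it.
session_prop = mock.PropertyMock(return_value=None)
type(state).qi_session = session_prop

value = state.qi_session  # this read is recorded as a call
assert value is None
session_prop.assert_called_once()
```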
@@ -68,16 +132,21 @@ def test_speech_no_qi(zmq_context, mocker):
     mock_qi_session = mock.PropertyMock(return_value=None)
     type(mock_state).qi_session = mock_qi_session
 
-    receiver = ActuationReceiver(zmq_context)
+    mock_tts_service = mock.Mock()
+
+    mock_zmq_ctx = mock.Mock()
+    receiver = ActuationReceiver(mock_zmq_ctx)
+    receiver._tts_service = mock_tts_service
+
     receiver._handle_speech({"endpoint": "actuate/speech", "data": "Some message to speak."})
 
-    mock_qi_session.assert_called()
+    receiver._tts_service.assert_not_called()
 
 
-def test_speech(zmq_context, mocker):
+def test_speech(mocker):
     """
     Tests the core speech actuation functionality by mocking the QI TextToSpeech
-    service and verifying that it is called correctly.
+    service and verifying that the received message is put into the queue.
     """
     mock_state = mocker.patch("robot_interface.endpoints.actuation_receiver.state")
 
@@ -88,13 +157,330 @@ def test_speech(zmq_context, mocker):
     mock_state.qi_session = mock.Mock()
     mock_state.qi_session.service.return_value = mock_tts_service
 
-    receiver = ActuationReceiver(zmq_context)
+    mock_zmq_ctx = mock.Mock()
+    receiver = ActuationReceiver(mock_zmq_ctx)
 
     receiver._tts_service = None
     receiver._handle_speech({"endpoint": "actuate/speech", "data": "Some message to speak."})
 
-    mock_state.qi_session.service.assert_called_once_with("ALTextToSpeech")
-    mock_qi.async.assert_called_once()
-    call_args = mock_qi.async.call_args[0]
-    assert call_args[0] == mock_tts_service.say
-    assert call_args[1] == "Some message to speak."
+    assert receiver._message_queue.qsize() == 1
+
+    queued_item = receiver._message_queue.get()
+    assert queued_item == "Some message to speak."
+
+
+def test_speech_priority(mocker):
+    """
+    Tests that a priority speech message is handled correctly by clearing the queue
+    and placing the priority message at the front.
+    """
+    mock_state = mocker.patch("robot_interface.endpoints.actuation_receiver.state")
+
+    mock_qi = mock.Mock()
+    sys.modules["qi"] = mock_qi
+
+    mock_tts_service = mock.Mock()
+    mock_state.qi_session = mock.Mock()
+    mock_state.qi_session.service.return_value = mock_tts_service
+
+    mock_zmq_ctx = mock.Mock()
+    receiver = ActuationReceiver(mock_zmq_ctx)
+
+    receiver._message_queue.put("old_message_1")
+    receiver._message_queue.put("old_message_2")
+
+    assert receiver._message_queue.qsize() == 2
+
+    priority_msg = {
+        "endpoint": "actuate/speech",
+        "data": "Urgent Message",
+        "is_priority": True,
+    }
+    receiver._handle_speech(priority_msg)
+
+    assert receiver._message_queue.qsize() == 1
+    queued_item = receiver._message_queue.get()
+    assert queued_item == "Urgent Message"
+
+
+def test_handle_messages_loop(mocker):
+    """
+    Tests the background consumer loop (_handle_messages) processing an item.
+    Runs SYNCHRONOUSLY to ensure coverage tools pick up the lines.
+    """
+    # Patch Thread so the real background thread NEVER starts automatically
+    mocker.patch("threading.Thread")
+
+    # Mock state
+    mock_state = mocker.patch("robot_interface.endpoints.actuation_receiver.state")
+
+    # Setup initial speaking state to False (covers "Started speaking" print)
+    mock_state.is_speaking = False
+
+    # Mock the TextToSpeech service
+    mock_tts_service = mock.Mock()
+    mock_state.qi_session.service.return_value = mock_tts_service
+
+    # Initialize receiver (Thread is patched, so no thread starts)
+    # Use Mock Context to avoid ZMQ errors
+    mock_zmq_ctx = mock.Mock()
+    receiver = ActuationReceiver(mock_zmq_ctx)
+
+    # Manually inject service (since lazy loading might handle it, but this is safer)
+    receiver._tts_service = mock_tts_service
+
+    # This ensures the while loop iterates exactly once
+    mock_state.exit_event.is_set.side_effect = [False, True]
+
+    # Put an item in the queue
+    receiver._message_queue.put("Hello World")
+
+    # RUN MANUALLY in the main thread
+    # This executes the code: while -> try -> get -> if print -> speaking=True -> say
+    receiver._handle_messages()
+
+    # Assertions
+    assert receiver._message_queue.empty()
+    mock_tts_service.say.assert_called_with("Hello World")
+    assert mock_state.is_speaking is True
+
+
+def test_handle_messages_queue_empty(mocker):
+    """
+    Tests the Queue.Empty exception handler in the consumer loop.
+    This covers the logic that resets 'state.is_speaking' to False.
+    """
+    # Prevent the real background thread from starting
+    mocker.patch("threading.Thread")
+
+    # Mock the state object
+    mock_state = mocker.patch("robot_interface.endpoints.actuation_receiver.state")
+
+    # Setup 'is_speaking' property mock
+    # We set return_value=True so the code enters the 'if state.is_speaking:' block.
+    # We use PropertyMock to track when this attribute is set.
+    type(mock_state).is_speaking = True
+
+    mock_zmq_ctx = mock.Mock()
+    receiver = ActuationReceiver(mock_zmq_ctx)
+
+    # This ensures the while loop body runs exactly once for our test
+    mock_state.exit_event.is_set.side_effect = [False, True]
+
+    # Force get() to raise Queue.Empty immediately (simulate timeout)
+    # We patch the 'get' method on the specific queue instance of our receiver
+    #mocker.patch.object(receiver._message_queue, 'get', side_effect=Queue.Empty)
+
+    # Run the loop logic manually (synchronously)
+    receiver._handle_messages()
+
+    # Final Assertion: Verify is_speaking was set to False
+    # The code execution order is: read (returns True) -> print -> set (to False)
+    # assert_called_with checks the arguments of the LAST call, which is the setter.
+    assert mock_state.is_speaking is False
+
+
+def test_handle_messages_runtime_error(mocker):
+    """
+    Tests the RuntimeError exception handler (e.g. lost WiFi connection).
+    Uses a Mock ZMQ context to avoid 'Address already in use' errors.
+    """
+    # Patch Thread so we don't accidentally spawn real threads
+    mocker.patch("threading.Thread")
+
+    # Mock the state and logging
+    mock_state = mocker.patch("robot_interface.endpoints.actuation_receiver.state")
+
+    # Use a MOCK ZMQ context.
+    # This prevents the receiver from trying to bind to a real TCP port.
+    mock_zmq_ctx = mock.Mock()
+
+    # Initialize receiver with the mock context
+    receiver = ActuationReceiver(mock_zmq_ctx)
+
+    mock_state.exit_event.is_set.side_effect = [False, True]
+
+    receiver._message_queue.put("Test Message")
+
+    # Setup: ...BUT the service raises RuntimeError when asked to speak
+    mock_tts = mock.Mock()
+    mock_tts.say.side_effect = RuntimeError("Connection lost")
+    receiver._tts_service = mock_tts
+
+    # Run the loop logic manually
+    receiver._handle_messages()
+
+    # Assertions
+    assert mock_state.exit_event.is_set.called
+
+
+def test_clear_queue(mocker):
+    """
+    Tests that the clear_queue method properly drains all items from the message queue.
+    """
+    mocker.patch("threading.Thread")
+
+    # Use Mock Context
+    mock_zmq_ctx = mock.Mock()
+    receiver = ActuationReceiver(mock_zmq_ctx)
+
+    # Populate the queue with multiple items
+    receiver._message_queue.put("msg1")
+    receiver._message_queue.put("msg2")
+    receiver._message_queue.put("msg3")
+
+    assert receiver._message_queue.qsize() == 3
+
+    # Clear the queue
+    receiver.clear_queue()
+
+    # Assert the queue is empty
+    assert receiver._message_queue.qsize() == 0
+
+
+def test_gesture_no_data(zmq_context, mocker):
+    receiver = ActuationReceiver(zmq_context)
+    receiver._handle_gesture({"endpoint": "actuate/gesture/single", "data": ""}, True)
+    # Just ensuring no crash
+
+
+def test_gesture_invalid_data(zmq_context, mocker):
+    receiver = ActuationReceiver(zmq_context)
+    receiver._handle_gesture({"endpoint": "actuate/gesture/single", "data": 123}, True)
+    # No crash expected
+
+
+def test_gesture_single_not_found(zmq_context, mocker):
+    mock_tags = mocker.patch("robot_interface.endpoints.actuation_receiver.GestureTags")
+    mock_tags.single_gestures = ["wave", "bow"]  # allowed single gestures
+
+    receiver = ActuationReceiver(zmq_context)
+    receiver._handle_gesture({"endpoint": "actuate/gesture/single", "data": "unknown_gesture"}, True)
+    # No crash expected
+
+
+def test_gesture_tag_not_found(zmq_context, mocker):
+    mock_tags = mocker.patch("robot_interface.endpoints.actuation_receiver.GestureTags")
+    mock_tags.tags = ["happy", "sad"]
+
+    receiver = ActuationReceiver(zmq_context)
+    receiver._handle_gesture({"endpoint": "actuate/gesture/tag", "data": "not_a_tag"}, False)
+    # No crash expected
+
+
+def test_gesture_no_qi_session(zmq_context, mocker):
+    mock_state = mocker.patch("robot_interface.endpoints.actuation_receiver.state")
+    mock_state.qi_session = None
+
+    mock_tags = mocker.patch("robot_interface.endpoints.actuation_receiver.GestureTags")
+    mock_tags.single_gestures = ["hello"]
+
+    receiver = ActuationReceiver(zmq_context)
+    receiver._handle_gesture({"endpoint": "actuate/gesture/single", "data": "hello"}, True)
+    # No crash, path returns early
+
+
+def test_gesture_single_success(zmq_context, mocker):
+    mock_state = mocker.patch("robot_interface.endpoints.actuation_receiver.state")
+    mock_qi = mock.Mock()
+    sys.modules["qi"] = mock_qi
+
+    # Setup gesture settings
+    mock_tags = mocker.patch("robot_interface.endpoints.actuation_receiver.GestureTags")
+    mock_tags.single_gestures = ["wave"]
+
+    mock_animation_service = mock.Mock()
+    mock_state.qi_session = mock.Mock()
+    mock_state.qi_session.service.return_value = mock_animation_service
+
+    receiver = ActuationReceiver(zmq_context)
+    receiver._handle_gesture({"endpoint": "actuate/gesture/single", "data": "wave"}, True)
+
+    mock_state.qi_session.service.assert_called_once_with("ALAnimationPlayer")
+    getattr(mock_qi, "async").assert_called_once()
+    assert getattr(mock_qi, "async").call_args[0][0] == mock_animation_service.run
+    assert getattr(mock_qi, "async").call_args[0][1] == "wave"
+
+
+def test_gesture_tag_success(zmq_context, mocker):
+    mock_state = mocker.patch("robot_interface.endpoints.actuation_receiver.state")
+    mock_qi = mock.Mock()
+    sys.modules["qi"] = mock_qi
+
+    mock_tags = mocker.patch("robot_interface.endpoints.actuation_receiver.GestureTags")
+    mock_tags.tags = ["greeting"]
+
+    mock_animation_service = mock.Mock()
+    mock_state.qi_session = mock.Mock()
+    mock_state.qi_session.service.return_value = mock_animation_service
+
+    receiver = ActuationReceiver(zmq_context)
+    receiver._handle_gesture({"endpoint": "actuate/gesture/tag", "data": "greeting"}, False)
+
+    mock_state.qi_session.service.assert_called_once_with("ALAnimationPlayer")
+    getattr(mock_qi, "async").assert_called_once()
+    assert getattr(mock_qi, "async").call_args[0][0] == mock_animation_service.runTag
+    assert getattr(mock_qi, "async").call_args[0][1] == "greeting"
+
+
+def test_handle_message_all_routes(zmq_context, mocker):
+    """
+    Ensures all handle_message endpoint branches route correctly.
+    """
+    receiver = ActuationReceiver(zmq_context)
+
+    mock_speech = mocker.patch.object(receiver, "_handle_speech")
+    mock_gesture = mocker.patch.object(receiver, "_handle_gesture")
+
+    receiver.handle_message({"endpoint": "actuate/speech", "data": "hi"})
+    receiver.handle_message({"endpoint": "actuate/gesture/tag", "data": "greeting"})
+    receiver.handle_message({"endpoint": "actuate/gesture/single", "data": "wave"})
+
+    mock_speech.assert_called_once()
+    assert mock_gesture.call_count == 2
+
+
+def test_endpoint_description(zmq_context, mocker):
+    mock_tags = mocker.patch("robot_interface.endpoints.actuation_receiver.GestureTags")
+    mock_tags.tags = ["happy"]
+    mock_tags.single_gestures = ["wave"]
+
+    receiver = ActuationReceiver(zmq_context)
+    desc = receiver.endpoint_description()
+
+    assert "gestures" in desc
+    assert desc["gestures"] == ["happy"]
+
+    assert "single_gestures" in desc
+    assert desc["single_gestures"] == ["wave"]
+
+
+def test_gesture_single_real_gesturetags(zmq_context, mocker):
+    """
+    Uses the real GestureTags (no mocking) to ensure the receiver
+    references GestureTags.single_gestures correctly.
+    """
+    # Ensure qi session exists so we pass the early return
+    mock_state = mocker.patch("robot_interface.endpoints.actuation_receiver.state")
+    mock_state.qi_session = mock.Mock()
+
+    # Mock qi.async to avoid real async calls
+    mock_qi = mock.Mock()
+    sys.modules["qi"] = mock_qi
+
+    # Mock animation service
+    mock_animation_service = mock.Mock()
+    mock_state.qi_session.service.return_value = mock_animation_service
+
+    receiver = ActuationReceiver(zmq_context)
+
+    # Pick a real gesture from GestureTags.single_gestures
+    assert len(GestureTags.single_gestures) > 0, "GestureTags.single_gestures must not be empty"
+    gesture = GestureTags.single_gestures[0]
+
+    receiver._handle_gesture(
+        {"endpoint": "actuate/gesture/single", "data": gesture},
+        is_single=True,
+    )
+
+    mock_state.qi_session.service.assert_called_once_with("ALAnimationPlayer")
+    getattr(mock_qi, "async").assert_called_once()
+    assert getattr(mock_qi, "async").call_args[0][0] == mock_animation_service.run
+    assert getattr(mock_qi, "async").call_args[0][1] == gesture
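Several of the loop tests above use `side_effect = [False, True]` on the exit event so the consumer loop body runs exactly once in the main thread. A minimal sketch of that pattern in isolation:

```python
from unittest import mock

exit_event = mock.Mock()
# Each call to is_set() consumes the next value in order:
# False -> the loop body runs once; True -> the loop exits.
exit_event.is_set.side_effect = [False, True]

iterations = 0
while not exit_event.is_set():
    iterations += 1

assert iterations == 1
```

A third call to `is_set()` would raise `StopIteration`, which is a useful safety net: it guarantees the test fails loudly if the loop runs more often than intended.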
@@ -77,7 +77,8 @@ def test_sending_audio(mocker):
 
     mock_zmq_context = mock.Mock()
     send_socket = mock.Mock()
 
+    mock_state.is_speaking = False
     # If there's something wrong with the microphone, it will raise an IOError when `read`ing.
     stream = mock.Mock()
     stream.read = _fake_read
@@ -93,6 +94,36 @@ def test_sending_audio(mocker):
     send_socket.assert_called()
 
 
+def test_no_sending_if_speaking(mocker):
+    """
+    Tests the successful sending of audio data over a ZeroMQ socket.
+    """
+    mock_choose_mic = mocker.patch("robot_interface.endpoints.audio_sender.choose_mic")
+    mock_choose_mic.return_value = {"name": u"Some mic", "index": 0L}
+
+    mock_state = mocker.patch("robot_interface.endpoints.audio_sender.state")
+    mock_state.exit_event.is_set.side_effect = [False, True]
+
+    mock_zmq_context = mock.Mock()
+    send_socket = mock.Mock()
+
+    mock_state.is_speaking = True
+
+    # If there's something wrong with the microphone, it will raise an IOError when `read`ing.
+    stream = mock.Mock()
+    stream.read = _fake_read
+
+    sender = AudioSender(mock_zmq_context)
+    sender.socket.send = send_socket
+    sender.audio.open = mock.Mock()
+    sender.audio.open.return_value = stream
+
+    sender.start()
+    sender.wait_until_done()
+
+    send_socket.assert_not_called()
+
+
 def _fake_read_error(num_frames):
     """
     Helper function to simulate an I/O error during microphone stream reading.
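The pair of audio tests above checks a mute-while-speaking gate: chunks are sent when `state.is_speaking` is `False` and dropped when it is `True`. A hedged sketch of that gate (`pump_once` is a hypothetical single iteration, not the project's actual `AudioSender` loop):

```python
from unittest import mock


def pump_once(state, stream, socket, chunk_size=1024):
    # One iteration of a hypothetical audio loop: always read the mic,
    # but only forward the chunk while the robot is not speaking,
    # so the robot does not hear (and re-process) its own voice.
    chunk = stream.read(chunk_size)
    if not state.is_speaking:
        socket.send(chunk)


state = mock.Mock(is_speaking=True)
stream, socket = mock.Mock(), mock.Mock()

pump_once(state, stream, socket)
socket.send.assert_not_called()   # speaking: chunk dropped

state.is_speaking = False
pump_once(state, stream, socket)
socket.send.assert_called_once()  # silent: chunk forwarded
```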
45
test/unit/test_get_config.py
Normal file
@@ -0,0 +1,45 @@
+from robot_interface.utils.get_config import get_config
+
+
+def test_get_config_prefers_explicit_value(monkeypatch):
+    """
+    When a direct value is provided it should be returned without reading the environment.
+    """
+    monkeypatch.setenv("GET_CONFIG_TEST", "from-env")
+
+    result = get_config("explicit", "GET_CONFIG_TEST", "default")
+
+    assert result == "explicit"
+
+
+def test_get_config_returns_env_value(monkeypatch):
+    """
+    If value is None the environment variable should be used.
+    """
+    monkeypatch.setenv("GET_CONFIG_TEST", "from-env")
+
+    result = get_config(None, "GET_CONFIG_TEST", "default")
+
+    assert result == "from-env"
+
+
+def test_get_config_casts_env_value(monkeypatch):
+    """
+    The env value should be cast when a cast function is provided.
+    """
+    monkeypatch.setenv("GET_CONFIG_PORT", "1234")
+
+    result = get_config(None, "GET_CONFIG_PORT", 0, int)
+
+    assert result == 1234
+
+
+def test_get_config_casts_default_when_env_missing(monkeypatch):
+    """
+    When the env var is missing it should fall back to the default and still apply the cast.
+    """
+    monkeypatch.delenv("GET_CONFIG_MISSING", raising=False)
+
+    result = get_config(None, "GET_CONFIG_MISSING", "42", int)
+
+    assert result == 42
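Taken together, the four tests above pin down a resolution order: explicit value, then environment variable, then default, with an optional cast applied to the result. A minimal sketch consistent with that contract (illustrative only — not the project's actual `get_config` implementation):

```python
import os


def get_config(value, env_name, default, cast=None):
    # Prefer the explicitly passed value; otherwise fall back to the
    # environment variable, then to the default.
    if value is None:
        value = os.environ.get(env_name, default)
    # Apply the cast (e.g. int) when one is given.
    if cast is not None:
        value = cast(value)
    return value


os.environ["GET_CONFIG_PORT"] = "1234"
assert get_config(None, "GET_CONFIG_PORT", 0, int) == 1234
assert get_config("explicit", "GET_CONFIG_PORT", "default") == "explicit"
assert get_config(None, "GET_CONFIG_MISSING", "42", int) == 42
```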
@@ -97,3 +97,120 @@ def test_video_receive_error(zmq_context, mocker):
     sender.video_rcv_loop(mock_video_service, "stream_name")
 
     send_socket.assert_not_called()
+
+
+def test_video_stream_camera_fail(zmq_context, mocker):
+    """
+    Test that the function logs an error and returns early if
+    the webcam cannot be opened.
+    """
+    _patch_basics(mocker)
+
+    # Mock cv2 and logging
+    mock_cv2 = mocker.patch("robot_interface.endpoints.video_sender.cv2")
+    mock_logging = mocker.patch("robot_interface.endpoints.video_sender.logging")
+
+    # Setup the mock capture to fail isOpened()
+    mock_cap = mock.Mock()
+    mock_cap.isOpened.return_value = False
+    mock_cv2.VideoCapture.return_value = mock_cap
+
+    sender = VideoSender(zmq_context)
+    sender.test_video_stream()
+
+    # Assertions
+    mock_cv2.VideoCapture.assert_called_with(0)
+
+    # Ensure the loop was never entered and cleanup didn't happen
+    assert not mock_cap.read.called
+    assert not mock_cap.release.called
+
+
+def test_video_stream_read_fail(zmq_context, mocker):
+    """
+    Test that the function logs a warning and continues the loop
+    if a specific frame fails to read.
+    """
+    _patch_basics(mocker)
+    _patch_exit_event(mocker)  # Run loop exactly once
+
+    mock_cv2 = mocker.patch("robot_interface.endpoints.video_sender.cv2")
+    mock_logging = mocker.patch("robot_interface.endpoints.video_sender.logging")
+
+    # Setup capture to open successfully, but fail the read()
+    mock_cap = mock.Mock()
+    mock_cap.isOpened.return_value = True
+    # Return (False, None) simulating a failed frame read
+    mock_cap.read.return_value = (False, None)
+    mock_cv2.VideoCapture.return_value = mock_cap
+
+    sender = VideoSender(zmq_context)
+    # Mock the socket to ensure nothing is sent
+    sender.socket = mock.Mock()
+
+    sender.test_video_stream()
+
+    # Ensure we skipped the processing steps
+    assert not mock_cv2.resize.called
+    assert not sender.socket.send.called
+
+    # Ensure cleanup happened at the end
+    mock_cap.release.assert_called_once()
+
+
+def test_video_stream_success(zmq_context, mocker):
+    """
+    Test the happy path: Frame read -> Resize -> Encode -> Send.
+    """
+    _patch_basics(mocker)
+    _patch_exit_event(mocker)  # Run loop exactly once
+
+    mock_cv2 = mocker.patch("robot_interface.endpoints.video_sender.cv2")
+
+    # Setup constants usually found in cv2
+    mock_cv2.IMWRITE_JPEG_QUALITY = 1
+    mock_cv2.INTER_AREA = 2
+
+    # Setup capture to work perfectly
+    mock_cap = mock.Mock()
+    mock_cap.isOpened.return_value = True
+    fake_frame = "original_frame_data"
+    mock_cap.read.return_value = (True, fake_frame)
+    mock_cv2.VideoCapture.return_value = mock_cap
+
+    # Setup Resize and Encode
+    mock_cv2.resize.return_value = "small_frame_data"
+
+    # Mock buffer behavior
+    mock_buffer = mock.Mock()
+    mock_buffer.tobytes.return_value = b"encoded_bytes"
+    # imencode returns (retval, buffer)
+    mock_cv2.imencode.return_value = (True, mock_buffer)
+
+    sender = VideoSender(zmq_context)
+    sender.socket = mock.Mock()
+
+    sender.test_video_stream()
+
+    # Assertions
+    # 1. Check waitKey (the 1ms delay)
+    mock_cv2.waitKey.assert_called_with(1)
+
+    # 2. Check Resize logic
+    mock_cv2.resize.assert_called_with(
+        fake_frame,
+        (320, 240),
+        interpolation=mock_cv2.INTER_AREA
+    )
+
+    # 3. Check Encode logic
+    mock_cv2.imencode.assert_called_with(
+        '.jpg',
+        "small_frame_data",
+        [mock_cv2.IMWRITE_JPEG_QUALITY, 70]
+    )
+
+    # 4. Check Socket Send
+    sender.socket.send.assert_called_with(b"encoded_bytes")
+
+    # 5. Check Cleanup
+    mock_cap.release.assert_called_once()
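The happy-path test above describes a read -> resize -> JPEG-encode -> send pipeline, exercised by patching the module-level `cv2`. The same idea can be sketched without OpenCV installed by passing the `cv2` module in as a parameter; `send_frame` here is a hypothetical one-iteration helper, not the project's actual `VideoSender` loop:

```python
from unittest import mock


def send_frame(cv2, cap, socket, size=(320, 240), quality=70):
    # One iteration of a hypothetical streaming loop:
    # read a frame, shrink it, JPEG-encode it, ship the raw bytes.
    ok, frame = cap.read()
    if not ok:
        return False
    small = cv2.resize(frame, size, interpolation=cv2.INTER_AREA)
    ok, buffer = cv2.imencode('.jpg', small, [cv2.IMWRITE_JPEG_QUALITY, quality])
    if ok:
        socket.send(buffer.tobytes())
    return True


# Stand-ins for cv2, the capture device, and the ZMQ socket.
fake_cv2 = mock.Mock()
cap = mock.Mock()
cap.read.return_value = (True, "frame")
buf = mock.Mock()
buf.tobytes.return_value = b"jpeg"
fake_cv2.imencode.return_value = (True, buf)
socket = mock.Mock()

send_frame(fake_cv2, cap, socket)
socket.send.assert_called_with(b"jpeg")
```

Because the function takes `cv2` as an argument, swapping the real library for a mock (as the tests do via `mocker.patch`) needs no patching at all here.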