Target selection¶
At any given time, you can change the target the TrackMan device aims at.
This is done by requesting an image from the camera and pointing at the target in that image. A convenience method can then be invoked to convert the selected pixel to a 3D coordinate, which can be set as the new target.
Image Retrieval¶
To get an image, the camera must be set into target selection mode.
This is done by sending a POST request to /Setup with the following JSON:
{
"Camera": {
"IsCapturing": true,
"ActiveProfile": "1"
},
"Snapshots": {
"IsEnabled": true
}
}
With this setup applied, the camera is in target selection mode and snapshots can be requested.
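For illustration, here is a minimal sketch of enabling target selection mode from Python. It assumes the requests package and a hypothetical base URL for the device's HTTP API; adjust both to your setup.

import requests

BASE = "http://localhost:8080"  # hypothetical base URL of the device's HTTP API

setup_body = {
    "Camera": {
        "IsCapturing": True,
        "ActiveProfile": "1"
    },
    "Snapshots": {
        "IsEnabled": True
    }
}

# POST the setup to put the camera into target selection mode.
response = requests.post(f"{BASE}/Setup", json=setup_body)
response.raise_for_status()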
HTTP GET¶
Snapshot¶
Acquires a still image (snapshot) if possible, optionally together with any associated metadata. The snapshot image is delivered in JPEG format. If metadata is included, the response is a multipart response with the metadata as JSON. The snapshot may be a previously cached image or a freshly taken one; it is up to the system whether it delivers a cached image or waits for a new one to be taken. In addition, caching behaviour can be controlled via ordinary HTTP means using the usual HTTP header fields, such as Cache-Control, If-Modified-Since, If-Match, If-Unmodified-Since, etc. If a snapshot image cannot currently be obtained (feature unavailable), the method fails with an appropriate HTTP error.
It is also possible to get only the JPEG image data without any additional metadata, which effectively disables the multipart response delivery. The choice is made by ordinary HTTP means using the Accept header field: if accepting anything (the default), image/*, or just image/jpeg, only the JPEG image data is returned; only when specifically accepting multipart/mixed (or multipart/*) is the metadata included as well. Here is a simple indication of the HTTP response layout when metadata is included (following RFC 2046):
...
Content-Type: multipart/mixed; boundary="boundaryZYX"
Content-Length: <value>
--boundaryZYX
Content-Type: image/jpeg
Content-Length: <value>
... [Raw binary JPEG data] ...
--boundaryZYX
Content-Type: application/json
Content-Length: <value>
{ ... [Image metadata JSON] ...}
--boundaryZYX--
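As a sketch of consuming this response, the following Python snippet requests a snapshot with Accept: multipart/mixed and splits the multipart body into the JPEG image and the JSON metadata using the standard library email parser. The base URL is an assumption; adjust it to your setup.

import json
import requests
from email.parser import BytesParser
from email.policy import default

BASE = "http://localhost:8080"  # hypothetical base URL of the device's HTTP API

# Ask for multipart/mixed so the metadata is included alongside the JPEG.
response = requests.get(f"{BASE}/Snapshot", headers={"Accept": "multipart/mixed"})
response.raise_for_status()

# Prepend the Content-Type header so the stdlib email parser can split the parts.
mime_bytes = f"Content-Type: {response.headers['Content-Type']}\r\n\r\n".encode() + response.content
message = BytesParser(policy=default).parsebytes(mime_bytes)

jpeg_bytes = None
metadata = None
for part in message.iter_parts():
    if part.get_content_type() == "image/jpeg":
        jpeg_bytes = part.get_payload(decode=True)            # raw JPEG bytes
    elif part.get_content_type() == "application/json":
        metadata = json.loads(part.get_payload(decode=True))  # image metadata

Requesting with Accept: image/jpeg (or the default) instead returns just the JPEG body, which can then be read directly from the response content.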
Utils/ConvertPixelPositions¶
Converts a 2D pixel position and an absolute 3D distance to a 3D position in RCS for the given camera metadata. The parameters for the method are given in JSON format and include the (relevant parts of the) image/video metadata and a list of positions to be converted. Each position is defined by a 2D pixel position and an absolute 3D distance (in RCS units). The pixel position is given as an absolute position within the (virtual) image area defined by the accompanying image metadata. The pixel position range is thus ([0;width-1],[0;height-1]), where (0,0) is the upper-left corner of the image; floating-point values are allowed. The 3D distance is given in the RCS unit of meters. The list can contain anything from one to many positions, which are all converted in the same operation. The JSON body for the method is thus defined as:
{
"Metadata": {
// The relevant image metadata ...
},
"PixelPositions": [ // array of:
{
"Position": [ <number_X>, <number_Y> ],
"Distance3D": <number>
},
// ...
]
}
Upon successful conversion, the method returns a list of the converted 3D RCS positions in JSON format:
{
"Positions3D": [ // array of:
{
"Position": [ <number_X>, <number_Y>, <number_Z> ]
},
//...
]
}
If there is an error in any input position, no converted positions are returned: either all positions are converted without error, or none are. The list of positions to be converted is implicitly ordered by the array indices in the JSON format. The output list of converted positions follows the same ordering and has the same number of array elements (i.e. the same length). Thus a converted result at JSON array index i comes from the conversion of the input position at that same index.
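A minimal sketch of calling this method from Python is shown below, assuming the requests package, a hypothetical base URL, and that snapshot_metadata holds the metadata JSON returned with the snapshot; the pixel coordinates and distance are purely illustrative.

import requests

BASE = "http://localhost:8080"  # hypothetical base URL of the device's HTTP API

snapshot_metadata: dict = {}  # fill in with the metadata returned alongside the snapshot

body = {
    "Metadata": snapshot_metadata,
    "PixelPositions": [
        {
            "Position": [812.5, 431.0],  # selected pixel (x, y), illustrative values
            "Distance3D": 150.0          # absolute distance to the point, in meters
        }
    ]
}

response = requests.post(f"{BASE}/Utils/ConvertPixelPositions", json=body)
response.raise_for_status()

# One output element per input element, in the same order.
target_rcs = response.json()["Positions3D"][0]["Position"]  # [x, y, z] in RCS meters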
Utils/Convert3DPositions¶
If you need to draw something in the image (such as a target flag or tee position) whose 3D position relative to the radar is known, that 3D position can be converted into a pixel position.
Converts absolute 3D positions in RCS to pixel positions for the given camera metadata. The parameters for the method are given in JSON format and include the (relevant parts of the) image/video metadata and a list of 3D positions to be converted. Each position is defined by its absolute 3D position in RCS units. The list can contain anything from one to many positions, which are all converted in the same operation. The JSON body for the method is thus defined as:
{
"Metadata": {
// The relevant image/video meta data
},
"Positions3D": [ // array of:
{
"Position": [ <number_X>, <number_Y>, <number_Z> ]
},
// ...
]
}
Upon successful conversion, the method returns a list of the converted 2D pixel positions in JSON format:
{
"PixelPositions": [ // array of:
{
"Position": [ <number_X>, <number_Y> ]
},
// ...
]
}
The same error semantics and the same ordering of the input and output arrays apply as for the previous method.
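Conversely, here is a sketch of the reverse conversion in Python, under the same assumptions about the base URL, the requests package, and the metadata variable; the 3D coordinates are illustrative only.

import requests

BASE = "http://localhost:8080"  # hypothetical base URL of the device's HTTP API

snapshot_metadata: dict = {}  # fill in with the metadata returned alongside the snapshot/video

body = {
    "Metadata": snapshot_metadata,
    "Positions3D": [
        {
            "Position": [0.0, 0.0, 180.0]  # known 3D point in RCS meters, illustrative values
        }
    ]
}

response = requests.post(f"{BASE}/Utils/Convert3DPositions", json=body)
response.raise_for_status()

# One pixel position per input position, in the same order.
flag_pixel = response.json()["PixelPositions"][0]["Position"]  # [x, y] in image pixels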