Commit 698e74da authored by Alica Kandler, parent 95417c2c, merge request !361: Calibration & Planning/Calibration (393 additions, 47 deletions).
# Extrinsic Calibration
After the intrinsic calibration is done, you can move on to the extrinsic calibration.
For this, you first need to prepare the coordinate points measured during the experiments and, if necessary, prepare the
corresponding images.
## Image & Point File Preparation
If you decided on a [3D calibration](/planning/calibration.md) where you place e.g. a pole on each point on the ground,
you need to create one image where all the points are visible at once. For this, it is helpful to combine cutouts of
the processed points inside one image (Figure 1). A free program such as [Gimp](https://gimp.org) can be used to create such an image.
:::{figure-md} extrinsic_all_enlarged
![extrinsic_all_enlarged](images/extrinsic_all_enlarged.png)
Figure 1: Enlargement of the combined image of all calibration points processed during the experiments.
:::
Once the combined image has been created, you have to write down the coordinates assigned to each point during the experiment.
These coordinates need to be collected in a text editor and saved as a `.3dc` file. It does not matter where the point
(0,0) lies; however, the relations of the points to one another must be represented correctly.
The first line of the file contains the total number of coordinate points in the file. Starting from the second line,
all points should be listed by their X, Y and Z coordinates in centimeters. You can view an example of
our coordinate system and the corresponding points in Figure 2.
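As a sketch, such a point file could be generated from a list of measured coordinates; the point list and file name below are illustrative examples, not values from an actual experiment:

```python
# Write a .3dc point file: the first line is the point count,
# followed by one "X Y Z" line per point (all in centimeters).
points = [
    (-200, 0, 0), (0, 0, 0), (200, 0, 0),       # floor points (Z = 0)
    (-200, 0, 200), (0, 0, 200), (200, 0, 200), # top of the 200 cm pole
]

lines = [str(len(points))]
lines += [f"{x} {y} {z}" for x, y, z in points]
content = "\n".join(lines)

with open("points.3dc", "w") as f:  # example file name
    f.write(content)
```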
<br>
:::{Note}
The markings in the picture are solely for your orientation - you do not have to recreate this!
<br>
Since we have chosen to mark a coordinate system on the floor and create a 3D system by holding a pole of known length
(200 centimeters) on each point, our Z coordinates are either 0 (floor) or 200 (top of pole).
:::
<br>
:::{figure-md} extrinsic_grid_and_points
![extrinsic_grid_and_points](images/extrinsic_grid_and_points.png){width=950}
Figure 2: Example of a possible coordinate system and the corresponding point file.
:::
After you have created your point file, save it with the file extension `.3dc`. This is the file format that can be loaded
into PeTrack.
## PeTrack Workflow
With the combined image of all coordinate points and the point file created, you can now perform the extrinsic calibration
in PeTrack. For that, open the image of all combined coordinate points (Figure 1) in PeTrack or drag and drop it into the
[tab view](/user_interface).
:::{Tip}
To avoid shifting the coordinate system while clicking and dragging the mouse in the [video tab](/user_interface) it is
recommended to check the `fix` checkbox in the `coordinate system` and in the `alignment grid` section
at the bottom of the `calibration tab`.
:::
Now all coordinate points that are listed in the points file need to be selected in the combined image.
<br>
You need to select the points **in the same order** as they appear in the points file!
<br>
You can select the points with `Ctrl + double click left mouse button`. Select the points as accurately as possible
by zooming into the image. If you are unhappy with a selected point, you can deselect it with
`Ctrl + double click right mouse button`. <br>
:::{Tip}
If the green circles that appear around the chosen points are too large for your taste, go to the `tracking` tab in the
[tab view](/user_interface) and scroll to the `path` section on the page. Here you can uncheck the `head size` checkbox
to the right of `show current point` and then enter a smaller value of your own choosing. This value
determines the size of the green circles.
:::
Once you have selected all points in the same order as your points file, your chosen points should look similar to
the example in Figure 3.
:::{figure-md} extrinsic_select_points
![extrinsic_select_points](images/extrinsic_select_points.png){width=950}
Figure 3: Example of selected points on the combined image in PeTrack.
:::
Now you can click on `load` in the `extrinsic parameters` section of the `calibration tab`. Navigate to your `points.3dc`
file and select it. Now click on `fetch` and PeTrack will assign the pixel coordinates of your selected points on the
screen to the real-world coordinates written down in the `points.3dc` file.
<br>
Note that once you click on `fetch`, your selected points will disappear from the screen.
<br>
Now you have the option to view the error of your extrinsic calibration by clicking `error`. This will give you an idea
of e.g. how well you prepared your coordinate system and the accuracy of your point selection in PeTrack. If you are happy
with the results, you can click `save`, and the pixel coordinates that were matched to the real-world coordinates will also
be saved in the `points.3dc` file.
<br>
In case you are unhappy with the outcome, you can redo the calibration by selecting new points and loading the
original `points.3dc` file again.
<br>
:::{Tip}
You can view the selected and calculated calibration points by checking the `calibration points` checkbox in the `coordinate system`
section. If you notice a point that is particularly badly aligned with the calibrated value, you can redo the calibration and exclude
this point. This will minimize the error of the extrinsic calibration.
:::
After you are finished with the calibration steps, it is recommended that you save the PeTrack project with the current calibration
status (intrinsic & extrinsic calibration). In case something goes wrong during the following steps, you have a project
you can return to and do not have to redo everything from the start.
:::{Tip}
To avoid accidentally changing the numbers in the `calibration tab` by scrolling, you can check the `immutable` checkbox at the top right
of all the sections on the page. This can save you a lot of trouble going forward!
:::
Example `points.3dc` file added with this commit: the first line gives the number of points; each following line lists the X, Y and Z world coordinates in centimeters, followed by the two pixel coordinates that PeTrack appends on `save`.

```
9
-200 0 0 734.434 345.028
0 0 0 1005.5 340.418
200 0 0 1286.75 332.926
-100 100 0 867.95 201.331
100 100 0 1144.59 194.799
0 200 0 1003.77 58.017
-200 0 200 613.597 189.804
0 0 200 1015.68 180.007
200 0 200 1431.6 165.022
```
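For scripted post-processing, such a saved file can be read back with a small parser; the helper below is a hypothetical sketch, not part of PeTrack:

```python
def read_3dc(text):
    """Parse a .3dc file: the first line is the point count; each
    following line holds X Y Z (cm) and, after saving in PeTrack,
    the matched pixel coordinates px py."""
    lines = text.strip().splitlines()
    n = int(lines[0])
    world, pixel = [], []
    for line in lines[1:1 + n]:
        vals = [float(v) for v in line.split()]
        world.append(tuple(vals[:3]))
        if len(vals) == 5:  # pixel coordinates present after save
            pixel.append(tuple(vals[3:]))
    return world, pixel

example = """9
-200 0 0 734.434 345.028
0 0 0 1005.5 340.418
200 0 0 1286.75 332.926
-100 100 0 867.95 201.331
100 100 0 1144.59 194.799
0 200 0 1003.77 58.017
-200 0 200 613.597 189.804
0 0 200 1015.68 180.007
200 0 200 1431.6 165.022"""
world, pixel = read_3dc(example)
```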

# Intrinsic Calibration
## Image Preparation
During the experiments, a [calibration pattern](/planning/calibration.md) was filmed with the same camera
settings as used during the experiment. In order to use this pattern for the intrinsic calibration, screenshots
have to be taken from the video. A free tool such as [Gimp](https://www.gimp.org/) can be used to
take screenshots with the same pixel dimensions as the video recording.
:::{Important}
Make sure to select images that together fill out the entire area of interest from the video! Keep the images per
side balanced so a particular view is not overrepresented. [More Information...](/planning/calibration.md)
:::
## PeTrack Workflow
After opening PeTrack, the calibration tab will be open in the [tab view](/user_interface).
For the intrinsic calibration we will focus on the `intrinsic parameters` section in the middle
of the calibration tab.
:::{figure-md} intrinsic-calib-section
![intrinsic_calib_section](images/intrinsic_calibration_section.png){width=300px}
Intrinsic calibration section in PeTrack
:::
:::{Tip}
To be able to see the changes that occur or the quality of the calibration, it can be helpful to load e.g.
a video of the experiments into PeTrack. For that, drag a video into the [tab view](/user_interface)
of PeTrack.
:::
<br>
To select the images for the intrinsic calibration, click on the `files` button at the bottom of the
`intrinsic parameters` section. Now select your intrinsic calibration images from your local storage.
After the images have been selected, click on the `auto` button at the bottom of the same section.
Clicking the `auto` button starts the intrinsic calibration.
During the calculation you might see your calibration images appear in the video view with a
colored grid over them.
<br>
After the calculation is done, your initially selected video will reappear in the video view.
Now select the `apply` checkbox at the top of the `intrinsic parameters` section and you will see the
intrinsic calibration being applied. You can check the quality of the calibration by checking whether straight lines in the
original setup are now displayed as straight lines in PeTrack as well. If you have used a wide-angle lens, the fisheye
effect of the lens should be gone as well.
:::{figure-md} pre_post_intrinsic
![pre_post_intrinsic](images/pre_post_intrinsic.jpg)
Pre intrinsic calibration vs. Post intrinsic calibration.
:::
If you are not happy with the outcome of the calibration you can load different images and trigger
a new calculation of the calibration. Make sure that your area of interest in the video is well straightened before moving on.
:::{Important}
Note that you have to finalize your intrinsic calibration before moving on!
<br>
The extrinsic calibration can only be performed if an intrinsic calibration is loaded and applied!
:::
# Performing the Intrinsic and Extrinsic Calibration
## Intrinsic Calibration
### Preparation
To perform the intrinsic calibration you first need to prepare your [cameras](/planning/camera.md) and gather your tools.
Only when the camera settings for your experiment have been finalized can you perform the intrinsic calibration.
Regarding tools, we recommend using a [chessboard pattern](https://jugit.fz-juelich.de/ped-dyn-emp/petrack/-/wikis/uploads/79770f9bbf41b6bebe9aeabfa5e10e05/pattern.pdf)
for the calibration, printing it, and fixing it on e.g. a [sturdy surface](https://en.wikipedia.org/wiki/Sandwich_panel).
<br>
:::{Note}
The pattern should be completely flat and without wrinkles for the calibration so printing it on a piece of paper will not
be sufficient!
:::
### Performance
When performing the calibration it is good to know that only the area of the chessboard extending from the second row of
squares inward is used for the calculation (Figure 1).
:::{figure-md} chessboard_border
![chessboard_border](images/chessboard_border.jpg)
Figure 1: Only the area inside the red rectangle will later be used for the calculation of the intrinsic calibration.
:::
Now record a short video of the chessboard pattern while trying to bring the area later used for the calculation (Figure 1)
as close as possible to the edges of your camera view. Move the chessboard **slowly** and cover one side after another
until the pattern has, in total, covered the entire view (Figure 2).
:::{figure-md} chessboard_coverage
![chessboard_coverage](images/chessboard_coverage.jpg)
Figure 2: Screenshots from a calibration video focusing on specific areas of the camera view. In total, they cover the entire
view. From top left to bottom right, the focus lies on the following areas of the view: top left, top right, bottom left, bottom right.
:::
The reason to cover the entire camera view is that the calibration will only be applied to the covered area. If only the
center is covered, the center will be well calibrated but the border region will not!
Additionally, try to select an equal number of images covering each side of the camera view, so the calibration will be
applied to the entire view equally. You can find examples of calibration images covering the entire view, only the center,
and mainly the right side of the view in Figure 3.
:::{figure-md} calibration_examples
![calibration_examples](images/calibration_examples.jpg)
Figure 3: 1a: Overlay of calibration images covering entire view, 1b: Good overall calibration result; 2a: Overlay of
calibration images covering only the center, 2b: Good calibration result in the center, poor result around the border;
3a: Overlay of calibration images heavily covering the right side, 3b: Good calibration result on the right side,
poor quality on the left side of the image.
:::
:::{Tip}
Since it is best to record the calibration video on site at the experiments, take your time while recording the video.
Move extra slowly and even move the area used for the calculation out of the camera view. When taking images from the video
you will then be able to select the perfect moment in time when the pattern is closest to the border.
:::
## Extrinsic Calibration
### Preparation
In addition to the intrinsic calibration, an extrinsic calibration must also be performed on site. For this we recommend
using a sturdy pole, e.g. a ranging pole with a level attached to it (Figure 4). Additionally, a cross laser, measuring
rods and adhesive points can be of great help.
:::{figure-md} ranging_pole_level
![ranging_pole_level](images/ranging_pole_level.png)
Figure 4: Example of a ranging pole and a level that can be attached to the pole.
:::
:::{Tip}
The pole should be about 2 meters tall to still be close to the head height of pedestrians while at the same time ensuring
that the person performing the calibration does not accidentally cover the top of the pole.
:::
Before performing the calibration you should decide on a calibration grid that will be used for the calibration. This
grid should be spread over the experimental area and can be prepared by e.g. putting adhesive points on known points
on the ground. It is important to place the points very accurately, because even small deviations can cause a big error
over the large distances of the experimental area. You can view an example of a 2 meter by 2 meter coordinate grid in Figure 5.
:::{figure-md} coordinate_grid
![coordinate_grid](images/coordinate_grid.png)
Figure 5: Example of a 2m x 2m coordinate grid. Each placed coordinate point is circled in red.
:::
:::{Note}
Your coordinate grid should have a size that fits the experimental setup. A second option is to measure out objects that
are in your experimental area and use them as your coordinate points. However, sometimes it gets tricky to have them
cover the entire experimental area and not only the edges.
:::
<br>
Create a sketch of your coordinate grid including measurements. Also note down the time at which you are starting the
extrinsic calibration. This will help you to locate the calibration, especially in longer video recordings.
### Performance
The aim of the coordinate grid is to log known 3D points in the experimental setup. The adhesive points on the ground
already provide known 2D points; by placing the ranging pole (of known height) on them, we add points in the third dimension.
Use the level attached to the pole to check the alignment. When you are happy with the alignment, make e.g. a rapid
movement with the pole. Later, when taking images from the video, you can identify the point in time when the pole was
perfectly aligned by navigating to the last frame before that rapid movement.
<br>
Walk through your coordinate grid and log each calibration point to perform the extrinsic calibration.
:::{Tip}
While performing the calibration, make sure to not stand in between the pole and the camera.
This will help you to avoid blocking the view to the point on the ground with your feet or the top of the pole with your head.
:::
# Camera Selection and Parameter Settings
The cameras with the corresponding lens are the centerpiece for collecting the raw data. The selection of the camera
model and lens and the setting of the parameters must therefore be carried out very carefully and, if possible, tested
in the planned or similar environment.
The aim of PeTrack is to determine the head trajectory of each person as accurately as possible. The perspective view of
a camera can occlude people from each other or from the surrounding structures. Here the vertical view on a crowd of
people from above minimizes this **occlusion**. A large focal length or a small **angle of view** further reduce the
negative influence of the perspective view. To cover the experimental area, the mounting height must be increased if the
angle of view is reduced (or an overlapping camera grid and a subsequent data combination is needed). The greater the
difference in persons’ size, the greater the occlusion, so that it may make sense to limit the subject acquisition to a
range of person sizes (if this does not bias the study results).
The larger the angle of view, the more important it is to take the **height of the person** into account. The error,
which increases towards the border of the image, can be seen in [Figure 1](#persView). If the angle of view is small, it
may be sufficient to assume the average person height of the test subjects. Otherwise, the size of the person must be
measured, e.g. using stereo cameras or individual coding. The assignment can be realized by color-coded height classes
or by individual codes using questionnaires that include the person's height
(see [markers](/recognition/recognition.md)). The person size must be specified relative to the coordinate system of
the [extrinsic calibration](/calibration/extrinsic_calibration.md). If the origin is on the ground, the height of the
person including shoes must be indicated. The varying head height caused by the bobbing movement of walking, as well as
non-planar movements, e.g. on stairs, can be captured using stereo cameras.
::::{figure-md} persView
For a difference in size between two persons $d_h$ depending on the angle $\alpha$ to the camera plumb line or optical
axis, the error in calculating the position in the movement plane is $e_h = |d_h\tan\alpha|$.
::::
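To get a feel for the magnitude of this error, the formula can be evaluated for an assumed height difference and viewing angle (the numbers are illustrative, not from the source):

```python
import math

def position_error(d_h_cm, alpha_deg):
    """Error e_h = |d_h * tan(alpha)| in the movement plane for a
    height difference d_h (cm) at angle alpha to the optical axis."""
    return abs(d_h_cm * math.tan(math.radians(alpha_deg)))

# Assumed example: 20 cm height difference, 30 degrees off the plumb line
e = position_error(20, 30)  # ~11.5 cm positional error
```

Directly below the camera ($\alpha = 0$) the error vanishes, which is why the vertical view is preferred.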
The **image resolution** must be sufficient for the features to be extracted. If only colored caps must be localized,
the resolution can be much lower than if codes applied to the caps have to be read
(see [markers](/recognition/recognition.md)). For the code marker within a sharp image, at least 3 pixels, or better 4
pixels, in each direction per marker element or bit are required.
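A rough back-of-the-envelope estimate of this requirement can be sketched as follows; the field width and marker element size are assumed example values, not recommendations from the source:

```python
# Estimate the horizontal image resolution needed so that each
# marker element covers at least 4 pixels (assumed example values).
field_width_m = 10.0     # width of the observed area (assumption)
element_size_cm = 2.0    # size of one marker element/bit (assumption)
pixels_per_element = 4   # recommended minimum is 3, better 4

required_px = field_width_m * 100 / element_size_cm * pixels_per_element
# -> 2000 px across: a 1920 px (Full HD) sensor would be borderline here
```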
The **frame rate** is sufficient with standard rates of 24&#160;fps to 30&#160;fps, as a walking person moves at most
about 7&#160;cm between consecutive images and the movement between frames can be assumed to be linear. For running people,
increasing the frame rate can improve tracking.
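The displacement per frame is simply walking speed divided by frame rate; the speed below is an assumed fast walking pace used for illustration:

```python
# Displacement between consecutive frames at standard frame rates,
# for an assumed fast walking speed of about 1.7 m/s.
speed_cm_per_s = 170.0

step_cm = {fps: speed_cm_per_s / fps for fps in (24, 25, 30)}
# 24 fps -> ~7.1 cm, 30 fps -> ~5.7 cm between consecutive frames
```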
The **focus** must be at the subjects' head height and is usually easier to adjust on the floor using an object at the
distance of the later heads. A small image sensor in cameras usually results in a large depth of field and makes it
easier to find or set the focal point.
The **aperture size** should be set to a low value for a large depth of field. The sharpness is usually lower at the
border of the image and must be tested there to see whether the marker elements to be detected can be read out. Both
properties or settings that increase sharpness (small image sensor, small aperture size) have the disadvantage that the
amount of light collected is reduced as a result.
A fast **shutter speed** or short **exposure time** also makes the image sharper as a whole and reduces motion blur, but
also results in a darker image. An exposure time of 1/150&#160;s is sufficient for people walking in crowds and
1/500&#160;s for people running. It is also important to ensure that the shutter speed matches the ambient light
conditions (see below).
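The motion blur in the image plane is roughly speed times exposure time; the speeds below are assumed values chosen to illustrate why the recommended exposure times scale with movement speed:

```python
# Motion blur = speed * exposure time (assumed walking/running speeds).
def blur_cm(speed_m_per_s, exposure_s):
    """Distance moved during one exposure, in centimeters."""
    return speed_m_per_s * 100 * exposure_s

walk = blur_cm(1.5, 1 / 150)  # ~1 cm blur for walking at 1/150 s
run = blur_cm(5.0, 1 / 500)   # ~1 cm blur for running at 1/500 s
```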
A low **gain** or a low **ISO value** results in a sharper image, but may have to be increased due to the set aperture
size and shutter speed. Increasing the gain increases the image noise. Aperture size, shutter speed and gain all
influence the brightness of the image.
The **recording format** usually influences the sharpness of the image due to the compression and should therefore be
selected so that the elements to be extracted are clearly recognizable. The size of the storage media must match the
planned recording duration for the selected recording format. The video recording should be set to full frame
(progressive scan); interlaced recording of sub-fields should not be used.
All camera parameters must be set **manually** so that they do not change during the experiment and thus possibly change
the intrinsic calibration or influence the parameters for marker recognition.
In simple terms, the recommendations can be summarized as follows, even though some of them work against
each other:
- Small angle of view
- High mounting height
- High image resolution
- High refresh rate
- Small aperture size
- Low exposure time
- Low gain
The **[intrinsic calibration](calibration.md)** must be carried out after changing camera parameters like angle of
view, image resolution, aperture size, focus or recording format. Switching the camera on and off, temperature
variations or the transport of the camera can also affect the intrinsic calibration. Therefore, the calibration pattern
should be recorded on site on the day of the experiments after setting the camera parameters. The calibration recordings
are easier to carry out on the ground. The distance of the pattern to the camera must result in a sharp pattern
recording but not lead to a change of focus.
The **[extrinsic calibration](calibration.md)** must be performed after each change to the camera parameters or the
position or orientation of the camera. Therefore, the calibration points should be recorded during the experiments after
the final suspension and alignment. Recalibration may be necessary if the camera has been moved, e.g. by changing the
storage medium, plugging in or unplugging cables or moving the superordinate suspension.
Ideally, the power supply, camera control and, if necessary, data transmission of the cameras should be carried out via
**cable**. Depending on the length of the recording, an internal battery or a connected power bank may also be
sufficient for powering the camera. Any required wireless control of the camera should be verifiable (status lights or a monitor).
When suspending the cameras, it is essential to ensure that they are secured against falling by attaching
**safety ropes**, especially as people will often be walking underneath them.
If possible, the recordings should be backed up twice on site after the experiments. For experiments lasting several
days, it must be ensured that the **backup** can be completed and carried out overnight for all media or sensors.
The video recordings are often also used for a **qualitative review** and evaluation of the experiments. For this
purpose, it can be useful to additionally record other perspectives on the experimental area, e.g. a view covering the
whole experiment. A camera with audio recording is also useful, e.g. for capturing announcements or
acoustic reactions afterwards.
There is a wide range of **camera types** and models. Roughly speaking, they can be divided into the classes of industrial
cameras and camcorders. For industrial cameras, recording is often done on computers connected via cable; for
camcorders, directly at the camera. Industrial cameras have the advantage of being more flexible, because they are
assembled from individual components (camera, lens, cable, storage, computing unit), but on the other hand this makes
them more error-prone (frame drops, connection interruptions). Industrial cameras are easier to synchronize and control
and also allow direct further processing of the data and automation. The storage size is more limited for camcorders,
but the handling is much easier, and for the same video quality camcorders are cheaper.
For scenarios that are not too complex, we suggest using camcorders because of the easier handling and fewer technical
challenges and problems.
:::{toctree}
:maxdepth: 1
camera
calibration
surrounding
combining
workflow
:::