Simovni 2 Specifications
Context
The context is the problem of the quality of UAP (Unidentified Aerial Phenomenon) observation reports.
A written description is ambiguous. A picture is worth a thousand words. A 3D animation can fully describe the visual and spatio-temporal aspects of an observation.
By simulating physics (even very simplified) or ensuring at least some internal consistency in the trajectories, the simulation can help test/reject hypotheses.
Purpose
Obtain better and more elaborate estimates of the observation parameters.
Motivations
- The availability, as of 2021, of standalone (untethered) Head Mounted Displays (HMDs) for Virtual Reality (VR) and Augmented Reality (AR) on the consumer market. These new HMDs do not even need a fully controlled environment: no "reference bases" or pattern targets need to be placed around.
- The availability of free frameworks for the development of applications targeting these devices, namely Unity 3D.
- All the reasons that motivated the invention of the original SIMOVNI.
Some of the issues of the SIMOVNI are solved, some new ones appear, and many things become possible that were not before.
Solved with some HMDs
- An unobstructed view solves the underestimation of angular sizes that occurs when looking through an opening (the eyepiece of a refractor or binoculars, for example)
The new or remaining issues with HMDs are developed below.
We expect better and more elaborate parameter estimates with such a tool than with the basic methods used by investigators.
Philosophy
KISS (Keep It Simple Stupid) and remain pragmatic.
One of our goals in this development is to over-specify and explore the theoretical and practical potential of the technology, yet limit ourselves to the simplest developments, proceed progressively, and evolve the software based on field results.
We thus also define a minimum implementation.
Before doing anything, we need to be sure we can obtain something of interest (see limitations).
Estimates
List
- Orientation of the UAP : 2 operating modes.
- Absolute (*)
- Billboard always facing the witness
- Billboard turned : that is, an optional post re-orientation is applied to the billboard orientation
- Distance (*)
- Size : 2 operating modes
- Size (*)
- Angular size
- Color(s)
- Apparent Luminosity (*)
- Level of blurriness
- Position of the UAP : 2 operating modes.
- Direction relative to the witness (*) (azimuth and angular height)
- Position relative to the witness (cartesian coordinates, X,Y,Z)
- Shape & Surface state / Texture
- Evolution over time of all the estimates
- Trajectory as well as all the other parameters : May be done using key points.
Extracted/Computed data
- Angular speed
- Angular size
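These derived quantities are straightforward to compute from the recorded data. A hedged sketch in Python (function and variable names are ours, not part of any existing codebase; directions are taken as unit vectors):

```python
import math

def angular_size_deg(size_m, distance_m):
    """Apparent angular size (in degrees) of an object of a given
    physical size (m) seen at a given distance (m)."""
    return math.degrees(2.0 * math.atan(size_m / (2.0 * distance_m)))

def angular_speed_deg_per_s(dir_a, dir_b, dt_s):
    """Angular speed (deg/s) between two unit direction vectors
    recorded dt_s seconds apart."""
    dot = sum(a * b for a, b in zip(dir_a, dir_b))
    dot = max(-1.0, min(1.0, dot))  # clamp against rounding errors
    return math.degrees(math.acos(dot)) / dt_s

# Sanity check: the Moon (~3474 km across at ~384400 km) subtends ~0.52 deg.
moon_deg = angular_size_deg(3474.0, 384400.0)
```

Directions recorded as azimuth/angular height would first be converted to unit vectors before the angular-speed computation.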
Additional optional information displayed ?
- At observation time :
- Main Stars
- Planets, Moon, Sun
- Satellites
- Planes
- Alt/Az grid
Additional setup parameter
- North Direction (*) : to be measured/re-aligned on site in order to obtain absolute geo-aligned azimuths.
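Applying the measured North offset is then a one-line correction. A minimal sketch (names are illustrative):

```python
def absolute_azimuth_deg(raw_yaw_deg, north_offset_deg):
    """Turn the HMD's raw yaw into a geo-aligned azimuth, given the
    North offset measured on site (both in degrees)."""
    return (raw_yaw_deg + north_offset_deg) % 360.0
```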
Quality
For each piece of information that can be quantified as a number, it is informative to obtain min and max estimates from the witness, at a 100% confidence level as well as at some lower confidence level such as 50%, together with the single value in which the witness is most confident.
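Such a quantified estimate could be stored as one small record per parameter. A sketch of one possible structure (field names are ours, not part of any existing format):

```python
from dataclasses import dataclass

@dataclass
class Estimate:
    """One numeric parameter as estimated by the witness: bounds at
    two confidence levels plus the single most-confident value."""
    best: float      # value the witness is most confident in
    min_100: float   # bounds the witness is 100% sure of
    max_100: float
    min_50: float    # tighter bounds at ~50% confidence
    max_50: float

    def contains(self, value):
        """True if the value lies within the 100% confidence bounds."""
        return self.min_100 <= value <= self.max_100
```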
Normalization
During the session, have the witness estimate from memory the characteristics of some known reference objects, typically the Moon (size and luminosity).
HMD Types
These standalone HMDs can be placed into 2 categories. 3DOF and 6DOF (degrees of freedom). All use an IMU (Inertial Measurement Unit).
- 3DOF category : they estimate only their orientation, using a gyroscope + accelerometer (IMU) and a magnetometer. They are cheaper.
- 6DOF category : they fully locate themselves in space by use of gyroscope + accelerometer (IMU) and magnetometer AND by “looking” at the environment (using many cameras looking all around).
- In principle, all 6DOF HMDs should be able to run in a 3DOF degraded mode.
- Some of these HMDs can track the hands. This in itself is a small revolution. For instance, if the user points in a direction with their index finger, the HMD can display a virtual laser extending the finger to infinity, and can thus extract the absolute direction being pointed at.
- Some of these HMDs can track the eyes. The accuracy is not very high (0.5°), but with some tricks it should be possible to reach higher accuracies.
For our application, we do not always need full 6DOF tracking; 3DOF tracking is often enough, because witnesses quite often do not move (translate) during the observation.
To name a few (6DOF unless otherwise stated) :
- VR
- Oculus Rift
- Oculus Quest 2
- HP Reverb
- Samsung Gear VR type (3DOF category)
- AR
- Hololens
- Hololens 2
- Magic Leap
Expected limitations
Many of the characteristics of current HMDs make them actually worse than a basic optical system like the SimOvni.
Due to the technologies involved, all HMDs have several of these issues :
- Very limited dynamic range of the brightness of virtual objects displayed
- cannot display objects of very low luminosity
- cannot display objects of high luminosity
- Resolution still below human eye resolution, but we are getting really close
- as an example : the Oculus Quest 2 has a resolution of about 20 pix/°, to be compared with the 1 arc minute (60 pix/°) of the 20/20 human eye.
- Vergence-Accommodation Conflict : unfortunately, almost all the HMDs on the market are designed to display virtual images at a fixed distance from the user (let's say 2 m). That is, the optical system creates a virtual image that is seen sharply only if the user accommodates at that 2 m distance. For 3D objects not located 2 meters from the user's eyes in virtual space, this produces a vergence-accommodation conflict in the user's visual system. For example, when the HMD displays a 3D object "at infinity", the eyes are oriented parallel but must still accommodate at 2 meters (while the brain/eyes expect infinity). I don't know if studies have estimated the impact, but it may alter the witness's estimations.
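The resolution comparison above reduces to dividing the panel's horizontal pixel count by its horizontal field of view. A rough sketch (the Quest 2 figures are approximate assumptions):

```python
def pixels_per_degree(h_pixels, h_fov_deg):
    """Rough angular resolution of an HMD panel, in pixels per degree."""
    return h_pixels / h_fov_deg

# Assumed figures: ~1832 horizontal pixels per eye over a ~90 deg FOV.
quest2_ppd = pixels_per_degree(1832, 90.0)   # roughly 20 pix/deg
eye_ppd = 60.0                               # 20/20 eye resolves ~1 arc minute
```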
Still tracking in low light ? I do not currently know the minimum luminosity of the environment needed for the HMD to track its pose in space accurately without too much noise. This remains to be estimated.
More general tracking limitations : what about tracking in a moving car or train ? This is even more challenging for the HMD. It may be able to track the inside, but not the outside, and certainly not both at the same time.
Using Virtual Reality HMDs (head mounted displays)
- Very limited dynamic range for the display of the environment : unable to show a dark sky, especially with LCD screens. OLED screens can do much better.
- This is not a limitation per se, but using VR HMDs implies that some modeling of the environment be done before a session with the witness.
- As a minimum, this typically involves shooting a 360° photographic panorama, plus quite some work to create the transparency layer for the sky. This is quite time consuming in situations where the sky is cluttered, with trees for example.
- The fact that such “modeling” is not needed in augmented reality is one of the advantages of AR over VR.
Using Augmented Reality HMDs
All HMDs for augmented reality have the advantage of removing the need for the modeling of the environment. Only the phenomenon observed by the witness has to be modeled.
See through HMDs
- Bad color rendition of the virtual objects : optical systems that use waveguide technology suffer from terrible color distortions. By terrible I mean that white objects actually appear pink, greenish or reddish, and in a non-uniform way.
- Limited field of view : the most advanced standalone see-through HMD, the Hololens 2, only has a 35° horizontal field of view. Fortunately, few UAP cases involve UAPs seen under angles larger than 35°.
- Very good resolution for Hololens 2: 47 pix/°
- Quasi perfect environment display : by design.
Video pass-through HMDs (limitations in addition to those of Virtual Reality HMDs)
- Bad resolution for the display of the environment, due to the limited resolution of the video cameras. The video resolutions of AR HMDs are much lower than those of consumer cameras because AR requires frame rates higher than 60 fps. Here we are talking about 17 pix/°.
- Cameras unable to work in low light : expect big video noise in the dark.
- Very limited dynamic range for the display of the environment. Video pass-through HMDs combine the limitations of their cameras and of their displays: they are unable to show a dark sky, especially with LCD screens. OLED screens can do much better but cannot compensate for camera noise. In principle, if the environment does not move much, denoising algorithms could be applied, but I doubt this behavior comes free with the HMD, and I don't expect to be able, or to have the time, to develop this myself.
Suitability
Can the HMDs simulate the expected characteristics ?
- Orientation of UAP
- no problem
- Size | Angular size : star-like up to 160°
- Star-Like : we are limited by the resolution.
How star-like does a single lit pixel look ?
- Big things : we are limited by the field of view of the displays, which can be as low as 35° or as high as 90°.
- Distance
- We deal with UAPs at distances of more than 6 meters. Beyond that distance there is no accommodation conflict issue, provided the HMD projects the virtual image at infinity.
- Color
- The color gamut depends on the display technology. Ultraviolet and deep red colors cannot be rendered well, but that is not a huge problem as long as the witness can point out the discrepancy.
- Shape
- OK. About any shape can be simulated
- Surface state / Texture
- The display system is not limiting here. The difficulty is our ability to know what the witness saw; the limitation is the witness's ability to describe it. The human eye may be able to discriminate some additional characteristics of the light (polarization), but we won't go that far.
- Apparent Luminosity: From the brightness of a star/satellite to a blinding electrical arc ?
- That would be one of the most interesting parameters to measure accurately.
- Question : how contrasted and luminous are these HMD screens ? What is the luminosity of a pixel ?
- How do they compare to a mag -1 star or the Moon ? Because this parameter is not calibrated at all (not even per product), a calibration file per product is needed, and we will have to produce it. This is a unique need; it makes this project innovative.
- Level of blurriness
- no problem
- Direction relative to the witness : Altitude, Azimuth
- no problem.
- Evolution over time of all the estimates
- The display frame rates are quite high (90 Hz+). High-speed changes can be simulated; the HMD is not the limiting factor.
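The luminosity calibration raised above could take the form of a per-product file mapping displayed luminance to apparent magnitude, using the standard astronomical magnitude relation. A sketch (the calibration values below are hypothetical):

```python
import math

def apparent_magnitude(luminance, ref_luminance, ref_magnitude):
    """Apparent magnitude of a displayed point source, given one
    calibrated reference point for this HMD model: the display
    luminance that was matched to a source of known magnitude."""
    return ref_magnitude - 2.5 * math.log10(luminance / ref_luminance)

# Hypothetical calibration: a luminance of 100 (arbitrary units) was
# matched to a mag -1 star; 10x that luminance then reads as mag -3.5.
```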
Setup
proposal
Phase 1 : Investigation. As in any investigation, the witness provides the investigator with a description of the UAP. If it makes sense, the witness should make a drawing (color pencils) of the UAP, along with all details pertaining to a 3D simulation.
Phase 2 : 3D modeling, 3D shape animation, 3D trajectory, 3D scene preparation
Phase 3 : On site with the witness, in similar conditions if possible.
While the witness wears the HMD, the investigator remotely controls some of what the witness sees. This may be done with a smartphone or a computer.
The parameters should be tunable in real time by the investigator and/or the witness.
Static Parameters
Unfortunately, we are not able to update these in real time; it would be too complex :
- Shape
- Surface state / Texture
By this, you should understand that one cannot change these parameters in real time beyond the dynamics described in phase 1 and modeled in phase 2.
witness side parameters
- Orientation of UAP :
- In Mode 1 : absolute Yaw, Pitch, Roll.
- In Mode 2 : billboard, always facing witness. No control.
- Direction relative to the witness : Altitude, Azimuth. This is done by looking at or pointing the HMD in the desired direction.
- Color ? How ?
investigator side parameters & controls
parameters
The investigator can control all the parameters.
- Orientation of UAP :
- In Mode 1 : absolute Yaw, Pitch, Roll. (*)
- In Mode 2 : billboard, always facing the witness. Yaw and Pitch are not controlled but computed in real time; Roll can be controlled.
- Direction relative to the witness : Altitude, Azimuth (*)
- Color ? How ?
and also
- North Direction (*)
- Selection of Orientation mode : 1 or 2.
- Level of blurriness
- Distance. Can be arbitrary if it is unknown. If arbitrary, the distance should be set to a fixed value of more than 25 m (so that stereo parallax is invisible), but not so far that it goes beyond the far clipping plane of the rendering engine. (*)
- Size : 2 operating modes. Which is best ? It depends on the case. The origin of the 3D model should be defined close to its "center of gravity". The bounding box of the model at scale 1 is what is used for these computations. Since size is a somewhat ill-defined concept, the parameter controlled at low level is the scale of the 3D model.
- Size : the size considered is the longest side of the bounding box as seen from the witness. The parameter used for interpolation is that max size in meters; the scale to apply to the model is computed accordingly. (*)
- Angular Size : the maximum angular size over the 3 axes as seen from the witness is what is used. What is used internally as the interpolation parameter is the size (as defined just above) that produces that angular size. This is not very satisfactory.
- Apparent Luminosity (*)
- 3D animation for the evolution of the Shape & Surface state / Texture; animation time : anim_time
- 8 free parameters (additional undifferentiated parameters for any use, floating point numbers) (*)
Representing the angular size internally as a size at a distance (50 m by default, for example) allows swapping between the two modes. The angular size can then be computed at any time; make a function for it.
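The internal representation suggested above amounts to one conversion function. A sketch, assuming the 50 m default reference distance (the names are illustrative):

```python
import math

REF_DISTANCE_M = 50.0  # default reference distance (an assumption here)

def size_from_angular(angular_deg, distance_m=REF_DISTANCE_M):
    """Physical size (m) that subtends the given angle (deg) at the
    given distance (m) -- the internal representation of angular size."""
    return 2.0 * distance_m * math.tan(math.radians(angular_deg) / 2.0)
```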
dynamics
- addition/removal of key points (in order to control the evolution over time of all the parameters).
- Each time a new key point N is inserted, one should specify the list of parameters for which this key point plays a role. All the other parameters will remain interpolated linearly from N-1 to N+1.
- Each time a key point is removed, the user is first reminded of the parameters it played a role for.
- Does the user set a time for each key point ? No. Rather, only a duration for each segment (in seconds); the time of each key point is computed from these.
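The timing rule above (durations per segment, key-point times computed) and the linear interpolation between key points can be sketched as follows (names are illustrative):

```python
def keypoint_times(segment_durations_s, t0=0.0):
    """Absolute time of each key point computed from the per-segment
    durations set by the user (N segments -> N+1 key points)."""
    times = [t0]
    for d in segment_durations_s:
        times.append(times[-1] + d)
    return times

def interpolate(t, times, values):
    """Linear interpolation of one parameter between key points."""
    if t <= times[0]:
        return values[0]
    if t >= times[-1]:
        return values[-1]
    for i in range(len(times) - 1):
        if times[i] <= t <= times[i + 1]:
            u = (t - times[i]) / (times[i + 1] - times[i])
            return values[i] + u * (values[i + 1] - values[i])
```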
play the simulation
- start, pause, restart, forward, backward (or a slider) for the time parameter (key)
If there is a 3D animation for the evolution of the shape of the UAP, it is typically under the control of a single anim_time parameter.
A save button should save the current values of the parameters and key points to a basic text file in JSON format, as a list of structures, using https://github.com/zanders3/json/
All parameters use SI ("Système International") units; degrees for angles.
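The actual implementation would use the C# library linked above inside Unity; this Python sketch only illustrates the intended file layout, a list of structures in a plain JSON text file (field names are hypothetical):

```python
import json

def save_session(path, parameters, key_points):
    """Save the current parameter values and key points as a plain
    JSON text file (SI units, degrees for angles)."""
    data = {"parameters": parameters, "key_points": key_points}
    with open(path, "w") as f:
        json.dump(data, f, indent=2)
```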
Practicality
Bibliography
Psychology of perception
The problem of the apparent size of the Moon, according to Camille Flammarion : "Comment voyez-vous la lune grosse ?" ("How large do you see the Moon?"), La Nature, No. 29, 20 December 1873, pages 38 to 40; a copy is available on Dominique Caudron's website. http://oncle-dom.fr/paranormal/ovni/ufologie/temoins/taille_lune.htm
On “The power of methods guided by recognition rather than description in the reconstruction of a fleeting event” http://www.project1947.com/shg/symposium/shepard.html & https://rr0.org/time/1/9/6/8/07/29/Symposium/Shepard/ To be interpreted with caution.
SIMOVNI history
Pierre Lagrange : "A propos des prétendus aspects psychologiques et sociologiques des témoignages d'observation d'ovnis" ("On the alleged psychological and sociological aspects of UFO sighting testimonies"). Appendix devoted to the history of the SIMOVNI. https://www.cnes-geipan.fr/sites/default/files/29_LAGRANGE_full.pdf
The paper trail shows Dominique Caudron was the first in France to build a practical device around 1974.