VR = R ➡️ VR + VR objects = AR

ARDOM (Augmented Reality Distributed On Mobiles) maps the real world minute by minute: as a large number of users move around on the mobile network, their radar- or lidar-equipped mobile devices cooperate to scan the real world into a distributed virtual reality. Each pixel of a captured image is mapped to the position (located device coordinate + 3D direction vector × radar/lidar distance) in 3+1D space. This 4D space will eventually include our bodies and even our neural signals (our souls), so welcome to 《The Matrix》. The permission-controlled objects in the augmented reality are not all stored in one place; they are distributed, and each one is cached only where someone may need it to be presented in the augmented reality. Everything occurring in the real world is scanned into the virtual reality; everything happening in the virtual reality can change the real world too, but not necessarily immediately. It can be simulated until everything is confirmed to be fine and problem-free, and only then do the drones update the real world according to the new, corrected virtual reality.
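As a concrete illustration of that mapping, one captured depth pixel could be projected into a world position roughly like this; the camera intrinsics, pose handling and function name are my own assumptions for the sketch, not ARDOM code, and the fourth (time) coordinate would simply be the frame's capture timestamp:

```python
# A minimal sketch of: position = device coordinate + direction vector * lidar distance.
import numpy as np

def pixel_to_world(u, v, distance, device_position, device_rotation,
                   fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Project pixel (u, v) with a measured radar/lidar distance into world space."""
    # Unit ray through the pixel in camera coordinates (assumed pinhole intrinsics).
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    ray /= np.linalg.norm(ray)
    # Rotate the ray into world coordinates, then scale by the measured distance.
    direction = device_rotation @ ray
    return device_position + direction * distance

# Example: a pixel seen 2.5 m away from a device located at the origin.
point = pixel_to_world(400, 300, 2.5,
                       device_position=np.zeros(3),
                       device_rotation=np.eye(3))
```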


This augmented reality does not only embed virtual objects into the scanned real scenes; it also changes the real world to match some of the virtual objects, using drones. Viewers watch the virtual reality directly from the Cloud but do not change it; instead they change the actors in the real world, and the sensors on those actors then update the virtual reality in the Cloud. In pure simulation mode, the actors with their sensors are replaced by virtual ones inside the virtual reality in the Cloud. However, the Cloud is not a set of servers; the Cloud is the distribution itself. The actors in the real world are changed by accessing the Cloud, and they are themselves synchronized parts of the Cloud distribution.
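Here is a minimal sketch of those roles; the names Cloud, Actor, Sensor and Viewer are my own stand-ins, not any real ARDOM API:

```python
from dataclasses import dataclass, field

@dataclass
class Cloud:
    """The distributed virtual reality; here just a map of object states."""
    objects: dict = field(default_factory=dict)

@dataclass
class Actor:
    """A physical device (robot, drone) that changes the real world."""
    name: str
    state: dict = field(default_factory=dict)

    def actuate(self, command: dict):
        self.state.update(command)             # the real-world change

class Sensor:
    """Watches an actor and synchronizes its real state into the Cloud."""
    def __init__(self, actor: Actor, cloud: Cloud):
        self.actor, self.cloud = actor, cloud

    def sync(self):
        self.cloud.objects[self.actor.name] = dict(self.actor.state)

class Viewer:
    """Watches the virtual reality in the Cloud; never writes it directly."""
    def __init__(self, cloud: Cloud):
        self.cloud = cloud

    def watch(self, name: str) -> dict:
        return dict(self.cloud.objects.get(name, {}))

# A viewer never writes the Cloud: it commands the actor, and the sensor
# brings the change back into the Cloud for everyone to see.
cloud, drone = Cloud(), Actor("drone-1")
sensor, viewer = Sensor(drone, cloud), Viewer(cloud)
drone.actuate({"position": (10.0, 2.0, 5.0)})
sensor.sync()
print(viewer.watch("drone-1"))                 # the Cloud now reflects reality
```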
The most basic virtual-real synchronization does not apply a user's action to both VR and R at the same time; it only applies it to R, and then updates VR directly from the resulting change in R. The reason is that the update to R can fail with an exception, and if VR had already been updated it would have to be restored to its state before the update.
However, some objects that exist in VR but not in R still need to appear to exist in R, so such an object has to be moved in VR first in order to work out how to move it in R. But in the process of applying a user's action to VR and then from VR to R, it may also run into objects that have not yet been updated from R to VR. So when a user's action is applied to VR and R at the same time and either update hits an exception, both must inevitably be restored to their state before the action.
As for switching to a simulation mode decoupled from reality, simply turn off the sensing of, and actuation on, reality.
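The two synchronization modes and the simulation switch described above can be sketched roughly as follows; the class name and the actuate/sense callables are assumptions for illustration only, not an ARDOM implementation:

```python
import copy

class TwinSync:
    def __init__(self, vr_state, actuate, sense, simulation=False):
        self.vr = vr_state            # the virtual-reality copy of the world
        self.actuate = actuate        # callable(change) applying it to R; may raise
        self.sense = sense            # callable() observing the current state of R
        self.simulation = simulation  # True = decoupled from reality

    def apply_basic(self, action: dict):
        """Basic mode: apply to R only, then let VR follow the observed change."""
        if not self.simulation:
            self.actuate(action)          # if this raises, VR was never touched
            self.vr.update(self.sense())  # VR is updated by the change of R
        else:
            self.vr.update(action)        # simulation: VR only, no sensing/actuation

    def apply_dual(self, action: dict):
        """Dual mode: apply to VR and R together; restore both on any failure."""
        snapshot = copy.deepcopy(self.vr)
        try:
            self.vr.update(action)        # e.g. a VR-only object moving first
            if not self.simulation:
                self.actuate(action)
        except Exception:
            self.vr = snapshot            # restore VR to before the action
            if not self.simulation:
                self.actuate(snapshot)    # best-effort restore of R as well
            raise
```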
The NUI (Natural User Interface) for a 2D GUI must first recognize and track the captured 3D image and transform it into UI event(s), but I don't care about that because I only do the 3D VR/AR UI. The NUI in the 3D VR/AR UI simply maps the captured 3D image directly into the virtual reality as a model added to it, and then lets the models interact with each other, so no event processing is necessary. In the 3D VR/AR UI, pattern recognition and tracking happen only in the 3D VR/AR space.
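A toy contrast between the two styles might look like the sketch below; the scene and capture structures are invented stand-ins, not any real engine API:

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    models: list = field(default_factory=list)

    def add_model(self, model):
        self.models.append(model)      # the captured geometry becomes part of VR

    def step(self):
        # Interaction emerges from the models themselves (a toy overlap test),
        # not from a separate UI event queue.
        return [(a["name"], b["name"])
                for i, a in enumerate(self.models)
                for b in self.models[i + 1:]
                if abs(a["x"] - b["x"]) < 0.1]

def nui_for_2d_gui(capture, dispatch):
    """2D-GUI style: recognize the capture and turn it into UI events."""
    if capture.get("gesture") == "tap":
        dispatch("click", capture["x"])

def nui_for_3d_vr_ar(capture, scene: Scene):
    """3D VR/AR style: the capture is added to VR as a model; no events needed."""
    scene.add_model({"name": capture["name"], "x": capture["x"]})
    return scene.step()

scene = Scene()
nui_for_3d_vr_ar({"name": "hand", "x": 0.05}, scene)
print(nui_for_3d_vr_ar({"name": "cup", "x": 0.08}, scene))   # [('hand', 'cup')]
```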


The virtual-real synchronization system lets you interact with physical objects anywhere in the world, as long as you have access permission. The ultimate realm is a whole screen presenting the scene of the soul leaving the body and instantly moving to the other side of the world, or the perspective of God: from one place, you can watch and interact with any place in the world through the screen, which is equivalent to reality and the virtual world always being updated in both directions.
Why does the Distributed Augmented Reality Operating System not need a remote I/O application the way a traditional flat-GUI operating system does?
It accesses remote I/O directly through the shared virtual reality, because everything in the real world is synchronized with the shared virtual reality.
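A rough sketch of that idea (the names and structure are mine, not ARDOM's): a remote device is just an object published into the shared virtual reality, and using its I/O is a permitted read or write on that object rather than a dedicated remote-I/O program:

```python
class SharedVR:
    def __init__(self):
        self.objects = {}   # object_id -> {"state": dict, "readers": set, "writers": set}

    def publish(self, object_id, state, readers, writers):
        self.objects[object_id] = {"state": state,
                                   "readers": set(readers),
                                   "writers": set(writers)}

    def read(self, user, object_id):
        obj = self.objects[object_id]
        if user not in obj["readers"]:
            raise PermissionError(f"{user} may not read {object_id}")
        return dict(obj["state"])

    def write(self, user, object_id, change):
        obj = self.objects[object_id]
        if user not in obj["writers"]:
            raise PermissionError(f"{user} may not write {object_id}")
        obj["state"].update(change)   # the real device follows this synced state

# A lamp on the other side of the world is published into the shared VR;
# turning it on is just a permitted write to its twin.
vr = SharedVR()
vr.publish("lamp/tokyo", {"on": False}, readers={"alice"}, writers={"alice"})
vr.write("alice", "lamp/tokyo", {"on": True})
```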