Friday, November 8, 2013

The drones vs. the hackers

The drones don't rely only on GPS signals to locate themselves; they also match the environmental data around them against the virtual-reality map to fix their position, so hackers can't fool them simply by emitting interfering signals. However, hackers may still be able to pass off fake instructions to the drones.
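As a sketch of the idea, a drone can cross-check its claimed GPS fix against landmarks it actually observes with its own sensors; if the claimed position disagrees with the measured ranges, the fix is rejected. The map, landmark names, and tolerance below are hypothetical illustration values, not a real drone API.

```python
# Cross-check a GPS fix against ranges to known landmarks (toy 2D example).
import math

LANDMARK_MAP = {            # known positions from the virtual-reality map
    "tower": (0.0, 0.0),
    "bridge": (100.0, 0.0),
    "silo": (0.0, 100.0),
}

def consistent_with_map(gps_fix, observed_ranges, tolerance=5.0):
    """Return True if the ranges measured by onboard sensors agree with
    the distances the map predicts from the claimed GPS position."""
    for name, measured in observed_ranges.items():
        lx, ly = LANDMARK_MAP[name]
        predicted = math.hypot(gps_fix[0] - lx, gps_fix[1] - ly)
        if abs(predicted - measured) > tolerance:
            return False     # the GPS signal disagrees with the world
    return True

# True position (30, 40): the measured ranges match the map, so it passes...
honest = consistent_with_map((30, 40), {"tower": 50.0, "bridge": 80.6, "silo": 67.1})
# ...while a spoofed fix far from the real position is rejected.
spoofed = consistent_with_map((500, 500), {"tower": 50.0, "bridge": 80.6, "silo": 67.1})
```

The same cross-check generalizes to any environment features the drone can range against, which is why jamming or spoofing the GPS signal alone is not enough.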

Tuesday, July 2, 2013

The completely bionic artificial neural network



The neural-signal scanner is harder to implement than the neural-network scanner. The scanner is one part; the other part is the container, and whatever the state of the scanner's R&D, the container is the easier part to build. We can use today's virtual reality to record the 3D images a future scanner produces and then simulate the real operation of the neural system, only with slower performance. If we want it to run faster, however, the scanned neural network must be turned into hardware. Moreover, the way a completely bionic artificial neural network would host virtual reality differs from today's processor systems: it would run exactly the same way as the neural system in our bodies.




The question is: even if scanning technology one day reaches the resolution of individual neural ion streams, how do we generalize a formula that simulates the next state from a series of scanned states, and how can we be sure it works the same as the original brain?





Record the entire brain map at the highest spatiotemporal resolution possible and for as long as possible; storing it in the cloud first would be the most popular way to commemorate a life in contemporary times. Then, over the long sequence of brain maps, run a regression analysis to find formulas, or train a neural network, that generates the brain map at the next time point, and compare the output with the original record. This would be a lengthy iterative process, continuing until the simulated brain is fairly consistent with the recorded brain maps. As science and technology advance, later generations will have records at finer spatiotemporal resolution, and simulation technology will also keep improving. But this is a kind of clone; the original will still be gone.
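The regression step described above can be sketched on toy data: treat each recorded brain-map frame as a state vector, fit a next-state model by least squares, and measure how well the fitted model reproduces the record. This is purely illustrative; real neural dynamics are of course not linear, and all the values here are invented.

```python
# A toy illustration (not real neuroscience): fit a linear next-state model
# x[t+1] ~= x[t] @ W to a recorded sequence, then compare predictions
# against the original record.
import numpy as np

rng = np.random.default_rng(0)
W_true = np.array([[0.9, 0.1, 0.0, 0.0],        # hidden toy dynamics
                   [-0.1, 0.9, 0.0, 0.0],
                   [0.0, 0.0, 0.5, 0.2],
                   [0.0, 0.0, -0.2, 0.5]])
states = [np.ones(4)]
for _ in range(100):                             # the "recorded brain maps"
    states.append(states[-1] @ W_true + rng.normal(size=4) * 1e-3)

X = np.array(states[:-1])                        # states at time t
Y = np.array(states[1:])                         # states at time t + 1
W_fit, *_ = np.linalg.lstsq(X, Y, rcond=None)    # the regression step
prediction_error = np.abs(X @ W_fit - Y).max()   # compare with the record
```

In the real problem the "iterative process" would mean refining a far richer model class until `prediction_error` over the whole record is acceptably small, which is exactly where the question above about faithfulness to the original brain bites.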









Thursday, June 27, 2013

The Natural User Interface

The NUI (Natural User Interface) for a 2D GUI must first recognize and track the captured 3D image and transform it into UI events, but I don't care about that because I only work on the 3D VR/AR UI. The NUI in a 3D VR/AR UI simply maps the captured 3D image into the virtual reality as a model added to the scene and then lets the models interact by themselves, so no event processing is necessary. In the 3D VR/AR UI, pattern recognition and tracking occur only in the 3D VR/AR space.

Wednesday, June 26, 2013

The Really Real Virtual Reality: virtual-real synchronization system


VR = R ➡️ VR + VR objects = AR 


The ARDOM (Augmented Reality Distributed On Mobiles) maps the real world minute by minute: a large number of users moving on the mobile network cooperate, with radar- or lidar-equipped mobile devices, to scan the real world into a distributed virtual reality. Each pixel of a captured image corresponds to the position (located coordinate + 3D unit vector × radar/lidar distance) in 3+1D space. This 4D space will eventually include our bodies and even our neural signals (our souls), so welcome to The Matrix. The permission-controlled objects in the augmented reality are not all stored in one place; they are distributed and cached only where someone may need them to be presented. Everything occurring in the real world is scanned into the virtual reality, and everything happening in the virtual reality can change the real world too, though not necessarily immediately: changes can be simulated until everything is confirmed to be correct, and then the drones update the real world according to the new, verified virtual reality.
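The pixel-placement rule above (located coordinate + 3D vector × radar/lidar distance) can be written down directly. The function name and the unit-ray model below are illustrative assumptions, not part of any real ARDOM implementation.

```python
# Place one captured pixel into the shared 3D map from the device's
# located coordinate, the pixel's ray direction, and the lidar range.
import numpy as np

def pixel_to_world(device_position, ray_direction, lidar_distance):
    """world point = device_position + unit(ray_direction) * distance"""
    d = np.asarray(ray_direction, dtype=float)
    d = d / np.linalg.norm(d)                 # normalize the pixel's ray
    return np.asarray(device_position, dtype=float) + d * lidar_distance

# A device at (10, 2, 0) ranging 3.5 m straight ahead along +Z:
point = pixel_to_world([10.0, 2.0, 0.0], [0.0, 0.0, 1.0], 3.5)
# point is [10.0, 2.0, 3.5]
```

Repeating this for every pixel of every cooperating device, and stamping each point with its capture time, yields exactly the distributed 3+1D map the paragraph describes.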



Augmented reality here does not only embed virtual objects into scanned real scenes; it also changes the real world, through the drones, according to some of the virtual ones. Viewers watch the virtual reality directly from the Cloud but do not change it; they change the actors in the real world, and the sensors on the actors then update the virtual reality in the Cloud. In pure simulation mode, the actors with sensors are replaced by virtual ones inside the Cloud's virtual reality. The Cloud, however, is not a set of servers; the Cloud is the distribution itself. The actors in the real world are changed by accessing the Cloud, and they are themselves synchronized parts of the Cloud's distribution.

The most basic virtual-real synchronization does not apply a user's action to VR and R at the same time; it syncs only to R and then updates VR directly from the change in R. The reason is that the update to R may fail, in which case VR would have to be restored to its pre-update state.

However, some objects exist in VR but not in R and still need to appear to exist in R, so objects must be moved in VR first to determine how to move them in R. But while propagating user actions to VR and then from VR to R, we may also encounter objects that have not yet been updated from R to VR. So when applying user actions to VR and R at the same time, if either side hits an exception, both must inevitably be restored to their state before the action.
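A minimal sketch of this rollback rule, with a hypothetical `World` class standing in for both the VR copy and the real-world actuators (the names and failure flag are illustration only):

```python
# Apply a user action to VR and to reality; if either side fails,
# restore both to the state before the action.
import copy

class World:
    def __init__(self, state):
        self.state = dict(state)
    def apply(self, key, value, fail=False):
        if fail:
            raise RuntimeError("actuation failed")     # e.g. a drone error
        self.state[key] = value

def synced_update(vr, real, key, value, real_fails=False):
    vr_backup = copy.deepcopy(vr.state)
    real_backup = copy.deepcopy(real.state)
    try:
        vr.apply(key, value)                      # move the object in VR first
        real.apply(key, value, fail=real_fails)   # then actuate reality
    except RuntimeError:
        vr.state, real.state = vr_backup, real_backup   # restore both sides
        return False
    return True

vr, real = World({"door": "closed"}), World({"door": "closed"})
ok = synced_update(vr, real, "door", "open", real_fails=True)
# ok is False, and both worlds are back to {"door": "closed"}
```

This is essentially a tiny two-phase commit across the virtual and real halves of the system; a production design would also need the R-to-VR catch-up path the paragraph mentions.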

As for switching to a simulation mode decoupled from reality, just turn off sensing and actuation toward reality.





The virtual-real synchronization system lets you interact with physical objects anywhere in the world as long as you have access permission. The ultimate form is a screen presenting the scene as if the soul had left the body and instantly moved to the other side of the world, or a God's-eye perspective: from one place you can watch and interact with any place in the world through the screen, which is equivalent to reality and the virtual world always being updated in both directions.



3D LiDAR Technology

SpaceTop 3D interface lets you reach inside your computer screen

Tuesday, June 25, 2013

The ionized-fluid magnetic-guidance surface panel

The airborne special forces bases often ran irregular, unscheduled, unannounced airborne-raid drills against one another. One night, in the middle of the night, my younger brother heard the faint sound frequency of a transport plane's propellers while everyone around him heard nothing and told him he was imagining things; the transport was still far from the base. The base commander, figuring a free drill was not to be wasted, ordered combat readiness, so when the other unit's paratroopers touched the ground, the base troops had long been lying in ambush waiting for them, and they captured every one of them without a shot fired. Propeller and turbine noise comes from collision and vibration with air particles, above all the impact of the airflow on the blade faces along the direction of thrust; electric cars today are silent, but electric aircraft and electric boats still cannot overcome the noise problem. If there were a surface panel that could ionize the fluid at its surface, charging it so that it could be magnetically guided along the surface as a sliding ion flow, it could eliminate drag noise across the whole surface. As the ion stream slides over the magnetically guiding panel, it would also induce a charging current in the unpowered conductive parts of the panel, and after the ion stream leaves the panel it would be electrically neutralized, leaving no ion trail behind.

Tuesday, June 11, 2013

The dynamic network of the mobile spheres

You are given an unlimited number of spheres whose surfaces are fully covered with directional signal cells; use them to design a network. Further, if the spheres are moving dynamically, how do you maintain the network? Design a mechanism for this dynamic network. It could serve communication among far-away planets, and the same architecture could also work on a planet or in nearby space. If we cut a sphere in half, it becomes a hat: establish the network among these ever-moving half-sphere hats without any base station, so that each hat is itself an ever-moving base station in enemy territory, and it must use only directional signals to stay clear of enemy detection. The same mechanism can also be applied to communication among planets in outer space: there is no way to have a fixed base station there either, because the planets are always revolving and rotating, and nondirectional signals cannot be used either, because the planets are so large.



Location-based networking: every node owns a relative-coordinate map created by scanning its neighborhood, and merges in farther areas learned from far nodes through near ones. The more memory a node has, the bigger the map it owns. When a node wants to transfer something to another node, it decides the path according to the relative coordinates and their states on the map.
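A minimal sketch of this scheme: each node keeps a map of relative coordinates, folds in a neighbour's map by shifting that map's origin, and forwards greedily toward the destination's coordinates. All names and coordinates here are illustrative.

```python
# Merge relative-coordinate maps and route greedily toward a destination.
import math

def merge_maps(own_map, neighbour_offset, neighbour_map):
    """Fold a neighbour's relative map into ours by shifting its origin
    to the neighbour's position in our own coordinates."""
    merged = dict(own_map)
    for node, (x, y) in neighbour_map.items():
        merged.setdefault(node, (x + neighbour_offset[0],
                                 y + neighbour_offset[1]))
    return merged

def next_hop(own_map, neighbours, destination):
    """Greedy geographic routing: forward to the neighbour whose mapped
    position is closest to the destination's mapped position."""
    tx, ty = own_map[destination]
    return min(neighbours,
               key=lambda n: math.hypot(own_map[n][0] - tx,
                                        own_map[n][1] - ty))

a_map = {"A": (0, 0), "B": (10, 0)}    # A's own relative map
b_map = {"C": (10, 0)}                 # B knows C at (10, 0) relative to B
a_map = merge_maps(a_map, a_map["B"], b_map)   # A learns C is at (20, 0)
hop = next_hop(a_map, ["B"], "C")      # to reach C, A forwards through B
```

Greedy forwarding can get stuck in local minima on real topologies; the "states on the map" mentioned above are what a fuller mechanism would consult to route around such dead ends.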



Tuesday, April 30, 2013

Smartphone ultrasound imaging

In fact, an ultrasound transducer head could be attached to a mobile phone. The phone has positioning, orientation, and motion sensing, so it can map the scan plane into a 3D virtual reality according to its pose. People could then scan themselves anytime and anywhere, and doctors could examine the results remotely online at any time. Moreover, the phone can compare the continuously captured images and stitch them together automatically, so the subject is not forced to hold the body still.
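The pose-based mapping can be sketched as a rigid transform: each point on the 2D scan plane is lifted into world coordinates using the phone's position and rotation. The pose values below are illustrative, not from any real sensor API.

```python
# Lift a 2D ultrasound-plane sample into 3D coordinates using the
# phone's pose (position + orientation).
import numpy as np

def plane_point_to_3d(phone_position, rotation_matrix, u, v):
    """Map a point (u, v) on the scan plane into world coordinates,
    treating the scan plane as the device's local XY plane."""
    local = np.array([u, v, 0.0])
    return np.asarray(phone_position, dtype=float) + rotation_matrix @ local

# Phone held at (0, 0, 1), rotated 90 degrees about the X axis:
R = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
p = plane_point_to_3d([0.0, 0.0, 1.0], R, 2.0, 3.0)
# p is [2.0, 0.0, 4.0]
```

Applying this to every sample of every captured frame, with the pose read fresh from the motion sensors each time, is what lets the successive 2D scans fall into a single consistent 3D volume for stitching.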

Tuesday, January 8, 2013

The SimCity in the augmented Google Earth synchronized with the real world



Playing SimCity directly on Google Earth is what would make it really interesting,
and it would be more interesting still with large fleets of drones automatically keeping the virtual earth and the real earth synchronized with bidirectional updates.


We only need to design the building in the virtual reality,
and later the drones will construct the building automatically via the synchronized augmented reality.