Proposal 01: Cross-Modality
Sept 21
While I was reading a Codex article from the MOD about research on hyperthermia and hypothermia at QinetiQ, an idea popped up: what if we provide an opposing visual signal, giving subjects the opposite suggestion about what they are experiencing? With a synthetic environment, we can easily generate an illusory scene implying that the temperature is not as high as it actually is. How much can visual suggestion alter the body's perception of temperature? I'm not sure. But cross-modal projections do exist.
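To make the manipulation concrete, here is a minimal sketch of how the visual suggestion could be parameterized: the chamber temperature stays fixed while the scene's color grading (a crude proxy for "looks cold" vs. "looks hot") is driven by a suggested temperature. The function and the linear mapping are my own placeholders, not an existing API; the actual psychophysical scaling would be an empirical question.

```python
# Sketch only: a hypothetical per-frame tint for a synthetic scene.
# Nothing here is a real rendering API; the mapping is illustrative.

def temp_to_rgb_tint(suggested_temp_c: float) -> tuple[float, float, float]:
    """Map a suggested ambient temperature (deg C) to an RGB scene tint.

    Cold suggestions push the scene toward blue, warm ones toward
    orange. Linear and purely illustrative.
    """
    # Normalise roughly over a -10..40 C range of suggested temperatures.
    t = max(0.0, min(1.0, (suggested_temp_c + 10.0) / 50.0))
    r = 0.6 + 0.4 * t   # more red as the suggestion warms
    g = 0.8             # hold green constant
    b = 1.0 - 0.5 * t   # less blue as the suggestion warms
    return (r, g, b)

# Example: subject sits in a 35 C chamber while the scene suggests 5 C.
actual_temp_c = 35.0     # physical chamber temperature (fixed)
suggested_temp_c = 5.0   # opposite visual suggestion (independent variable)
tint = temp_to_rgb_tint(suggested_temp_c)
print(f"chamber {actual_temp_c} C, scene tinted {tint} to suggest {suggested_temp_c} C")
```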
Following this thought, I'm thinking more broadly about cross-modal interaction. The facilities in the VR lab equip us to research cross-modal interaction: for example, visual/auditory interaction (we'll need some audio devices), visual/haptic interaction, and motion perception on a treadmill, which involves multisensory processing as well. With the help of VR technologies, all of this research becomes much more convenient than with traditional methods alone.
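As a first pass at how such a study might be structured, here is a sketch of a counterbalanced schedule crossing the physical stimulus with the visual suggestion. All condition names and repetition counts are placeholders, not a finalized design.

```python
# Sketch of a 2x2 design crossing physical stimulus with visual suggestion.
import itertools
import random

physical = ["hot", "cold"]            # actual thermal stimulus in the chamber
visual = ["hot-scene", "cold-scene"]  # what the synthetic environment implies

conditions = list(itertools.product(physical, visual))  # 4 cells, 2 incongruent
trials = conditions * 5               # 5 repetitions per cell (placeholder)
random.shuffle(trials)                # randomize presentation order

for i, (phys, vis) in enumerate(trials, 1):
    congruent = phys == vis.split("-")[0]
    # In the real experiment we would present the stimulus here and log
    # the subject's temperature rating; this just prints the schedule.
    print(f"trial {i:2d}: stimulus={phys:4s} scene={vis:10s} congruent={congruent}")
```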
Back to the thought about hyperthermia and hypothermia: if my assumption holds, we can readily make the synthetic environment deployable to the field by implementing it on a mobile computing device. A mobile device not only makes the virtual environment field-deployable; in fact, any lab without large-scale VR facilities (I'm thinking of the climatic chambers at QinetiQ, but really any lab) could adopt mobile VR now. It does not cost a fortune.