To improve motion graph based motion synthesis, semantic control was introduced. Hybrid motion features, combining numerical features with user-defined semantic relational features, were extracted to encode the characteristic aspects of the character's poses in the given motion sequences. Motion templates were then automatically derived from the training motions to capture the spatio-temporal characteristics of an entire class of semantically related motions. The data streams of motion documents were automatically annotated with semantic motion class labels by matching them against their respective motion class templates. Finally, semantic control was incorporated into motion graph based human motion synthesis. Motion synthesis experiments demonstrate the effectiveness of the approach, which gives users a higher level of semantically intuitive control and yields high-quality human motion synthesized from a motion capture database.
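The annotation step above, matching class templates against a motion data stream, can be sketched as follows. This is a minimal illustration, not the paper's method: the binary relational features, the sliding-window Hamming-style cost, and the `threshold` value are all simplifying assumptions.

```python
import numpy as np

def annotate_stream(features, templates, threshold=0.2):
    """Annotate a motion data stream with semantic class labels.

    features  : (T, F) binary relational feature matrix, one row per frame
    templates : dict mapping class label -> (L, F) class template matrix
    threshold : maximum average per-frame mismatch cost (assumed value)

    Returns a list of (start_frame, end_frame, label) annotations.
    """
    T = len(features)
    annotations = []
    for label, template in templates.items():
        L = len(template)
        for start in range(T - L + 1):
            window = features[start:start + L]
            # average per-frame Hamming-style distance to the template
            cost = np.mean(np.abs(window - template))
            if cost <= threshold:
                annotations.append((start, start + L, label))
    return annotations

# toy example: two relational features over 8 frames
stream = np.array([[0, 0], [0, 0], [1, 0], [1, 1],
                   [1, 1], [1, 0], [0, 0], [0, 0]])
kick_template = np.array([[1, 0], [1, 1], [1, 1], [1, 0]])
print(annotate_stream(stream, {"kick": kick_template}, threshold=0.0))
```

In practice a full system would use class templates learned from many training motions and a warping-tolerant match rather than this rigid sliding window, but the structure of the annotation pass is the same.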
Moving object segmentation is one of the most challenging issues in computer vision. In this paper, we propose a new algorithm for foreground segmentation with a static camera. It combines the Gaussian mixture model (GMM) with the active contours method and produces much better results than conventional background subtraction methods. The algorithm formulates foreground segmentation as an energy minimization problem and minimizes the energy function by curve evolution. Our energy function integrates the GMM background model, a shadow elimination term, and a curve evolution edge-stopping term, achieving more accurate segmentation than existing methods of the same type. Promising results on real images demonstrate the potential of the presented method.
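The background-modeling component can be illustrated with a stripped-down sketch. Note this is a simplification, not the paper's algorithm: it keeps a single running Gaussian per pixel instead of a full mixture, and omits the shadow and curve evolution terms; the learning rate `alpha` and threshold `k` are assumed values.

```python
import numpy as np

class RunningGaussianBackground:
    """Simplified per-pixel background model: one running Gaussian per
    pixel rather than a full mixture (a sketch of the GMM idea)."""

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 15.0 ** 2)
        self.alpha = alpha   # learning rate (assumed value)
        self.k = k           # Mahalanobis-distance threshold (assumed value)

    def apply(self, frame):
        """Return a boolean foreground mask and update the background model."""
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        foreground = d2 > (self.k ** 2) * self.var
        # update the model only where the pixel matches the background
        bg = ~foreground
        self.mean[bg] += self.alpha * (frame[bg] - self.mean[bg])
        self.var[bg] += self.alpha * (d2[bg] - self.var[bg])
        return foreground

# toy usage: a flat background with a bright 2x2 "object"
bg_frame = np.full((4, 4), 50, dtype=np.uint8)
model = RunningGaussianBackground(bg_frame)
frame = bg_frame.copy()
frame[1:3, 1:3] = 200          # object pixels
mask = model.apply(frame)
print(mask.astype(int))
```

In the full method, a mask like this would initialize the curve, and the energy terms (background likelihood, shadow elimination, edge stopping) would then drive the contour toward an accurate object boundary.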
Creating realistic virtual humans has long been a challenging objective in computer science research. This paper describes an integrated framework for modeling virtual humans with a high level of autonomy, which seeks to reproduce believable, human-like behavior and movement in a virtual environment. The framework comprises a visual and auditory information perception module, a decision-network-based behavior decision module, and a hierarchical autonomous motion control module; these cooperate to model realistic autonomous individual behavior for virtual humans in real-time interactive virtual environments. The framework was tested in a simulated virtual environment system, demonstrating its ability to create autonomous, perceptive, and intelligent virtual humans in real time.
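The perceive-decide-act pipeline described above can be sketched in a few lines. Everything here is illustrative: the sensing radius, the object kinds, the utility rules standing in for the decision network, and the behavior-to-motion mapping are all assumptions, not the paper's framework.

```python
def perceive(environment, agent_pos, radius=5.0):
    """Perception stub: return only objects within the sensing radius."""
    return [obj for obj in environment
            if abs(obj["pos"] - agent_pos) <= radius]

def decide(percepts):
    """Toy stand-in for the decision-network module: choose a behavior
    from simple priority rules over perceived objects."""
    if any(o["kind"] == "threat" for o in percepts):
        return "flee"
    if any(o["kind"] == "friend" for o in percepts):
        return "greet"
    return "wander"

def motion_controller(behavior):
    """Hierarchical motion control stub: map a behavior to a motion clip."""
    return {"flee": "run", "greet": "wave", "wander": "walk"}[behavior]

# the threat at distance 20 is outside the sensing radius, so the
# agent only perceives the friend and chooses to greet
env = [{"kind": "friend", "pos": 3.0}, {"kind": "threat", "pos": 20.0}]
behavior = decide(perceive(env, agent_pos=0.0))
print(motion_controller(behavior))
```

The point of the sketch is the module boundary: perception limits what the decision module sees, and the decision module emits an abstract behavior that the motion layer resolves into concrete movement.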